Deploy a custom inference workload

This section explains how to create a custom inference workload via the Run:ai UI.

An inference workload provides the setup and configuration needed to deploy your trained model for real-time or batch predictions. It includes specifications for the container image, data sets, network settings, and resource requests required to serve your models.

The inference workload is assigned to a project and is affected by the project’s quota.

To learn more about the inference workload type in NVIDIA Run:ai and to determine whether it is the most suitable workload type for your goals, see Workload types.

Before you start

  • Make sure you have created a project or have one created for you.

  • Make sure Knative is properly installed by your administrator.
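
If you have access to the cluster, you can quickly verify the Knative Serving installation before submitting the workload. This is a minimal check that assumes the default knative-serving namespace; your administrator may have installed it differently.

    # Verify that the Knative Serving control-plane pods are running
    # (assumes the default "knative-serving" namespace).
    kubectl get pods -n knative-serving

    # Confirm the Knative Service CRD is registered on the cluster.
    kubectl get crd services.serving.knative.dev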

Note

  • Flexible workload submission – Disabled by default. If unavailable, your Administrator must enable it under General Settings → Workloads → Flexible Workload Submission.

  • Inference type – Disabled by default. If unavailable, your Administrator must enable it under General Settings → Workloads → Models.

  • GPU memory limit – Disabled by default. If unavailable, your Administrator must enable it under General Settings → Resources → GPU Resource Optimization.

  • Tolerations – Disabled by default. If unavailable, your Administrator must enable it under General Settings → Workloads → Tolerations.

  • Data volumes – Disabled by default. If unavailable, your Administrator must enable it under General Settings → Workloads → Data volumes.

Workload priority class

By default, inference workloads in NVIDIA Run:ai are assigned the inference priority class which is non-preemptible. This behavior ensures that inference workloads, which often serve real-time or latency-sensitive traffic, are guaranteed the resources they need and will not be disrupted by other workloads. For more details, see Workload priority class control.
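
If you have cluster access, you can list the priority classes defined on the cluster as a quick reference. The exact names and values installed by NVIDIA Run:ai may vary by version, so treat this as a sanity check rather than an authoritative list.

    # List the priority classes available on the cluster, including any
    # installed by NVIDIA Run:ai (names and values may vary by version).
    kubectl get priorityclasses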

Submission form options

You can create a new workload using either the Flexible or Original submission form. The Flexible submission form offers greater customization and is the recommended method. Within the Flexible form, you have two options:

  • Load from an existing setup - You can select an existing setup to populate the workload form with predefined values. While the Original submission form also allows you to select an existing setup, with the Flexible submission you can customize any of the populated fields for a one-time configuration. These changes will apply only to this workload and will not modify the original setup. If needed, you can reset the configuration to the original setup at any time.

  • Provide your own settings - Manually fill in the workload configuration fields. This is a one-time setup that applies only to the current workload and will not be saved for future use.

Note

The Original submission form will be deprecated in a future release.

Creating a custom inference workload

  1. To create an inference workload, go to Workload manager → Workloads.

  2. Click +NEW WORKLOAD and select Inference from the drop-down menu.

  3. Within the new form, select the cluster and project. To create a new project, click +NEW PROJECT and refer to Projects for a step-by-step guide.

  4. Select a preconfigured template or select Start from scratch to launch a new workload quickly.

  5. Enter a unique name for the workload. If the name already exists in the project, you will be prompted to enter a different name.

  6. Under Submission, select Flexible or Original and click CONTINUE.

Setting up an environment

Load from existing setup

  1. Click the load icon. A side pane appears, displaying a list of available environments. Select an environment from the list.

  2. Optionally, customize any of the environment’s predefined fields as shown below. The changes will apply to this workload only and will not affect the selected environment.

  3. Alternatively, click the + icon in the side pane to create a new environment. For step-by-step instructions, see Environments.

Provide your own settings

Manually configure the settings below as needed. The changes will apply to this workload only.

Configure environment

  1. Add the Image URL or update the URL of the existing setup.

  2. Set the condition for pulling the image by selecting the image pull policy. It is recommended to pull the image only if it's not already present on the host.

  3. Set an inference serving endpoint. The connection protocol and the container port are defined within the environment:

    • Select HTTP or gRPC and enter a container port.

    • Modify who can access the endpoint:

      • By default, Public is selected, giving everyone within the network access to the endpoint with no authentication.

      • If you select All authenticated users, access is given to everyone in the organization’s account who can log in (to Run:ai or via SSO).

      • For Specific group(s), enter group names as they appear in your identity provider. You must be a member of one of the listed groups to have access to the endpoint.

      • For Specific user(s), enter a valid email address or username. If you remove yourself, you will lose access to the endpoint.

  4. Set the connection for your tool(s). If you are loading from an existing setup, the tools are configured as part of the environment.

    • Select the connection type - External URL or NodePort:

      • Auto generate - A unique URL / port is automatically created for each workload using the environment.

      • Custom URL / Custom port - Manually define the URL or port. For custom port, make sure to enter a port between 30000 and 32767. If the node port is already in use, the workload will fail and display an error message.

    • Modify who can access the tool:

      • By default, All authenticated users is selected, giving access to everyone within the organization’s account.

      • For Specific group(s), enter group names as they appear in your identity provider. You must be a member of one of the groups listed to have access to the tool.

      • For Specific user(s), enter a valid email address or username. If you remove yourself, you will lose access to the tool.

  5. Set the command and arguments for the container running the workload. If no command is added, the container will use the image’s default command (entry-point); the sketch after this list shows one way to check it.

    • Modify the existing command or click +COMMAND & ARGUMENTS to add a new command.

    • Set multiple arguments separated by spaces, using the following format: --arg1=val1 --arg2=val2.

  6. Set the environment variable(s):

    • Modify the existing environment variable(s) if you are loading from an existing setup. The existing environment variables may include instructions to guide you with entering the correct values.

    • To add a new variable, click + ENVIRONMENT VARIABLE.

    • You can either select Custom to define your own variable, or choose from a predefined list of Secrets or ConfigMaps.

  7. Enter a path pointing to the container's working directory.

  8. Set where the UID, GID, and supplementary groups for the container should be taken from. If you select Custom, you’ll need to manually enter the UID, GID and Supplementary groups values.

  9. Select additional Linux capabilities for the container from the drop-down menu. This grants certain privileges to a container without granting all the root user's privileges.
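
If you are unsure whether you need to override the command, you can check the image's default entry-point and command locally before filling in this form. This is a minimal sketch: the image name is a placeholder, and it assumes you have Docker (or a compatible tool) available.

    # Check an image's default entry-point and command before deciding whether
    # to override them (the image name is a placeholder).
    docker pull registry.example.com/my-model-server:1.2
    docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' registry.example.com/my-model-server:1.2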

Setting up compute resources

Load from existing setup

  1. Click the load icon. A side pane appears, displaying a list of available compute resources. Select a compute resource from the list.

  2. Optionally, customize any of the compute resource's predefined fields as shown below. The changes will apply to this workload only and will not affect the selected compute resource.

  3. Alternatively, click the + icon in the side pane to create a new compute resource. For step-by-step instructions, see Compute resources.

Provide your own settings

Manually configure the settings below as needed. The changes will apply to this workload only.

Configure compute resources

  1. Set the number of GPU devices per pod (physical GPUs).

  2. Set the GPU memory per device using either a fraction of a GPU device’s memory (% of device) or a GPU memory unit (MB/GB):

    • Request - The minimum GPU memory allocated per device. Each pod in the workload receives at least this amount per device it uses.

    • Limit - The maximum GPU memory allocated per device. Each pod in the workload receives at most this amount of GPU memory for each device it uses. This is disabled by default; to enable it, see Before you start.

  3. Set the CPU compute per pod by choosing the unit (cores or millicores):

    • Request - The minimum amount of CPU compute provisioned per pod. Each running pod receives this amount of CPU compute.

    • Limit - The maximum amount of CPU compute a pod can use. Each pod receives at most this amount of CPU compute. By default, the limit is set to Unlimited which means that the pod may consume all the node's free CPU compute resources.

  4. Set the CPU memory per pod by selecting the unit (MB or GB):

    • Request - The minimum amount of CPU memory provisioned per pod. Each running pod receives this amount of CPU memory.

    • Limit - The maximum amount of CPU memory a pod can use. Each pod receives at most this amount of CPU memory. By default, the limit is set to Unlimited which means that the pod may consume all the node's free CPU memory resources.

  5. Set extended resource(s):

    • Enable Increase shared memory size to allow the shared memory size available to the pod to increase from the default 64MB to the node's total available memory or the CPU memory limit, if set above.

    • Click +EXTENDED RESOURCES to add resource/quantity pairs. For more information on how to set extended resources, see the Extended resources and Quantity guides.

  6. Set the minimum and maximum number of replicas to be scaled up and down to meet the changing demands of inference services:

    • If the minimum and maximum number of replicas differ, autoscaling will be triggered and you'll need to set conditions for creating a new replica. A new replica will be created every time a condition is met. When a condition is no longer met after a replica was created, the replica will be automatically deleted to save resources.

    • Select one of the variables to set the conditions for creating a new replica. The variable's value is monitored via the container's port, and the value you set is the threshold at which autoscaling is triggered.

  7. Set when the replicas should be automatically scaled down to zero. This allows compute resources to be freed up when the model is inactive (i.e., no requests are being sent). When automatic scaling to zero is enabled, the minimum number of replicas set in the previous step automatically changes to 0.

  8. Set the order of priority for the node pools on which the Scheduler tries to run the workload. When a workload is created, the Scheduler will try to run it on the first node pool on the list. If the node pool doesn't have free resources, the Scheduler will move on to the next one until it finds one that is available:

    • Drag and drop them to change the order, remove unwanted ones, or reset to the default order defined in the project.

    • Click +NODE POOL to add a new node pool from the list of node pools that were defined on the cluster. To configure a new node pool and for additional information, see Node pools.

  9. Select a node affinity to schedule the workload on a specific node type. If the administrator added a ‘node type (affinity)’ scheduling rule to the project/department, then this field is mandatory. Otherwise, entering a node type (affinity) is optional. Nodes must be tagged with a label that matches the node type key and value.

  10. Click +TOLERATION to allow the workload to be scheduled on a node with a matching taint. Select the operator and the effect:

    • If you select Exists, the effect will be applied if the key exists on the node.

    • If you select Equals, the effect will be applied if both the key and the value you set match the key and value on the node. The commands after this list show how to check the labels and taints on the cluster's nodes.
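
To fill in the node affinity and toleration fields, it helps to know which labels and taints already exist on the cluster's nodes. This is a minimal sketch, assuming you have read access to the cluster's nodes.

    # List node taints to see which tolerations the workload needs.
    kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'

    # List node labels to check the node type (affinity) key and value.
    kubectl get nodes --show-labels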

Setting up data & storage

Note

  • If Data volumes are not enabled, Data & storage appears as Data sources only, and no data volumes will be available. To enable Data volumes, see Before you start.

  • Original - If you are using the Original submission form, this step covers Volumes, Data sources and Data volumes (if applicable).

Load from existing setup

  1. Click the load icon. A side pane appears, displaying a list of available data sources/volumes. Select a data source/volume from the list.

  2. Optionally, customize any of the data source's predefined fields as shown below. The changes will apply to this workload only and will not affect the selected data source.

  3. Alternatively, click the + icon in the side pane to create a new data source/volume. For step-by-step instructions, see Data sources or Data volumes.

Provide your own settings

Manually configure the settings below as needed. The changes will apply to this workload only.

Note: Secrets, ConfigMaps and Data volumes cannot be added as a one-time configuration.

Configure data sources

  1. Click the + icon and choose the data source from the drop-down menu. You can add multiple data sources.

  2. Once selected, set the data origin according to the required fields and enter the container path to set the data target location. For Git, select a Secret. This option is relevant for private repositories and is based on existing secrets that were created for the scope.

  3. Select Volume to allocate a storage space to your workload that is persistent across restarts:

    • Set the Storage class to None or select an existing storage class from the list. To add new storage classes, and for additional information, see Kubernetes storage classes. If the administrator defined the storage class configuration, the rest of the fields will appear accordingly. The command after this list shows how to check which classes exist on the cluster.

    • Select one or more access mode(s) and define the claim size and its units.

    • Select the volume mode. If you select Filesystem (default), the volume will be mounted as a filesystem, enabling the use of directories and files. If you select Block, the volume is exposed as block storage, which can be formatted or used directly by applications without a filesystem.

    • Set the Container path with the volume target location.
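
If you are unsure which storage class to select, you can list the classes defined on the cluster; the entry marked (default) is used when no class is specified. This assumes you have read access to the cluster.

    # List the storage classes available on the cluster and check which one is
    # marked as the default.
    kubectl get storageclass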

Setting up general settings

Note

The following general settings are optional.

  1. Set annotation(s). Kubernetes annotations are key-value pairs attached to the workload. They are used for storing additional descriptive metadata to enable documentation, monitoring and automation.

  2. Set label(s). Kubernetes labels are key-value pairs attached to the workload. They are used for categorizing workloads to enable querying.
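
As a usage example, labels make the workload's pods easy to query with standard Kubernetes tooling. The label key and value below are placeholders for whatever you set in this step.

    # List pods carrying a label attached to the workload
    # (the key/value pair is a placeholder for your own label).
    kubectl get pods --all-namespaces -l team=nlp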

Completing the workload

  1. Before finalizing your workload, review your configurations and make any necessary adjustments.

  2. Click CREATE INFERENCE.

Managing and monitoring

After the workload is created, it is added to the Workloads table, where it can be managed and monitored.

Rolling inference updates

Note

Rolling inference update via the UI is supported only for workloads created using the Flexible submission form.

When deploying models and running inference workloads, you may need to update the workload configuration in real-time without disrupting critical services. Rolling inference updates allow you to submit changes to an existing inference workload, regardless of its current status (running, pending, etc.).

To update an inference workload, select the workload and click UPDATE. Only the settings listed below can be modified.

Supported updates

You can update various aspects of an inference workload, for example:

  • Container image – Deploy a new model version.

  • Configuration parameters – Modify command arguments and/or environment variables.

  • Compute resources – Adjust resources to optimize performance.

  • Replica count and scaling policy – Adapt to changing workload demands.

Throughout the update process, the workload remains operational, ensuring uninterrupted access for consumers (e.g., interacting with an LLM).

Update process

When an inference workload is updated, a new revision of the pod(s) is created based on the updated specification.

  • Multiple updates can be submitted in succession, but only the latest update takes effect—previous updates are ignored.

  • Once the new revision is fully deployed and running, traffic is redirected to it.

  • The original revision is then terminated, and its resources are released back to the shared pool.

GPU quota considerations

To successfully complete an inference workload update, the project must have sufficient free GPU quota. For example:

  • Existing workload - The current inference workload is running with 3 replicas. Assuming each replica uses 1 GPU, the project is currently consuming 3 GPUs from its quota. For clarity, we'll refer to this as Revision 1.

  • Updated workload - The workload is updated to use 8 replicas, which requires 8 additional GPUs during the update process. These GPUs must be available in the project's quota before the update can begin. Once the update is complete and the new revision is running, the 3 GPUs used by Revision 1 are released.
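
In this example, the project's quota must therefore accommodate 3 + 8 = 11 GPUs at the peak of the update; once Revision 1 is terminated, consumption drops to the 8 GPUs used by the new revision.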

Monitoring updates in the UI

In the UI, the Workloads table displays the configuration of the latest submitted update. For example, if you change the container image, the image column will display the name of the updated image.

The status of the workload continues to reflect the operational state of the service the workload exposes. For instance, during an update, the workload status remains "Running" if the service is still being delivered to consumers. Hovering over the workload's status in the grid displays the phase message for the update, offering additional insight into the update's state.

Timeout and resource allocation

  • As long as the update process is not complete, GPUs are not allocated to the replicas of the new revision. This prevents idle GPUs from being allocated, so other workloads are not deprived of them. This behavior relies on the Knative handling described below.

  • If the update process is not completed within the default time limit of 10 minutes, it will automatically stop. At that point, all replicas of the new revision will be removed, and the original revision will continue to run normally.

  • This default time limit is configurable. Consider setting a longer duration if your workload needs extended time to pull a large image, takes additional time to reach a 'READY' state due to a long initialization process, or if your cluster relies on autoscaling to allocate resources for new replicas. For example, to set the time limit to 30 minutes, run the following command:

    kubectl patch ConfigMap config-deployment -n knative-serving --type='merge' -p '{"data": {"progress-deadline": "1800s"}}'
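
    To confirm the change, you can read the value back from the ConfigMap:

    kubectl get configmap config-deployment -n knative-serving -o jsonpath='{.data.progress-deadline}'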

Inference workloads with Knative

Starting in version 2.19, all pods of a single Knative revision are grouped under a single pod-group. This means that when a new Knative revision is created:

  • It either succeeds in allocating the minimum number of pods; or

  • It fails, moves into a pending state, and retries later to allocate all pods with their resources.

The resources (GPUs, CPUs) are not occupied by a new Knative revision until it succeeds in allocating all pods. The older revision pods are then terminated and release their resources (GPUs, CPUs) back to the cluster to be used by other workloads.
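
If you have access to the workload's namespace, you can watch revisions roll over during an update with kubectl. The namespace below is a placeholder; revision names are generated by Knative.

    # List the Knative revisions in the workload's namespace and check which
    # one is ready (replace the namespace with your project's namespace).
    kubectl get revisions.serving.knative.dev -n runai-my-project

    # Inspect the Knative service to see how traffic is split between revisions.
    kubectl get ksvc -n runai-my-project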

Using CLI

To view the available actions, see the inference workload CLI v2 reference.

Using API

  • To view the available actions for creating an inference workload, see the Inferences API reference. A rough request sketch follows this list.

  • To view the available actions for rolling an inference update, see the Update inference spec API reference.
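
As a rough sketch of the API flow, the request below creates an inference workload with a minimal spec. Treat the endpoint path, field names, and values as assumptions for illustration only; the authoritative request schema, required fields, and authentication details are in the Inferences API reference. RUNAI_URL and RUNAI_TOKEN are placeholders for your control-plane URL and a valid API bearer token.

    # Hedged sketch only: the exact endpoint path and payload schema are defined
    # in the Inferences API reference and may differ from this example.
    curl -X POST "$RUNAI_URL/api/v1/workloads/inferences" \
      -H "Authorization: Bearer $RUNAI_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
            "name": "my-inference",
            "projectId": "<project-id>",
            "clusterId": "<cluster-id>",
            "spec": { "image": "registry.example.com/my-model-server:1.2" }
          }'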
