Cluster System Requirements

After provisioning the tenant’s Kubernetes cluster, the next step is to install the required system components to support the NVIDIA Run:ai platform.

The NVIDIA Run:ai cluster is deployed as a Kubernetes application. This section outlines the minimum hardware and software requirements that must be installed and configured on each tenant cluster before deploying the NVIDIA Run:ai components.

Hardware Requirements

The following hardware requirements apply to the Kubernetes cluster nodes. By default, all NVIDIA Run:ai cluster services run on all available nodes. For production deployments, you may want to set node roles to separate system nodes from worker nodes, reduce downtime, and avoid consuming CPU cycles on expensive GPU machines.

Architecture

  • x86 - Supported for Kubernetes and OpenShift.

  • ARM - Supported for Kubernetes and OpenShift.

NVIDIA Run:ai Cluster - System Nodes

This configuration is the minimum requirement for installing and using the NVIDIA Run:ai cluster.

Component          Required Capacity
CPU                10 cores
Memory             20GB
Disk space         50GB


Note

To designate nodes for NVIDIA Run:ai system services, follow the instructions in System nodes.

NVIDIA Run:ai Cluster - Worker Nodes

The NVIDIA Run:ai cluster supports x86 and ARM CPUs, and any NVIDIA GPUs supported by the NVIDIA GPU Operator. The list of supported GPUs depends on the version of the NVIDIA GPU Operator installed in the cluster. NVIDIA Run:ai supports GPU Operator versions 25.3 to 25.10.

For the list of supported GPUs, see Supported NVIDIA Data Center GPUs and Systems. To install the GPU Operator, see NVIDIA GPU Operator.


Note

NVIDIA DGX Spark and NVIDIA Jetson are not supported.

The following configuration represents the minimum hardware requirements for installing and operating the NVIDIA Run:ai cluster on worker nodes. Each node must meet these specifications:

Component          Required Capacity
CPU                2 cores
Memory             4GB


Note

To designate nodes for NVIDIA Run:ai workloads, follow the instructions in Worker nodes.

Software Requirements

The following software requirements must be fulfilled on the Kubernetes cluster.

Operating System

  • Any Linux operating system supported by both Kubernetes and NVIDIA GPU Operator

  • NVIDIA Run:ai cluster on Google Kubernetes Engine (GKE) supports both Ubuntu and Container Optimized OS (COS). COS is supported only with NVIDIA GPU Operator 24.6 or newer, and NVIDIA Run:ai cluster version 2.19 or newer.

  • NVIDIA Run:ai cluster on Elastic Kubernetes Service (EKS) does not support Bottlerocket or Amazon Linux.

  • NVIDIA Run:ai cluster on Oracle Kubernetes Engine (OKE) supports only Ubuntu.

  • Internal tests are performed on Ubuntu 22.04 and, for OpenShift, on CoreOS.

Kubernetes Distribution

NVIDIA Run:ai cluster requires Kubernetes. The following Kubernetes distributions are supported:

  • Vanilla Kubernetes

  • OpenShift Container Platform (OCP)

  • NVIDIA Base Command Manager (BCM)

  • Rancher Kubernetes Engine 2 (RKE2)


Note

  • The latest release of the NVIDIA Run:ai cluster supports Kubernetes 1.33 to 1.35 and OpenShift 4.17 to 4.20.

  • For Multi-Node NVLink support (e.g. GB200), Kubernetes 1.32 and above is required.

For existing Kubernetes clusters, see the following Kubernetes version support matrix for the latest NVIDIA Run:ai cluster releases:

NVIDIA Run:ai version    Supported Kubernetes versions    Supported OpenShift versions
v2.22                    1.31 to 1.33                     4.15 to 4.19
v2.23                    1.31 to 1.34                     4.16 to 4.19
v2.24 (latest)           1.33 to 1.35                     4.17 to 4.20

For managed Kubernetes services, consult the release notes provided by your Kubernetes service provider to confirm the specific version of the underlying Kubernetes platform that is supported and to ensure compatibility with NVIDIA Run:ai. For an up-to-date end-of-life statement, see Kubernetes Release History or OpenShift Container Platform Life Cycle Policy.

Container Runtime

NVIDIA Run:ai supports the following container runtimes. Make sure your Kubernetes cluster is configured with one of these runtimes:

Kubernetes Pod Security Admission

NVIDIA Run:ai supports the restricted policy for Pod Security Admission (PSA) on OpenShift only. Other Kubernetes distributions are supported only with the privileged policy.

For NVIDIA Run:ai on OpenShift to run with PSA restricted policy:

  • Workloads submitted through NVIDIA Run:ai must comply with the restrictions of the PSA restricted policy. This can be enforced using Policies.

NVIDIA Run:ai Namespace

NVIDIA Run:ai must be installed in a namespace (Kubernetes) or project (OpenShift) named runai. Use the following to create the namespace/project:
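A minimal example using standard tooling (kubectl on Kubernetes, oc on OpenShift):

    # Kubernetes: create the runai namespace
    kubectl create namespace runai

    # OpenShift: create the runai project
    oc new-project runai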

Kubernetes Ingress Controller

NVIDIA Run:ai cluster requires a Kubernetes Ingress Controller to be installed on the Kubernetes cluster.

  • Make sure that a default ingress controller is set.

  • OpenShift and RKE2 come with a pre-installed ingress controller.

There are many ways to install and configure different ingress controllers. The following is a simple example of installing and configuring the HAProxy ingress controller using helm:
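A minimal sketch, assuming the HAProxy Technologies Helm chart and an ingress-controller namespace; the chart name, repository URL, and values may differ in your environment:

    # Add the HAProxy Technologies Helm repository and install the ingress controller
    helm repo add haproxytech https://haproxytech.github.io/helm-charts
    helm repo update
    helm install kubernetes-ingress haproxytech/kubernetes-ingress \
      --create-namespace --namespace ingress-controller
    # If required, mark the resulting IngressClass as the cluster default
    # (see the chart's values for the exact setting).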

Fully Qualified Domain Name (FQDN)

You must have a Fully Qualified Domain Name (FQDN) to install the NVIDIA Run:ai cluster (for example, runai.mycorp.local). This cannot be an IP address. The domain name must be accessible inside the organization's private network.

Wildcard FQDN for Inference

To make inference serving endpoints available outside the cluster, configure a wildcard DNS record (*.runai-inference.mycorp.local) that resolves to the cluster's public IP address, or to the cluster's load balancer IP address in on-premises environments. This ensures each inference workload receives a unique subdomain under the wildcard domain.

TLS Certificates

  • Kubernetes - You must have a TLS certificate that is associated with the FQDN for HTTPS access. Create a Kubernetes Secret named runai-cluster-domain-tls-secret in the runai namespace and include the path to the TLS --cert and its corresponding private --key by running the command shown after this list.

  • OpenShift - NVIDIA Run:ai uses the OpenShift default Ingress router for serving. The TLS certificate configured for this router must be issued by a trusted CA. For more details, see the OpenShift documentation on configuring certificates.
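For the Kubernetes case, a minimal sketch assuming the certificate and key files are available at the placeholder paths shown:

    # Create the TLS secret referenced by the NVIDIA Run:ai cluster installation
    kubectl create secret tls runai-cluster-domain-tls-secret \
      --namespace runai \
      --cert /path/to/fullchain.pem \
      --key /path/to/private-key.pem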

Wildcard TLS Certificate - Inference

  • Kubernetes - For serving inference endpoints over HTTPS, NVIDIA Run:ai requires a dedicated wildcard TLS certificate that matches the fully qualified domain name (FQDN) used for inference. This certificate ensures secure external access to inference workloads; see the example after this list.

  • OpenShift - A wildcard TLS certificate for inference is not required. OpenShift Routes handle TLS termination for inference endpoints using the platform’s built-in routing and certificate management.
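For the Kubernetes case, a minimal sketch; the secret name and namespace below are placeholders, not names defined by this document, so use the values expected by your NVIDIA Run:ai configuration:

    # Hypothetical example: store the wildcard certificate for *.runai-inference.mycorp.local
    kubectl create secret tls runai-inference-wildcard-tls \
      --namespace runai \
      --cert /path/to/wildcard-fullchain.pem \
      --key /path/to/wildcard-private-key.pem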

NVIDIA GPU Operator

The NVIDIA Run:ai cluster requires the NVIDIA GPU Operator to be installed on the Kubernetes cluster. GPU Operator versions 25.3 to 25.10 are supported.


Note

For Multi-Node NVLink support (e.g. GB200), GPU Operator 25.3 and above is required.

See Installing the NVIDIA GPU Operator, and note the following:

  • Use the default gpu-operator namespace. Otherwise, you must specify the target namespace using the flag runai-operator.config.nvidiaDcgmExporter.namespace as described in customized cluster installation.

  • NVIDIA drivers may already be installed on the nodes. In such cases, use the NVIDIA GPU Operator flag --set driver.enabled=false. DGX OS is one such example, as it comes bundled with NVIDIA drivers.

  • For additional distribution-specific instructions, see below:

OpenShift Container Platform (OCP)

The Node Feature Discovery (NFD) Operator is a prerequisite for the NVIDIA GPU Operator in OpenShift. Install the NFD Operator using the Red Hat OperatorHub catalog in the OpenShift Container Platform web console. For more information, see Installing the Node Feature Discovery (NFD) Operatorarrow-up-right.

Rancher Kubernetes Engine 2 (RKE2)

Before installing the GPU Operator, verify that the host OS requirements are met. Then, install the operator.

When installing GPU Operator v25.3, update the Helm values file as follows:
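The exact values are version-specific and are not reproduced in this document. As a sketch, RKE2 clusters typically point the GPU Operator toolkit at RKE2's containerd paths; the values below come from the upstream GPU Operator RKE2 guidance, not from this document, so verify them against your GPU Operator release:

    # values.yaml snippet commonly used for RKE2 (assumption; confirm for your GPU Operator version)
    toolkit:
      env:
        - name: CONTAINERD_CONFIG
          value: /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
        - name: CONTAINERD_SOCKET
          value: /run/k3s/containerd/containerd.sock
        - name: CONTAINERD_RUNTIME_CLASS
          value: nvidia
        - name: CONTAINERD_SET_AS_DEFAULT
          value: "true"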

For troubleshooting information, see the NVIDIA GPU Operator Troubleshooting Guide.

NVIDIA Network Operator

When deploying on clusters with RDMA or Multi-Node NVLink-capable nodes (e.g. B200, GB200), the NVIDIA Network Operator is required to enable high-performance networking features such as GPUDirect RDMA in Kubernetes. Network Operator versions v24.4 and above are supported.

The Network Operator works alongside the NVIDIA GPU Operator to provide:

  • NVIDIA networking drivers for advanced network capabilities.

  • Kubernetes device plugins to expose high‑speed network hardware to workloads.

  • Secondary network components to support network‑intensive applications.

The Network Operator must be installed and configured as follows:

  1. Configure SR-IOV InfiniBand support as detailed in Network Operator Deployment with an SR-IOV InfiniBand Network.

NVIDIA Dynamic Resource Allocation (DRA) Driver

When deploying on clusters with Multi-Node NVLink (e.g. GB200), the NVIDIA DRA driver is required to enable Dynamic Resource Allocation at the Kubernetes level. To install it, follow the instructions in Configure and Helm-install the driver. DRA driver versions 25.3 to 25.8 are supported.

After the DRA driver is installed, update runaiconfig using the GPUNetworkAccelerationEnabled=True flag to enable GPU network acceleration. This triggers an update of the NVIDIA Run:ai workload-controller deployment and restarts the controller. See Advanced cluster configurations for more details.

Prometheus


Note

Installing Prometheus applies to Kubernetes only.

NVIDIA Run:ai cluster requires Prometheus to be installed on the Kubernetes cluster.

There are many ways to install Prometheus. As a simple example, to install the community Kube-Prometheus Stack using helm, run the following commands:
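A minimal sketch using the prometheus-community Helm repository; the release name and monitoring namespace are placeholders:

    # Add the community Helm repository and install kube-prometheus-stack
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/kube-prometheus-stack \
      --namespace monitoring --create-namespace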

Additional Software Requirements

Additional NVIDIA Run:ai capabilities, such as distributed training and inference, require additional Kubernetes applications (frameworks) to be installed on the cluster.

Distributed Training

Distributed training enables training AI models across multiple nodes. This requires installing a distributed training framework on the cluster. The supported frameworks are TensorFlow, PyTorch, XGBoost, JAX, and MPI.

There are several ways to install each framework. A simple installation method is the Kubeflow Training Operator, which includes support for TensorFlow, PyTorch, XGBoost, and JAX.

It is recommended to use Kubeflow Training Operator v1.9.2, and MPI Operator v0.6.0 or later for compatibility with advanced workload capabilities, such as Stopping a workload and Scheduling rules.

  • To install the Kubeflow Training Operator for the TensorFlow, PyTorch, XGBoost, and JAX frameworks, run the command shown in the example after this list.

  • To install the MPI Operator for MPI v2, run the command shown in the example after this list.
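A sketch based on the upstream Kubeflow installation instructions; the release tags below match the versions recommended above, but verify them against the projects' release pages:

    # Kubeflow Training Operator (TensorFlow, PyTorch, XGBoost, JAX)
    kubectl apply --server-side -k "github.com/kubeflow/training-operator.git/manifests/overlays/standalone?ref=v1.9.2"

    # MPI Operator (MPI v2)
    kubectl apply --server-side -f https://raw.githubusercontent.com/kubeflow/mpi-operator/v0.6.0/deploy/v2beta1/mpi-operator.yaml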


Note

If you require both the MPI Operator and Kubeflow Training Operator, follow the steps below:

  • Install the Kubeflow Training Operator as described above.

  • Disable and delete MPI v1 in the Kubeflow Training Operator by running the commands sketched after this list.

  • Install the MPI Operator as described above.
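A sketch of one way to do this; the deployment name, namespace, --enable-scheme flags, and CRD name are assumptions based on upstream Kubeflow defaults and may differ in your installation:

    # Limit the Training Operator to non-MPI job schemes (assumes the upstream --enable-scheme flag)
    kubectl patch deployment training-operator -n kubeflow --type='json' \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/args","value":["--enable-scheme=tfjob","--enable-scheme=pytorchjob","--enable-scheme=xgboostjob","--enable-scheme=jaxjob"]}]'

    # Remove the MPI v1 CRD created by the Training Operator so the MPI Operator can own MPIJob
    kubectl delete crd mpijobs.kubeflow.org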

Inference

Inference enables serving of AI models. This requires the Knative Serving framework to be installed on the cluster. Knative versions 1.11 to 1.18 are supported.

Follow the Installing Knative instructions, or run the commands in the example below:
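A minimal sketch that installs Knative Serving with the Kourier networking layer; replace <version> with a supported Knative release (for example, knative-v1.18.0). The manifest URLs follow the upstream Knative release layout:

    # Install Knative Serving CRDs and core components
    kubectl apply -f https://github.com/knative/serving/releases/download/<version>/serving-crds.yaml
    kubectl apply -f https://github.com/knative/serving/releases/download/<version>/serving-core.yaml

    # Install the Kourier networking layer and set it as the ingress for Knative
    kubectl apply -f https://github.com/knative/net-kourier/releases/download/<version>/kourier.yaml
    kubectl patch configmap/config-network -n knative-serving --type merge \
      -p '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'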

Once installed, follow the steps below; a consolidated example of the commands and manifests appears after the list:

  1. Create the knative-serving namespace:

  2. Create a YAML file named knative-serving.yaml and replace the placeholder FQDN with your wildcard inference domain (for example, runai-inference.mycorp.local):

  3. Apply the changes:

  4. Configure HAProxy to proxy requests to Kourier / Knative and handle TLS termination using the wildcard certificate. Create a YAML file named knative-ingress.yaml and replace the FQDN placeholders with your wildcard inference domain:

  5. Apply the changes:
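The manifests for these steps are not reproduced in this document. The following consolidated sketch shows one possible layout, assuming the domain is set through Knative's config-domain ConfigMap, Kourier is reachable through the kourier service in the kourier-system namespace, the IngressClass is named haproxy, and the wildcard TLS secret exists in the same namespace as the Ingress. Adjust all names to your environment:

    # Step 1: create the knative-serving namespace (it may already exist after installing serving-core)
    kubectl create namespace knative-serving --dry-run=client -o yaml | kubectl apply -f -

    # Steps 2 and 3: register the wildcard inference domain with Knative via the config-domain ConfigMap
    cat <<'EOF' > knative-serving.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-domain
      namespace: knative-serving
    data:
      runai-inference.mycorp.local: ""
    EOF
    kubectl apply -f knative-serving.yaml

    # Steps 4 and 5: route wildcard inference traffic through HAProxy to Kourier with TLS termination
    # (the wildcard TLS secret must exist in the same namespace as this Ingress)
    cat <<'EOF' > knative-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: knative-inference-ingress
      namespace: kourier-system
    spec:
      ingressClassName: haproxy
      tls:
        - hosts:
            - "*.runai-inference.mycorp.local"
          secretName: runai-inference-wildcard-tls
      rules:
        - host: "*.runai-inference.mycorp.local"
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: kourier
                    port:
                      number: 80
    EOF
    kubectl apply -f knative-ingress.yaml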

Autoscaling

NVIDIA Run:ai allows autoscaling a deployment according to the following metrics:

  • Latency (milliseconds)

  • Throughput (requests/sec)

  • Concurrency (requests)

Using a custom metric (for example, latency) requires installing the Kubernetes Horizontal Pod Autoscaler (HPA). Use the command below to install it, and make sure to update {VERSION} with a supported Knative version.
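A sketch based on the Knative Serving release artifacts; it assumes the HPA autoscaling extension manifest (serving-hpa.yaml) shipped with your Knative release:

    # Replace {VERSION} with a supported Knative release, for example knative-v1.18.0
    kubectl apply -f https://github.com/knative/serving/releases/download/{VERSION}/serving-hpa.yaml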

Distributed Inference

NVIDIA Run:ai supports distributed inference (multi-node) deployments using the Leader Worker Set (LWS). To enable this capability, you must install the LWS Helm chart on your cluster:
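A sketch based on the upstream LWS installation instructions; the chart location and version are assumptions, so check the LWS releases page for current values:

    # Install the LeaderWorkerSet (LWS) controller from its OCI Helm chart
    helm install lws oci://registry.k8s.io/lws/charts/lws \
      --version 0.6.1 \
      --namespace lws-system --create-namespace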
