# Cluster System Requirements

The NVIDIA Run:ai cluster is a Kubernetes application. This section describes the hardware and software requirements for the NVIDIA Run:ai cluster.

The system requirements depend on where the control plane and cluster are installed. The following applies to **Kubernetes only**:

* If you are installing the first cluster and control plane on the same Kubernetes cluster, [Kubernetes ingress controller](#kubernetes-ingress-controller) and [Fully Qualified Domain Name](#fully-qualified-domain-name-fqdn) **are not required**.
* If you are installing the first cluster and control plane on separate Kubernetes clusters, the [Kubernetes ingress controller](#kubernetes-ingress-controller) and [Fully Qualified Domain Name](#fully-qualified-domain-name-fqdn) **are required**.

## Hardware Requirements

The following hardware requirements are for the Kubernetes cluster nodes. By default, all NVIDIA Run:ai cluster services run on all available nodes. For production deployments, you may want to set [node roles](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/advanced-setup/node-roles) to separate system and worker nodes, reduce downtime, and save CPU cycles on expensive GPU machines.

### Architecture

* **x86** - Supported for both Kubernetes and OpenShift deployments.
* **ARM** - Supported for Kubernetes only. ARM is currently not supported for OpenShift.

### NVIDIA Run:ai Cluster - System Nodes

The following configuration is the minimum required to install and use the NVIDIA Run:ai cluster.

| Component  | Required Capacity |
| ---------- | ----------------- |
| CPU        | 10 cores          |
| Memory     | 20GB              |
| Disk space | 50GB              |

{% hint style="info" %}
**Note**

To designate nodes to NVIDIA Run:ai system services, follow the instructions as described in [System nodes](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/advanced-setup/node-roles#system-nodes).
{% endhint %}

### NVIDIA Run:ai Cluster - Worker Nodes

The NVIDIA Run:ai cluster supports x86 and ARM CPUs, and any NVIDIA GPUs supported by the NVIDIA GPU Operator. The list of supported GPUs depends on the version of the NVIDIA GPU Operator installed in the cluster. NVIDIA Run:ai supports GPU Operator versions 24.9 to 25.3.

For the list of supported GPU models, see [Supported NVIDIA Data Center GPUs and Systems](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html#supported-nvidia-data-center-gpus-and-systems). To install the GPU Operator, see [NVIDIA GPU Operator](#nvidia-gpu-operator).

{% hint style="info" %}
**Note**

NVIDIA DGX Spark and NVIDIA Jetson are not supported.
{% endhint %}

The following configuration represents the minimum hardware requirements for installing and operating the NVIDIA Run:ai cluster on worker nodes. Each node must meet these specifications:

| Component | Required Capacity |
| --------- | ----------------- |
| CPU       | 2 cores           |
| Memory    | 4GB               |

{% hint style="info" %}
**Note**

To designate nodes to NVIDIA Run:ai workloads, follow the instructions as described in [Worker nodes](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/advanced-setup/node-roles#worker-nodes).
{% endhint %}

### Shared Storage

NVIDIA Run:ai workloads must be able to access data uniformly from any worker node, both to read training data and code and to save checkpoints, weights, and other machine-learning artifacts.

Typical options are Network File System (NFS) or network-attached storage (NAS). The NVIDIA Run:ai cluster supports both; for more information, see [Shared storage](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/procedures/shared-storage).
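
As an illustration only (the linked Shared storage guide is authoritative), an existing NFS export can be surfaced to workloads through a PersistentVolume and a matching PersistentVolumeClaim. The server address, export path, names, and sizes below are placeholders:

```yaml
# Illustrative NFS-backed shared storage; replace server, path, names, and sizes with your own values
apiVersion: v1
kind: PersistentVolume
metadata:
  name: runai-shared-data
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany            # allows every worker node to mount the volume simultaneously
  nfs:
    server: nfs.mycorp.local   # placeholder NFS server
    path: /export/runai        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: runai-shared-data
  namespace: runai             # or the namespace your workloads run in
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind directly to the pre-created PersistentVolume
  resources:
    requests:
      storage: 500Gi
```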

## Software Requirements

The following software requirements must be fulfilled on the Kubernetes cluster.

### Operating System

* Any **Linux** operating system supported by both Kubernetes and NVIDIA GPU Operator
* NVIDIA Run:ai cluster on Google Kubernetes Engine (GKE) supports both Ubuntu and Container Optimized OS (COS). COS is supported only with NVIDIA GPU Operator 24.6 or newer, and NVIDIA Run:ai cluster version 2.19 or newer.
* NVIDIA Run:ai cluster on Elastic Kubernetes Service (EKS) does not support Bottlerocket or Amazon Linux.
* NVIDIA Run:ai cluster on Oracle Kubernetes Engine (OKE) supports only Ubuntu.
* Internal tests are performed on **Ubuntu 22.04** (Kubernetes) and **CoreOS** (OpenShift).

### Kubernetes Distribution

#### NVIDIA-Certified Distributions

NVIDIA Run:ai cluster requires Kubernetes. The following Kubernetes distributions are supported:

* Vanilla Kubernetes
* OpenShift Container Platform (OCP)
* NVIDIA Base Command Manager (BCM)
* Elastic Kubernetes Engine (EKS)
* Google Kubernetes Engine (GKE)
* Azure Kubernetes Service (AKS)
* Oracle Kubernetes Engine (OKE)
* Rancher Kubernetes Engine (RKE1)
* Rancher Kubernetes Engine 2 (RKE2)

{% hint style="info" %}
**Note**

* The latest release of the NVIDIA Run:ai cluster supports **Kubernetes 1.31 to 1.33** and **OpenShift 4.15 to 4.19**.
* For [Multi-Node NVLink](https://run-ai-docs.nvidia.com/self-hosted/2.22/platform-management/aiinitiatives/resources/using-gb200) support (e.g. GB200), Kubernetes 1.32 and above is required.
{% endhint %}

For existing Kubernetes clusters, see the following Kubernetes version support matrix for the latest NVIDIA Run:ai cluster releases:

| NVIDIA Run:ai version | Supported Kubernetes versions | Supported OpenShift versions |
| --------------------- | ----------------------------- | ---------------------------- |
| v2.17                 | 1.27 to 1.29                  | 4.12 to 4.15                 |
| v2.18                 | 1.28 to 1.30                  | 4.12 to 4.16                 |
| v2.19                 | 1.28 to 1.31                  | 4.12 to 4.17                 |
| v2.20                 | 1.29 to 1.32                  | 4.14 to 4.17                 |
| v2.21                 | 1.30 to 1.32                  | 4.14 to 4.18                 |
| v2.22 (latest)        | 1.31 to 1.33                  | 4.15 to 4.19                 |

For managed Kubernetes services, consult your provider's release notes to confirm which underlying Kubernetes versions are supported and to ensure compatibility with NVIDIA Run:ai. For an up-to-date end-of-life statement, see [Kubernetes Release History](https://kubernetes.io/releases/) or [OpenShift Container Platform Life Cycle Policy](https://access.redhat.com/support/policy/updates/openshift).

#### Partner-Certified Distributions

The following Kubernetes distributions are **partner-certified**. They are tested and validated by the partner, who is responsible for maintaining compatibility with NVIDIA Run:ai:

* VMware vSphere Kubernetes Service (VKS)
* Crusoe Managed Kubernetes (CMK)
* Mirantis k0rdent

See the following Kubernetes version support matrix for the NVIDIA Run:ai cluster releases:

| Kubernetes distribution                                              | NVIDIA Run:ai version | Supported Kubernetes versions |
| -------------------------------------------------------------------- | --------------------- | ----------------------------- |
| VMware vSphere Kubernetes Service (VKS)                              | v2.22                 | 1.33                          |
| Crusoe Managed Kubernetes (CMK)                                      | v2.22                 | 1.33                          |
| [Mirantis k0rdent](https://catalog.k0rdent.io/v1.7.0/apps/runai-cp/) | v2.22                 | 1.32 to 1.33                  |

### Container Runtime

NVIDIA Run:ai supports the following [container runtimes](https://kubernetes.io/docs/setup/production-environment/container-runtimes/). Make sure your Kubernetes cluster is configured with one of these runtimes:

* [Containerd](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd) (default in Kubernetes)
* [CRI-O](https://cri-o.io/) (default in OpenShift)

### Kubernetes Pod Security Admission

NVIDIA Run:ai supports the `restricted` policy for [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) (PSA) on OpenShift only. Other Kubernetes distributions are supported only with the `privileged` policy.

To run NVIDIA Run:ai on OpenShift with the PSA `restricted` policy:

* Label the `runai` namespace as described in [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) with the following labels:

```
pod-security.kubernetes.io/audit=privileged
pod-security.kubernetes.io/enforce=privileged
pod-security.kubernetes.io/warn=privileged
```

* Workloads submitted through NVIDIA Run:ai should comply with the restrictions of the PSA `restricted` policy. This can be enforced using Policies.
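
For example, the namespace labels above can be applied in one command (a sketch using `oc`; `kubectl label` works identically):

```bash
# Apply the PSA labels listed above to the runai namespace
oc label namespace runai \
    pod-security.kubernetes.io/audit=privileged \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/warn=privileged \
    --overwrite
```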

### NVIDIA Run:ai Namespace

NVIDIA Run:ai must be installed in a namespace (Kubernetes) or project (OpenShift) named `runai`. Use the following command to create the namespace/project:

{% tabs %}
{% tab title="Kubernetes" %}

```bash
kubectl create ns runai
```

{% endtab %}

{% tab title="OpenShift" %}

```bash
oc new-project runai
```

{% endtab %}
{% endtabs %}

### Kubernetes Ingress Controller

NVIDIA Run:ai cluster requires [Kubernetes Ingress Controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) to be installed on the Kubernetes cluster.

* OpenShift, RKE, and RKE2 come with a pre-installed ingress controller.
* Internal tests are performed on NGINX, Rancher NGINX, OpenShift Router, and Istio.
* Make sure that a default ingress controller is set; see the example below.
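
For example, if NGINX is installed but not marked as the default, the standard `ingressclass.kubernetes.io/is-default-class` annotation can be used to set it (a sketch, assuming your IngressClass is named `nginx`):

```bash
# Mark the nginx IngressClass as the cluster default; the IngressClass name may differ in your cluster
kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" --overwrite
```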

There are many ways to install and configure different ingress controllers. The following are simple examples of installing and configuring the NGINX ingress controller using [helm](https://helm.sh/):

<details>

<summary>Vanilla Kubernetes</summary>

Run the following commands:

* For cloud deployments, both the **internal IP** and **external IP** are required.
* For on-prem deployments, only the **external IP** is needed.

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade -i nginx-ingress ingress-nginx/ingress-nginx \
    --namespace nginx-ingress --create-namespace \
    --set controller.kind=DaemonSet \
    --set controller.service.externalIPs="{<INTERNAL-IP>,<EXTERNAL-IP>}" # Replace <INTERNAL-IP> and <EXTERNAL-IP> with the internal and external IP addresses of one of the nodes
```

</details>

<details>

<summary>Managed Kubernetes (EKS, GKE, AKS)</summary>

Run the following commands:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace nginx-ingress --create-namespace
```

</details>

<details>

<summary>Oracle Kubernetes Engine (OKE)</summary>

Run the following commands:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.service.annotations."oci\.oraclecloud\.com/load-balancer-type"=nlb \
    --set controller.service.annotations."oci-network-load-balancer\.oraclecloud\.com/is-preserve-source"=True \
    --set controller.service.annotations."oci-network-load-balancer\.oraclecloud\.com/security-list-management-mode"=None \
    --set controller.service.externalTrafficPolicy=Local \
    --set controller.service.annotations."oci-network-load-balancer\.oraclecloud\.com/subnet"=<SUBNET-ID> # Replace <SUBNET-ID> with the ID of one of your cluster's subnets
```

</details>

### Fully Qualified Domain Name (FQDN)

{% hint style="info" %}
**Note**

Fully Qualified Domain Name applies to Kubernetes only.
{% endhint %}

You must have a Fully Qualified Domain Name (FQDN) to install the NVIDIA Run:ai cluster (for example, `runai.mycorp.local`). This cannot be an IP address. The domain name must be accessible inside the organization's private network.

#### Wildcard FQDN for Inference <a href="#wildcard-fqdn-for-inference" id="wildcard-fqdn-for-inference"></a>

In order to make inference serving endpoints available externally to the cluster, configure a wildcard DNS record (`*.runai-inference.mycorp.local`) that resolves to the cluster’s public IP address, or to the cluster's load balancer IP address in on-prem environments. This ensures each inference workload receives a unique subdomain under the wildcard domain.
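
As a quick sanity check (hostname and expected address are placeholders), any subdomain under the wildcard record should resolve to the same IP:

```bash
# Any hostname under the wildcard should resolve to the cluster's public or load-balancer IP
nslookup my-workload.runai-inference.mycorp.local
# Expected answer: the IP address configured for *.runai-inference.mycorp.local
```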

### TLS Certificate

* **Kubernetes** - You must have a TLS certificate that is associated with the FQDN for HTTPS access. Create a [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) named `runai-cluster-domain-tls-secret` in the `runai` namespace, providing the TLS certificate (`--cert`) and its corresponding private key (`--key`), by running the following:

  ```bash
  # Replace /path/to/fullchain.pem and /path/to/private.pem with the actual paths to your TLS certificate and private key
  kubectl create secret tls runai-cluster-domain-tls-secret -n runai \
    --cert /path/to/fullchain.pem \
    --key /path/to/private.pem
  ```
* **OpenShift** - NVIDIA Run:ai uses the OpenShift default Ingress router for serving. The TLS certificate configured for this router must be issued by a trusted CA. For more details, see the OpenShift documentation on [configuring certificates](https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/configuring-certificates#replacing-default-ingress).
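
Optionally, before creating the Kubernetes secret above, you can confirm that the certificate actually covers the cluster FQDN (the file path is a placeholder):

```bash
# Print the certificate subject and Subject Alternative Names; the cluster FQDN should appear among them
openssl x509 -in /path/to/fullchain.pem -noout -subject -ext subjectAltName
```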

#### Wildcard TLS Certificate - Inference

* **Kubernetes** - For serving inference endpoints over HTTPS, NVIDIA Run:ai requires a dedicated wildcard TLS certificate that matches the fully qualified domain name (FQDN) used for inference. This certificate ensures secure external access to inference workloads:

  ```bash
  # Replace /path/to/fullchain.pem and /path/to/private.pem with the actual paths to your wildcard TLS certificate and private key
  kubectl create secret tls runai-cluster-inference-tls-secret -n knative-serving \
      --cert /path/to/fullchain.pem \
      --key /path/to/private.pem
  ```
* **OpenShift** - A wildcard TLS certificate for inference is not required. OpenShift Routes handle TLS termination for inference endpoints using the platform’s built-in routing and certificate management.

### Local Certificate Authority

A local certificate authority serves as the root certificate for organizations that cannot use a **publicly trusted certificate authority**. Follow the steps below to configure the local certificate authority.

In air-gapped environments, you **must** configure and install the local CA's public key in the Kubernetes cluster. This is required for the installation to succeed:

1. Add the public key to the required namespace:

{% tabs %}
{% tab title="Kubernetes" %}

```bash
kubectl -n runai create secret generic runai-ca-cert \
    --from-file=runai-ca.pem=<ca_bundle_path>
kubectl label secret runai-ca-cert -n runai run.ai/cluster-wide=true run.ai/name=runai-ca-cert --overwrite
```

{% endtab %}

{% tab title="OpenShift" %}

```bash
oc -n runai create secret generic runai-ca-cert \
    --from-file=runai-ca.pem=<ca_bundle_path>
oc -n openshift-monitoring create secret generic runai-ca-cert \
    --from-file=runai-ca.pem=<ca_bundle_path>
oc label secret runai-ca-cert -n runai run.ai/cluster-wide=true run.ai/name=runai-ca-cert --overwrite
```

{% endtab %}
{% endtabs %}
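
Optionally, verify that the secret exists and carries the expected labels (use `oc` instead of `kubectl` on OpenShift):

```bash
# The secret should exist in the runai namespace and show the run.ai/cluster-wide and run.ai/name labels added above
kubectl get secret runai-ca-cert -n runai --show-labels
```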

2. When installing the cluster, make sure the following flag is added to the helm command: `--set global.customCA.enabled=true`. See [Install cluster](https://run-ai-docs.nvidia.com/self-hosted/2.22/getting-started/installation/install-using-helm/helm-install).

{% hint style="info" %}
**Note**

When using a custom CA, sidecar containers used for S3 or Git integrations do not automatically inherit the CA configured at the cluster level. See [Git and S3 sidecar containers](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/advanced-setup/git-and-s3-sidecar-containers) for more details.
{% endhint %}

### NVIDIA GPU Operator

NVIDIA Run:ai cluster requires NVIDIA GPU Operator to be installed on the Kubernetes cluster. GPU Operator versions 24.9 to 25.3 are supported.

{% hint style="info" %}
**Note**

For [Multi-Node NVLink](https://run-ai-docs.nvidia.com/self-hosted/2.22/platform-management/aiinitiatives/resources/using-gb200) support (e.g. GB200), GPU Operator 25.3 and above is required.
{% endhint %}

For air-gapped installation, follow the instructions in [Install NVIDIA GPU Operator in Air-Gapped Environments](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/install-gpu-operator-air-gapped.html).

See [Installing the NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html), and note the following:

* Use the default `gpu-operator` namespace. Otherwise, you must specify the target namespace using the flag `runai-operator.config.nvidiaDcgmExporter.namespace`, as described in customized cluster installation.
* NVIDIA drivers may already be installed on the nodes. In such cases, use the NVIDIA GPU Operator flag `--set driver.enabled=false`. [DGX OS](https://docs.nvidia.com/dgx/dgx-os-6-user-guide/release_notes.html) is one such example, as it comes bundled with NVIDIA drivers.
* For distribution-specific additional instructions, see below:

<details>

<summary>OpenShift Container Platform (OCP)</summary>

The Node Feature Discovery (NFD) Operator is a prerequisite for the NVIDIA GPU Operator in OpenShift. Install the NFD Operator using the Red Hat OperatorHub catalog in the OpenShift Container Platform web console. For more information, see [Installing the Node Feature Discovery (NFD) Operator](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/install-nfd.html).

</details>

<details>

<summary>Elastic Kubernetes Service (EKS)</summary>

* When setting up the cluster, do **not** install the NVIDIA device plugin (the NVIDIA GPU Operator installs it instead).
* When using the [eksctl](https://eksctl.io/) tool to create a cluster, use the flag `--install-nvidia-plugin=false` to disable the installation.

For GPU nodes, EKS uses an AMI which already contains the NVIDIA drivers. As such, you must use the GPU Operator flag `--set driver.enabled=false`.
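
For example, a GPU nodegroup can be created with the plugin installation disabled (a sketch; the cluster name, region, nodegroup name, and instance type are placeholders):

```bash
# Hypothetical example; replace the cluster name, region, nodegroup name, and instance type with your own
eksctl create nodegroup \
    --cluster my-cluster \
    --region us-east-1 \
    --name gpu-nodes \
    --node-type g5.xlarge \
    --install-nvidia-plugin=false
```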

</details>

<details>

<summary>Google Kubernetes Engine (GKE)</summary>

Before installing the GPU Operator:

1. Create the `gpu-operator` namespace by running:

```bash
kubectl create ns gpu-operator
```

2. Create the following file:

```yaml
# resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gcp-critical-pods
  namespace: gpu-operator
spec:
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values:
          - system-node-critical
          - system-cluster-critical
```

3. Run:

```bash
kubectl apply -f resourcequota.yaml
```

</details>

<details>

<summary>Rancher Kubernetes Engine 2 (RKE2)</summary>

Before installing the GPU Operator, verify the [host OS requirements](https://docs.rke2.io/add-ons/gpu_operators?GPUoperator=v25.3.x#host-os-requirements) are met. Then, install the [operator](https://docs.rke2.io/add-ons/gpu_operators#operator-installation).

When installing GPU Operator v25.3, update the Helm values file as follows:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: gpu-operator
  namespace: kube-system
spec:
  repo: https://helm.ngc.nvidia.com/nvidia
  chart: gpu-operator
  version: v25.3.4
  targetNamespace: gpu-operator
  createNamespace: true
  valuesContent: |-
    toolkit:
      env:
      - name: CONTAINERD_SOCKET
        value: /run/k3s/containerd/containerd.sock
```

</details>

<details>

<summary>Oracle Kubernetes Engine (OKE)</summary>

* During cluster setup, [create a nodepool](https://docs.oracle.com/en-us/iaas/tools/python/latest/api/container_engine/models/oci.container_engine.models.NodePool.html#oci.container_engine.models.NodePool.initial_node_labels), and set `initial_node_labels` to include `oci.oraclecloud.com/disable-gpu-device-plugin=true`, which disables the NVIDIA GPU device plugin.
* For GPU nodes, OKE defaults to Oracle Linux, which is incompatible with NVIDIA drivers. To resolve this, use a custom Ubuntu image instead.

</details>
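
For reference, a minimal Helm-based installation of the GPU Operator might look like the following sketch (release name and namespace follow the defaults above; keep `driver.enabled=false` only when NVIDIA drivers are pre-installed on the nodes, for example on DGX OS or EKS GPU AMIs):

```bash
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
    --namespace gpu-operator --create-namespace \
    --version v25.3.4 \ # use a supported GPU Operator version (24.9 to 25.3)
    --set driver.enabled=false # omit this flag if NVIDIA drivers are not pre-installed on the nodes
```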

For troubleshooting information, see the [NVIDIA GPU Operator Troubleshooting Guide](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/troubleshooting.html).

### NVIDIA Network Operator

When deploying on clusters with RDMA or Multi-Node NVLink-capable nodes (e.g. B200, GB200), the NVIDIA Network Operator is required to enable high-performance networking features such as GPUDirect RDMA in Kubernetes. Network Operator versions v24.4 and above are supported.

The Network Operator works alongside the NVIDIA GPU Operator to provide:

* NVIDIA networking drivers for advanced network capabilities.
* Kubernetes device plugins to expose high‑speed network hardware to workloads.
* Secondary network components to support network‑intensive applications.

The Network Operator must be installed and configured as follows:

1. Install the network operator as detailed in [Network Operator Deployment on Vanilla Kubernetes Cluster](https://docs.nvidia.com/networking/display/kubernetes2440/getting-started-kubernetes.html#network-operator-deployment-on-vanilla-kubernetes-cluster).
2. Configure SR-IOV InfiniBand support as detailed in [Network Operator Deployment with an SR-IOV InfiniBand Network](https://docs.nvidia.com/networking/display/kubernetes2440/getting-started-kubernetes.html#network-operator-deployment-with-an-sr-iov-infiniband-network).
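
For reference, step 1 typically reduces to a Helm installation along these lines (a sketch; the chart version and namespace are assumptions, so follow the linked deployment guide for the authoritative procedure):

```bash
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install network-operator nvidia/network-operator \
    --namespace nvidia-network-operator --create-namespace \
    --version v24.4.0 \ # use a supported Network Operator version (v24.4 or above)
    --wait
```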

For air-gapped installation, follow the instructions in [Network Operator Deployment in an Air-gapped Environment](https://docs.nvidia.com/networking/display/kubernetes2540/advanced/proxy-airgapped.html#network-operator-deployment-in-an-air-gapped-environment).

### NVIDIA Dynamic Resource Allocation (DRA) Driver

When deploying on clusters with Multi-Node NVLink (e.g. GB200), the NVIDIA DRA driver is essential to enable Dynamic Resource Allocation at the Kubernetes level. To install, follow the instructions in [Configure and Helm-install the driver](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/dra-intro-install.html#configure-and-helm-install-the-driver).

After installation, update `runaiconfig` using the `GPUNetworkAccelerationEnabled=True` flag to enable GPU network acceleration. This triggers an update of the NVIDIA Run:ai workload-controller deployment and restarts the controller. See [Advanced cluster configurations](https://run-ai-docs.nvidia.com/self-hosted/2.22/infrastructure-setup/advanced-setup/cluster-config) for more details.

{% hint style="info" %}
**Note**

For air-gapped installation, contact [NVIDIA Run:ai support](https://www.nvidia.com/en-eu/support/enterprise/#contact-us).
{% endhint %}

### Prometheus

{% hint style="info" %}
**Note**

Installing Prometheus applies to Kubernetes only.
{% endhint %}

NVIDIA Run:ai cluster requires Prometheus to be installed on the Kubernetes cluster.

* OpenShift comes pre-installed with Prometheus
* For RKE2 see [Enable Monitoring](https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring) instructions to install Prometheus

There are many ways to install Prometheus. For a simple example, to install the community [Kube-Prometheus Stack](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack) using [helm](https://helm.sh/), run the following commands:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
    -n monitoring --create-namespace --set grafana.enabled=false
```

## Additional Software Requirements

Additional NVIDIA Run:ai capabilities, such as Distributed Training and Inference, require additional Kubernetes applications (frameworks) to be installed on the cluster.

### Distributed Training

Distributed training enables training of AI models over multiple nodes. This requires installing a distributed training framework on the cluster. The following frameworks are supported:

* [TensorFlow](https://www.tensorflow.org/)
* [PyTorch](https://pytorch.org/)
* [XGBoost](https://xgboost.readthedocs.io/)
* [MPI v2](https://docs.open-mpi.org/)
* [JAX](https://docs.jax.dev/en/latest/index.html)

There are several ways to install each framework. A simple installation method is the [Kubeflow Training Operator](https://www.kubeflow.org/docs/components/training/installation/), which includes TensorFlow, PyTorch, XGBoost, and JAX.

It is recommended to use **Kubeflow Training Operator v1.9.2**, and **MPI Operator v0.6.0 or later** for compatibility with advanced workload capabilities, such as [Stopping a workload](https://run-ai-docs.nvidia.com/self-hosted/2.22/workloads-in-nvidia-run-ai/workloads) and [Scheduling rules](https://run-ai-docs.nvidia.com/self-hosted/2.22/platform-management/policies/scheduling-rules).

* To install the Kubeflow Training Operator for TensorFlow, PyTorch, XGBoost and JAX frameworks, run the following command:

```bash
kubectl apply --server-side -k "github.com/kubeflow/training-operator.git/manifests/overlays/standalone?ref=v1.9.2"
```

* To install the MPI Operator for MPI v2, run the following command:

```bash
kubectl apply --server-side -f https://raw.githubusercontent.com/kubeflow/mpi-operator/v0.6.0/deploy/v2beta1/mpi-operator.yaml
```

{% hint style="info" %}
**Note**

If you require both the MPI Operator and Kubeflow Training Operator, follow the steps below:

* Install the Kubeflow Training Operator as described above.
* Disable and delete MPI v1 in the Kubeflow Training Operator by running:

```bash
kubectl patch deployment training-operator -n kubeflow --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args", "value": ["--enable-scheme=tfjob", "--enable-scheme=pytorchjob", "--enable-scheme=xgboostjob", "--enable-scheme=jaxjob"]}]'
kubectl delete crd mpijobs.kubeflow.org
```

* Install the MPI Operator as described above.
{% endhint %}

### Inference

Inference enables serving of AI models. This requires the [Knative Serving](https://knative.dev/docs/serving/) framework to be installed on the cluster. Knative versions 1.11 to 1.18 are supported.

{% tabs %}
{% tab title="Kubernetes" %}
Follow the [Installing Knative](https://knative.dev/v1.18-docs/install/operator/knative-with-operators/) instructions or run:

```bash
helm repo add knative-operator https://knative.github.io/operator
helm install knative-operator --create-namespace --namespace knativeoperator --version 1.18.2 knative-operator/knative-operator
```

Once installed, follow the steps below:

1. Create the `knative-serving` namespace:

   ```bash
   kubectl create ns knative-serving
   ```
2. Create a YAML file named `knative-serving.yaml` and replace the placeholder FQDN with your wildcard inference domain (for example, `runai-inference.mycorp.local`):

   ```yaml
   apiVersion: operator.knative.dev/v1beta1
   kind: KnativeServing
   metadata:
     name: knative-serving
     namespace: knative-serving
   spec:
     config:
       config-autoscaler:
         enable-scale-to-zero: "true"
       config-features:
         kubernetes.podspec-affinity: enabled
         kubernetes.podspec-init-containers: enabled
         kubernetes.podspec-persistent-volume-claim: enabled
         kubernetes.podspec-persistent-volume-write: enabled
         kubernetes.podspec-schedulername: enabled
         kubernetes.podspec-securitycontext: enabled
         kubernetes.podspec-tolerations: enabled
         kubernetes.podspec-volumes-emptydir: enabled
         kubernetes.podspec-fieldref: enabled
         kubernetes.containerspec-addcapabilities: enabled
         kubernetes.podspec-nodeselector: enabled
         multi-container: enabled
       domain:
         runai-inference.mycorp.local: "" # replace with the wildcard FQDN for Inference
       network:
         domainTemplate: '{{.Name}}-{{.Namespace}}.{{.Domain}}'
         ingress-class: kourier.ingress.networking.knative.dev
         default-external-scheme: https
     high-availability:
       replicas: 2
     ingress:
       kourier:
         enabled: true
   ```
3. Apply the changes:

   ```bash
   kubectl apply -f knative-serving.yaml
   ```
4. Configure NGINX to proxy requests to Kourier / Knative and handle TLS termination using the wildcard certificate. Create a YAML file named `knative-ingress.yaml` and replace the FQDN placeholders with your wildcard inference domain:

   ```yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: knative-serving
     namespace: knative-serving
   spec:
     ingressClassName: nginx
     rules:
     - host: '*.runai-inference.mycorp.local' # replace with the wildcard FQDN for Inference
       http:
         paths:
         - backend:
             service:
               name: kourier
               port:
                 number: 80
           path: /
           pathType: Prefix
     tls:
     - hosts:
       - '*.runai-inference.mycorp.local' # replace with the wildcard FQDN for Inference
       secretName: runai-cluster-inference-tls-secret
   ```
5. Apply the changes:

   ```bash
   kubectl apply -f knative-ingress.yaml
   ```

{% endtab %}

{% tab title="OpenShift" %}
Follow the [Installing the OpenShift Serverless Operator](https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.37/html/installing_openshift_serverless/install-serverless-operator) instructions. Once installed, follow the steps below:

1. Create the `knative-serving` project:

   ```bash
   oc new-project knative-serving
   ```
2. Create a YAML file named `knative-serving.yaml`:

   ```yaml
   apiVersion: operator.knative.dev/v1beta1
   kind: KnativeServing
   metadata:
     finalizers:
       - knative-serving-openshift
       - knativeservings.operator.knative.dev
     name: knative-serving
     namespace: knative-serving
   spec:
     config:
       config-features:
         kubernetes.podspec-tolerations: enabled
         kubernetes.podspec-volumes-emptydir: enabled
         kubernetes.podspec-persistent-volume-claim: enabled
         multi-container: enabled
         kubernetes.podspec-persistent-volume-write: enabled
         kubernetes.podspec-fieldref: enabled
         kubernetes.podspec-schedulername: enabled
         kubernetes.podspec-nodeselector: enabled
         kubernetes.podspec-init-containers: enabled
         kubernetes.podspec-securitycontext: enabled
         kubernetes.podspec-affinity: enabled
         kubernetes.containerspec-addcapabilities: enabled
     controller-custom-certs:
       name: ''
       type: ''
     registry: {}
   ```
3. Apply the changes:

   ```bash
   oc apply -f knative-serving.yaml
   ```

{% endtab %}
{% endtabs %}

### Autoscaling

NVIDIA Run:ai allows autoscaling a deployment according to the following metrics:

* Latency (milliseconds)
* Throughput (requests/sec)
* Concurrency (requests)

Using a custom metric (for example, latency) requires installing the [Kubernetes Horizontal Pod Autoscaler (HPA)](https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/#install-optional-serving-extensions). Use the following command to install it, replacing {VERSION} with a [supported Knative version](#inference):

```bash
kubectl apply -f https://github.com/knative/serving/releases/download/knative-{VERSION}/serving-hpa.yaml
```
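
For example, with Knative 1.18 (verify the exact release tag on the Knative Serving releases page):

```bash
# Assumes the knative-v1.18.0 release tag; adjust to the Knative version installed in your cluster
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.18.0/serving-hpa.yaml
```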

### Distributed Inference

NVIDIA Run:ai supports distributed inference (multi-node) deployments using the Leader Worker Set (LWS). To enable this capability, you must install version 0.7.0 or higher of the [LWS Helm chart](https://lws.sigs.k8s.io/docs/installation/#install-by-helm) on your cluster:

```bash
CHART_VERSION=0.7.0
helm install lws oci://registry.k8s.io/lws/charts/lws \
  --version=$CHART_VERSION \
  --namespace lws-system \
  --create-namespace \
  --wait --timeout 300s
```
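
After installation, you can verify that the LWS controller is running and that its CRD is registered (the CRD name below assumes the default `leaderworkerset.x-k8s.io` API group):

```bash
# The controller pod should be Running in the namespace created above
kubectl get pods -n lws-system
# The LeaderWorkerSet CRD should be present
kubectl get crd leaderworkersets.leaderworkerset.x-k8s.io
```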
