Upgrade
Before Upgrade
Before proceeding with the upgrade, it's crucial to apply the specific prerequisites associated with your current version of NVIDIA Run:ai and every version in between up to the version you are upgrading to.
To ensure a smooth and supported upgrade process:
Align control plane and cluster versions - For best results, upgrade the control plane and cluster components to the same NVIDIA Run:ai version during the same maintenance window. Keeping versions aligned helps avoid unexpected behavior caused by version mismatches and ensures full compatibility across platform components.
Upgrade order - When performing an upgrade:
Upgrade the control plane Helm chart first
Upgrade the cluster Helm chart only after the control plane upgrade completes successfully
Helm
NVIDIA Run:ai requires Helm 3.14 or later. Before you continue, validate the version of your installed Helm client. To install or upgrade Helm, see Installing Helm. If you are installing an air-gapped version of NVIDIA Run:ai, the NVIDIA Run:ai tar file contains the Helm binary.
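To confirm the requirement is met, you can print the client version before proceeding:

```shell
# Print the installed Helm client version.
# The reported version should be v3.14.0 or later.
helm version --short
```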
Software Files
Run the following commands to add the NVIDIA Run:ai Helm repository and browse the available versions:
helm repo add runai-backend https://runai.jfrog.io/artifactory/cp-charts-prod
helm repo update
helm search repo -l runai-backend
Run the following command to browse all available air-gapped packages using the token provided by NVIDIA Run:ai.
To download and extract a specific version, and to upload the container images to your private registry, see the Preparations section.
curl -H "Authorization: Bearer <token>" "https://runai.jfrog.io/artifactory/api/storage/runai-airgapped-prod/?list"
Upgrade Control Plane
System and Network Requirements
Before upgrading the NVIDIA Run:ai control plane, validate that the latest system requirements and network requirements are met, as they can change from time to time.
Upgrade
If your current version is 2.17 or higher, you can upgrade directly to the required version:
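A direct upgrade typically follows the pattern below. This is a sketch: the release name (runai-backend), namespace, and chart reference are assumptions and may differ in your environment; replace <VERSION> with the target version.

```shell
# Refresh the repository index so the target version is visible.
helm repo update

# Upgrade the control plane in place, reusing your existing values.
# Release name, namespace, and chart name are assumptions - adjust as needed.
helm upgrade runai-backend runai-backend/control-plane \
  -n runai-backend \
  --reuse-values \
  --version "<VERSION>"
```

Using --reuse-values preserves the configuration applied during the original installation, so only the chart version changes.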
Upgrade Cluster
System and Network Requirements
Before upgrading the NVIDIA Run:ai cluster, validate that the latest system requirements and network requirements are met, as they can change from time to time.
Note
It is highly recommended to upgrade the Kubernetes version together with the NVIDIA Run:ai cluster version, to ensure compatibility with the latest supported version of your Kubernetes distribution.
Getting Installation Instructions
Follow the setup steps below to obtain the installation instructions for upgrading the NVIDIA Run:ai cluster.
Note
To upgrade to a specific version, modify the --version flag by specifying the desired <VERSION>. You can find all available versions by using the helm search repo runai/runai-cluster --versions command.
Setup
In the NVIDIA Run:ai UI, go to Clusters
Select the cluster you want to upgrade
Click INSTALLATION INSTRUCTIONS
Optional: Select the NVIDIA Run:ai cluster version (latest, by default)
Click CONTINUE
Installation Instructions
Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster. If the installation fails, see the Troubleshooting section below.
Click DONE
Once the installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). The cluster is then upgraded to the selected version.
Migrate from NGINX to HAProxy Ingress
Starting with v2.24, NVIDIA Run:ai recommends using HAProxy as the ingress controller. This change aligns with the announced retirement of the upstream NGINX Ingress Controller project. For more details, see the NGINX Ingress Controller retirement announcement.
Clusters upgraded from earlier versions typically already have NGINX installed. After upgrading to v2.24, follow the steps below to migrate ingress traffic from NGINX to HAProxy.
Check the Service Type of the Existing Ingress Controller
Before installing the HAProxy ingress controller, identify which ingress controller is currently in use. If your cluster already has an ingress controller installed, verify how it is exposed to avoid port or IP address conflicts.
If the existing ingress controller uses NodePort, note the HTTP/HTTPS NodePort values to ensure HAProxy is configured with non-overlapping ports.
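One way to check how the existing controller is exposed is to inspect its Service. The namespace and service name below (ingress-nginx, ingress-nginx-controller) are assumptions based on a default NGINX installation; adjust them to match your setup.

```shell
# The TYPE column shows NodePort or LoadBalancer; for NodePort services,
# the PORT(S) column shows the allocated HTTP/HTTPS NodePort values
# (e.g. 80:31080/TCP) that HAProxy must not reuse.
kubectl get svc -n ingress-nginx ingress-nginx-controller
```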
If the existing ingress controller uses LoadBalancer, no additional action is required.
When running more than one ingress controller in the same cluster, port conflicts are relevant only for NodePort-based setups. LoadBalancer-based controllers automatically receive separate external IP addresses.
Note
If your setup differs from the examples above, adjust the configuration accordingly. When using an external LoadBalancer in front of an ingress controller with service type NodePort, you may need to update external resources to route traffic to HAProxy’s configured NodePort values.
Install and Configure HAProxy Ingress Controller
Ingress controllers can be installed and configured in different ways depending on your Kubernetes distribution and how you expose services (for example, NodePort vs. LoadBalancer).
The sections below provide environment-specific Helm installation examples. Select the option that matches your deployment environment.
Note
OpenShift and RKE2 include a pre-installed ingress controller by default.
Vanilla Kubernetes
If your cluster already has an ingress controller installed (for example, NGINX) and it is exposed via NodePort, configure HAProxy to use different NodePort values so both controllers can run simultaneously.
Ensure the selected NodePort values do not overlap with ports already used by the existing ingress controller.
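A minimal installation sketch, assuming the haproxytech/kubernetes-ingress Helm chart; the chart name, namespace, and the NodePort values 30080/30443 are assumptions, so pick ports that are free in your cluster:

```shell
# Add the HAProxy Technologies chart repository (assumed chart source).
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# Install HAProxy exposed via NodePort, using ports that do not collide
# with the NodePorts of the existing ingress controller.
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  -n haproxy-ingress --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443
```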
Managed Kubernetes (EKS, GKE, AKS)
When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
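For a LoadBalancer-based setup, an installation sketch (again assuming the haproxytech/kubernetes-ingress chart; names are assumptions):

```shell
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# With service type LoadBalancer, the cloud provider assigns HAProxy its own
# external IP address, so it can coexist with the existing controller.
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  -n haproxy-ingress --create-namespace \
  --set controller.service.type=LoadBalancer
```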
Oracle Kubernetes Engine (OKE)
When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
Verify HAProxy Ingress
After installing the HAProxy ingress controller, verify that HAProxy ingresses are reachable before switching NVIDIA Run:ai components to use it. You can do this by deploying a simple hello-world application.
To run the test, identify the IP address that should reach the cluster’s nodes in your environment.
Create a local haproxy-test.yml file:
Run the following command:
Once the application is deployed, access the cluster’s IP address in a browser. If the page displays “hello from haproxy-ingress”, HAProxy is functioning correctly and you can proceed with upgrading NVIDIA Run:ai.
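The test above can be sketched as follows. This is an illustrative manifest, not the authoritative one: the hashicorp/http-echo image, the resource names, and the haproxy ingressClassName are assumptions that should be adjusted to your environment.

```shell
# Write a minimal hello-world Deployment, Service, and Ingress.
cat <<'EOF' > haproxy-test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy-test
  template:
    metadata:
      labels:
        app: haproxy-test
    spec:
      containers:
        - name: hello
          # Assumed test image that echoes a fixed string over HTTP.
          image: hashicorp/http-echo
          args: ["-text=hello from haproxy-ingress"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
spec:
  selector:
    app: haproxy-test
  ports:
    - port: 80
      targetPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-test
spec:
  # Assumed ingress class name created by the HAProxy installation.
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: haproxy-test
                port:
                  number: 80
EOF

# Apply the manifest to the cluster.
kubectl apply -f haproxy-test.yml
```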
Upgrade the Control Plane
Run the following Helm command to update the NVIDIA Run:ai control plane to use HAProxy instead of NGINX.
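As an illustration only, such an upgrade reuses the existing values and overrides the ingress class. The values key below is an assumption, not the authoritative flag; verify the exact key for your control plane chart version in the installation instructions before running.

```shell
# Release name, namespace, chart name, and the values key are assumptions.
helm upgrade runai-backend runai-backend/control-plane \
  -n runai-backend \
  --reuse-values \
  --set global.ingress.ingressClassName=haproxy
```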
Upgrade the Cluster
Setup
In the NVIDIA Run:ai UI, go to Clusters
Select the cluster you want to upgrade
Click INSTALLATION INSTRUCTIONS
Click CONTINUE
Installation Instructions
Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster.
If not present, add the following flag to the helm install command:
Click DONE
Once the installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). The cluster is then upgraded, and workloads in this cluster will use HAProxy instead of NGINX.
Troubleshooting
If you encounter an issue with the cluster upgrade, use the troubleshooting scenarios below.
Installation Fails
If the NVIDIA Run:ai cluster upgrade fails, check the installation logs to identify the issue.
Run the following script to print the installation logs:
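A generic way to inspect the failure, assuming the cluster components run in the runai namespace (the namespace and selection of pods are assumptions; use the script provided in the installation instructions if available):

```shell
# List the Run:ai cluster pods and flag any that are not Running/Completed.
kubectl get pods -n runai

# Print recent logs from a failing pod identified above
# (replace <pod-name> with the actual pod name).
kubectl logs -n runai <pod-name> --tail=100

# Show events for the namespace, which often reveal image pull or
# scheduling problems behind a failed upgrade.
kubectl get events -n runai --sort-by=.lastTimestamp
```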
Cluster Status
If the NVIDIA Run:ai cluster upgrade completes, but the cluster status does not show as Connected, refer to Troubleshooting scenarios.