Upgrade
Note
Starting with v2.24, NVIDIA Run:ai artifacts are available on NVIDIA NGC. NGC is the recommended artifact source for all customers. JFrog remains supported in v2.24 but will be removed in a future release. See the Software files section below for guidance on which path to use.
Before Upgrade
Before proceeding with the upgrade, apply the prerequisites for your current NVIDIA Run:ai version and for every intermediate version up to the version you are upgrading to. If your current version is 2.17 or later, you can upgrade directly to the required version.
To ensure a smooth and supported upgrade process:
Align control plane and cluster versions - For best results, upgrade the control plane and cluster components to the same NVIDIA Run:ai version during the same maintenance window. Keeping versions aligned helps avoid unexpected behavior caused by version mismatches and ensures full compatibility across platform components.
Upgrade order - When performing an upgrade:
Upgrade the control plane Helm chart first
Upgrade the cluster Helm chart only after the control plane upgrade completes successfully
Helm
NVIDIA Run:ai requires Helm 3.14 or later (verify with `helm version --short`). Before you continue, validate your installed Helm client version. To install or upgrade Helm, see Installing Helm. If you are installing an air-gapped version of NVIDIA Run:ai, the NVIDIA Run:ai tar file contains the helm binary.
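As a sketch, the minimum-version check can be scripted. The helper name `check_helm_version` is illustrative and not part of NVIDIA Run:ai tooling:

```shell
# Returns success if the given Helm version string is 3.14 or later.
# Illustrative helper; not part of the NVIDIA Run:ai tooling.
check_helm_version() {
  required="3.14"
  v="${1#v}"        # strip the leading "v" (e.g. v3.14.2 -> 3.14.2)
  v="${v%%+*}"      # strip build metadata (e.g. 3.14.2+gc309b6f -> 3.14.2)
  # sort -V orders versions; the required version must sort first (or equal).
  [ "$(printf '%s\n%s\n' "$required" "$v" | sort -V | head -n1)" = "$required" ]
}

# Validate the installed client:
# check_helm_version "$(helm version --short)" && echo "Helm OK"
```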
Software Files
Starting with v2.24, NVIDIA Run:ai artifacts are available on both NVIDIA NGC and JFrog. Deployments that were originally installed using JFrog can also be upgraded using NGC. As JFrog support will be deprecated in a future release, upgrading via NGC is the recommended approach, provided that you have an NGC API key.
Use the tab that matches your environment:
NGC (Recommended) - To upgrade using NGC, complete the Preparations section first and make sure you have an NGC API key.
JFrog - Existing customers may choose to continue upgrading from JFrog or to switch to upgrading from NGC.
Before upgrading, complete the steps in the Preparations section to set up your image pull secret.
Run the following commands to add the NVIDIA Run:ai Helm repository and browse the available versions:
Run the following commands to add the NVIDIA Run:ai Helm repository and browse the available versions:
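The commands typically look like the following sketch. The repository URL is a placeholder; use the one given in your NVIDIA Run:ai installation instructions, and note that the `--username`/`--password` credentials apply to the NGC path only:

```shell
# Add the NVIDIA Run:ai Helm repository and browse chart versions.
# <HELM_REPO_URL> is a placeholder for the repository URL from your
# installation instructions. The NGC credentials below apply only when
# upgrading via NGC; JFrog uses the credentials provided by NVIDIA Run:ai.
helm repo add runai <HELM_REPO_URL> \
  --username '$oauthtoken' \
  --password <NGC_API_KEY>
helm repo update

# List all available chart versions:
helm search repo runai --versions
```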
To download and extract a specific version, and to upload the container images to your private registry using NGC, see the Preparations section.
Run the following command to browse all available air-gapped packages using the token provided by NVIDIA Run:ai.
To download and extract a specific version, and to upload the container images to your private registry, see the Preparations section.
Upgrade the Control Plane
System and Network Requirements
Before upgrading the NVIDIA Run:ai control plane, validate that the latest system requirements and network requirements are met:
Upgrade
If your current version is 2.17 or higher, you can upgrade directly to the required version:
The following command applies whether you downloaded the package via NGC or JFrog:
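A minimal sketch of the upgrade command, reusing the values from the existing release, is shown below. The release, repository, and chart names (`runai-backend`, `runai-backend/control-plane`) are assumptions based on a typical installation; match them to your original deployment:

```shell
# Preserve the values of the existing control plane release.
helm get values runai-backend -n runai-backend > runai-backend-values.yaml

# Upgrade to the target version, reusing the saved values.
helm upgrade runai-backend runai-backend/control-plane \
  -n runai-backend \
  -f runai-backend-values.yaml \
  --version <VERSION>
```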
Upgrade the Cluster
System and Network Requirements
Before upgrading the NVIDIA Run:ai cluster, validate that the latest system requirements and network requirements are met:
Note
It is highly recommended to upgrade the Kubernetes version together with the NVIDIA Run:ai cluster version, to ensure compatibility with the latest supported version of your Kubernetes distribution.
Getting Installation Instructions
Follow the setup steps below to obtain the installation instructions for upgrading the NVIDIA Run:ai cluster.
Note
If your control plane was upgraded using NGC, the cluster must also be upgraded using NGC. See the NGC tab in the Installation instructions section below.
Setup
In the NVIDIA Run:ai UI, go to Resources -> Clusters
Select the cluster you want to upgrade
Click INSTALLATION INSTRUCTIONS
Optional: Select the NVIDIA Run:ai cluster version (latest, by default)
Click CONTINUE
Installation Instructions
Modify the UI-generated command as follows:
Add `--username='$oauthtoken'` and `--password=<NGC_API_KEY>` to the `helm repo add` command, and replace `<NGC_API_KEY>` with your NGC API key.
If you are using a local certificate authority, add `--set global.customCA.enabled=true` to the Helm command as described in the Local certificate authority section.
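Applied to the UI-generated command, the result has roughly the following shape. `<UI_GENERATED_FLAGS>` stands in for the `--set` values produced by the UI, which vary per cluster:

```shell
# UI-generated command with the NGC credentials added to helm repo add.
# <UI_GENERATED_FLAGS> is a placeholder for the --set values from the UI.
helm repo add runai <NGC_HELM_REPO_URL> \
  --username '$oauthtoken' \
  --password <NGC_API_KEY>
helm repo update

helm upgrade -i runai-cluster runai/runai-cluster -n runai \
  <UI_GENERATED_FLAGS> \
  --set global.customCA.enabled=true   # only if using a local certificate authority
```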
Click DONE
Once installation is complete, validate the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). Once you have done this, the cluster is upgraded to the latest version.
Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster. If you are using a local certificate authority, add `--set global.customCA.enabled=true` to the Helm command as described in the Local certificate authority section. See the Troubleshooting section below if installation fails.
Click DONE
Once installation is complete, validate the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). Once you have done this, the cluster is upgraded to the latest version.
The following instructions apply whether you downloaded the package via NGC or JFrog.
The NVIDIA Run:ai platform displays the Helm upgrade command in the cluster wizard. Do not run the command exactly as shown in the UI.
Update the UI-generated Helm command as follows:
Do not add the Helm repository; skip `helm repo add` and `helm repo update`.
Replace `runai/runai-cluster` with `runai-cluster-<VERSION>.tgz`.
Add `--set global.image.registry=<DOCKER_REGISTRY_ADDRESS>`, where `<DOCKER_REGISTRY_ADDRESS>` is the Docker registry address configured in the Preparations section.
Add `--set clusterConfig.prometheus.spec.baseImage=<DOCKER_REGISTRY_ADDRESS>/<FULL_IMAGE_PATH>`.
Add `--set global.customCA.enabled=true` as described in the Local certificate authority section.
Keep the remaining `--set` values exactly as generated by the UI.
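Putting these modifications together, the air-gapped command has roughly the following shape. `<UI_GENERATED_FLAGS>` is a placeholder for the `--set` values produced by the UI:

```shell
# Air-gapped variant of the UI-generated command: no helm repo add or
# helm repo update; install directly from the downloaded chart archive.
# <UI_GENERATED_FLAGS> stands for the --set values generated by the UI.
helm upgrade -i runai-cluster runai-cluster-<VERSION>.tgz -n runai \
  <UI_GENERATED_FLAGS> \
  --set global.image.registry=<DOCKER_REGISTRY_ADDRESS> \
  --set clusterConfig.prometheus.spec.baseImage=<DOCKER_REGISTRY_ADDRESS>/<FULL_IMAGE_PATH> \
  --set global.customCA.enabled=true   # only if using a local certificate authority
```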
Click DONE
Once installation is complete, validate the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). Once you have done this, the cluster is upgraded to the latest version.
Migrate from NGINX to HAProxy Ingress
Note
This section applies to Kubernetes only. OpenShift includes a pre-installed ingress controller by default and does not require this migration.
Starting with v2.24, NVIDIA Run:ai recommends using HAProxy as the ingress controller. This change aligns with the announced retirement of the upstream NGINX Ingress Controller project. For more details, see the NGINX Ingress Controller retirement announcement.
Clusters upgraded from earlier versions typically already have NGINX installed. After upgrading to v2.24, follow the steps below to migrate ingress traffic from NGINX to HAProxy.
Check the Service Type of the Existing Ingress Controller
Before installing the HAProxy ingress controller, identify which ingress controller is currently in use. If your cluster already has an ingress controller installed, verify how it is exposed to avoid port or IP address conflicts.
If the existing ingress controller uses NodePort, note the HTTP/HTTPS NodePort values to ensure HAProxy is configured with non-overlapping ports.
If the existing ingress controller uses LoadBalancer, no additional action is required.
When running more than one ingress controller in the same cluster, port conflicts are relevant only for NodePort-based setups. LoadBalancer-based controllers automatically receive separate external IP addresses.
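The service type can be checked as follows. The `ingress-nginx` namespace is typical for NGINX installations but is an assumption; adjust it to your environment:

```shell
# Inspect how the existing ingress controller service is exposed.
# The namespace ingress-nginx is a common default; adjust as needed.
kubectl get svc -n ingress-nginx

# For a NodePort service, the PORT(S) column shows the mapped ports, e.g.
#   80:30080/TCP,443:30443/TCP  -> NodePorts 30080 (HTTP) and 30443 (HTTPS)
kubectl get svc -n ingress-nginx -o jsonpath='{.items[*].spec.type}'
```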
Note
If your setup differs from the examples above, adjust the configuration accordingly. When using an external LoadBalancer in front of an Ingress with service type NodePort, you may need to update external resources to route traffic to HAProxy's configured NodePort values.
Install and Configure HAProxy Ingress Controller
Ingress controllers can be installed and configured in different ways depending on your Kubernetes distribution and how you expose services (for example, NodePort vs. LoadBalancer).
The sections below provide environment-specific Helm installation examples. Select the option that matches your deployment environment.
Note
OpenShift and RKE2 include a pre-installed ingress controller by default.
Vanilla Kubernetes
If your cluster already has an ingress controller installed (for example, NGINX) and it is exposed via NodePort, configure HAProxy to use different NodePort values so both controllers can run simultaneously.
Ensure the selected NodePort values do not overlap with ports already used by the existing ingress controller.
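A sketch of such an installation using the haproxytech Helm chart is shown below. The NodePort values (30081/30444) and the value keys are assumptions; verify them against the chart's values.yaml and your existing controller's ports:

```shell
# Install HAProxy ingress via the haproxytech chart, exposed on NodePorts
# chosen to avoid the ports used by the existing NGINX controller.
# Value keys and port numbers are assumptions; verify against the chart.
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  -n haproxy-ingress --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30081 \
  --set controller.service.nodePorts.https=30444
```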
Managed Kubernetes (EKS, GKE, AKS)
When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
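In that case the installation reduces to requesting a LoadBalancer service. The chart and value key below are assumptions based on the haproxytech Helm chart:

```shell
# On managed Kubernetes, expose HAProxy through a cloud LoadBalancer;
# the provider assigns a dedicated external IP automatically.
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  -n haproxy-ingress --create-namespace \
  --set controller.service.type=LoadBalancer
```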
Oracle Kubernetes Engine (OKE)
When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
Verify HAProxy Ingress
After installing the HAProxy ingress controller, verify that HAProxy ingresses are reachable before switching NVIDIA Run:ai components to use it. You can do this by deploying a simple hello-world application.
To run the test, identify the IP address that should reach the cluster’s nodes in your environment.
Create a local `haproxy-test.yml` file:
Run the following command:
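An illustrative `haproxy-test.yml` and the command to apply it are sketched below. The echo image, port, and ingress class name are assumptions; adjust them to your environment:

```shell
# Write a minimal echo deployment, service, and ingress, then apply it.
# hashicorp/http-echo and ingressClassName: haproxy are assumptions.
cat > haproxy-test.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-test
spec:
  replicas: 1
  selector:
    matchLabels: { app: haproxy-test }
  template:
    metadata:
      labels: { app: haproxy-test }
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args: ["-text=hello from haproxy-ingress"]
          ports: [{ containerPort: 5678 }]
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
spec:
  selector: { app: haproxy-test }
  ports: [{ port: 80, targetPort: 5678 }]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-test
spec:
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: { name: haproxy-test, port: { number: 80 } }
EOF
kubectl apply -f haproxy-test.yml
```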
Once the application is deployed, access the cluster’s IP address in a browser. If the page displays “hello from haproxy-ingress”, HAProxy is functioning correctly and you can proceed with upgrading NVIDIA Run:ai.
Upgrade the Control Plane with HAProxy
Run the following Helm command to update the NVIDIA Run:ai control plane to use HAProxy instead of NGINX.
Upgrade the Cluster with HAProxy
Setup
In the NVIDIA Run:ai UI, go to Resources -> Clusters
Select the cluster you want to upgrade
Click INSTALLATION INSTRUCTIONS
Click CONTINUE
Installation Instructions
Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster.
If not present, add the following flag to the helm install command:
Click DONE
Once installation is complete, validate the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). Once you have done this, the cluster is upgraded and the workloads in this cluster will now use HAProxy instead of NGINX.
Troubleshooting
If you encounter an issue with the cluster upgrade, use the troubleshooting scenarios below.
Installation Fails
If the NVIDIA Run:ai cluster upgrade fails, check the installation logs to identify the issue.
Run the following script to print the installation logs:
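The exact log-collection script ships with NVIDIA Run:ai. As a generic fallback, and with the release name and namespace below taken as assumptions from a typical installation, the release status and pod logs can be inspected directly:

```shell
# Generic fallback for inspecting a failed cluster upgrade.
# Release name runai-cluster and namespace runai are assumptions.
helm status runai-cluster -n runai
kubectl get pods -n runai

# Print logs of a failing pod (replace <POD_NAME> with an actual pod):
kubectl logs -n runai <POD_NAME>
```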
Cluster Status
If the NVIDIA Run:ai cluster upgrade completes, but the cluster status does not show as Connected, refer to Troubleshooting scenarios.