Upgrade

Before Upgrade

Before proceeding with the upgrade, apply the prerequisites for your current NVIDIA Run:ai version and for every intermediate version up to the version you are upgrading to.

To ensure a smooth and supported upgrade process:

  • Align control plane and cluster versions - For best results, upgrade the control plane and cluster components to the same NVIDIA Run:ai version during the same maintenance window. Keeping versions aligned helps avoid unexpected behavior caused by version mismatches and ensures full compatibility across platform components.

  • Upgrade order - When performing an upgrade:

    • Upgrade the control plane Helm chart first

    • Upgrade the cluster Helm chart only after the control plane upgrade completes successfully

Helm

NVIDIA Run:ai requires Helm 3.14 or later. Before you continue, validate your installed Helm client version. To install or upgrade Helm, see Installing Helm. If you are installing an air-gapped version of NVIDIA Run:ai, the NVIDIA Run:ai tar file contains the helm binary.
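For example, you can confirm the installed client version before proceeding:

```shell
# Prints the Helm client version, e.g. v3.14.x; must be 3.14 or later.
helm version --short
```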

Software Files

Run the following commands to add the NVIDIA Run:ai Helm repository and browse the available versions:

helm repo add runai-backend https://runai.jfrog.io/artifactory/cp-charts-prod
helm repo update
helm search repo -l runai-backend

Upgrade Control Plane

System and Network Requirements

Before upgrading the NVIDIA Run:ai control plane, validate that the latest system requirements and network requirements are met, as they can change from time to time.

Upgrade

If your current version is 2.17 or later, you can upgrade directly to the required version.
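The exact command is documented for each target version; as a sketch, assuming the default release name and namespace `runai-backend` and the chart repository added above, with `<VERSION>` replaced by your target version:

```shell
# Sketch only: release name, chart path, and namespace are the common
# defaults; verify them against your installation before running.
helm repo update
helm upgrade runai-backend runai-backend/control-plane \
  --namespace runai-backend \
  --version <VERSION> \
  --reuse-values
```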

Upgrade Cluster

System and Network Requirements

Before upgrading the NVIDIA Run:ai cluster, validate that the latest system requirements and network requirements are met, as they can change from time to time.

Note

It is highly recommended to upgrade the Kubernetes version together with the NVIDIA Run:ai cluster version, to ensure compatibility with the latest supported version of your Kubernetes distribution.

Getting Installation Instructions

Follow the setup and installation steps below to retrieve the commands for upgrading the NVIDIA Run:ai cluster.

Note

To upgrade to a specific version, modify the --version flag by specifying the desired <VERSION>. You can find all available versions by using the helm search repo runai/runai-cluster --versions command.

Setup

  1. In the NVIDIA Run:ai UI, go to Clusters

  2. Select the cluster you want to upgrade

  3. Click INSTALLATION INSTRUCTIONS

  4. Optional: Select the NVIDIA Run:ai cluster version (latest, by default)

  5. Click CONTINUE

Installation Instructions

  1. Follow the installation instructions and run the Helm commands provided on your Kubernetes cluster. See Troubleshooting below if installation fails.

  2. Click DONE

  3. Once installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). The cluster is then upgraded to the selected version.
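The Helm commands shown in the UI are generated per cluster; the sketch below only illustrates their general shape, and every placeholder (repository URL, release name, control plane URL, client secret, cluster UID) must be taken from the INSTALLATION INSTRUCTIONS screen, not from here:

```shell
# Illustration only -- copy the exact commands from the UI.
helm repo add runai <REPO_URL_FROM_UI>
helm repo update
helm upgrade --install runai-cluster runai/runai-cluster \
  --namespace runai --create-namespace \
  --version <VERSION> \
  --set controlPlane.url=<CONTROL_PLANE_URL> \
  --set controlPlane.clientSecret=<CLIENT_SECRET> \
  --set cluster.uid=<CLUSTER_UID>
```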

Migrate from NGINX to HAProxy Ingress

Starting with v2.24, NVIDIA Run:ai recommends using HAProxy as the ingress controller. This change aligns with the announced retirement of the upstream NGINX Ingress Controller project. For more details, see the NGINX Ingress Controller retirement announcement.

Clusters upgraded from earlier versions typically already have NGINX installed. After upgrading to v2.24, follow the steps below to migrate ingress traffic from NGINX to HAProxy.

Check the Service Type of the Existing Ingress Controller

Before installing the HAProxy ingress controller, identify which ingress controller is currently in use. If your cluster already has an ingress controller installed, verify how it is exposed to avoid port or IP address conflicts.

  • If the existing ingress controller uses NodePort, note the HTTP/HTTPS NodePort values to ensure HAProxy is configured with non-overlapping ports.

  • If the existing ingress controller uses LoadBalancer, no additional action is required.

When running more than one ingress controller in the same cluster, port conflicts are relevant only for NodePort-based setups. LoadBalancer-based controllers automatically receive separate external IP addresses.

Note

If your setup differs from the examples above, adjust the configuration accordingly. When using an external load balancer in front of an ingress controller exposed via NodePort, you may need to update external resources to route traffic to HAProxy's configured NodePort values.

Install and Configure HAProxy Ingress Controller

Ingress controllers can be installed and configured in different ways depending on your Kubernetes distribution and how you expose services (for example, NodePort vs. LoadBalancer).

The sections below provide environment-specific Helm installation examples. Select the option that matches your deployment environment.

Note

OpenShift and RKE2 include a pre-installed ingress controller by default.

Vanilla Kubernetes

If your cluster already has an ingress controller installed (for example, NGINX) and it is exposed via NodePort, configure HAProxy to use different NodePort values so both controllers can run simultaneously.

Ensure the selected NodePort values do not overlap with ports already used by the existing ingress controller.
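A minimal sketch using the haproxytech/kubernetes-ingress Helm chart, assuming NodePorts 30080/30443 are free on your nodes and that the values keys match your chart version:

```shell
# Sketch: install HAProxy ingress with non-default NodePorts so it can
# coexist with an existing NodePort-based controller such as NGINX.
# Verify chart name and values keys against your chart version.
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443
```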

Managed Kubernetes (EKS, GKE, AKS)

When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
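A minimal sketch, assuming the haproxytech/kubernetes-ingress Helm chart; the cloud provider provisions the external IP address automatically:

```shell
# Sketch: LoadBalancer-based install; no NodePort coordination needed.
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.service.type=LoadBalancer
```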

Oracle Kubernetes Engine (OKE)

When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.

Verify HAProxy Ingress

After installing the HAProxy ingress controller, verify that HAProxy ingresses are reachable before switching NVIDIA Run:ai components to use it. You can do this by deploying a simple hello-world application.

To run the test, identify the IP address that should reach the cluster’s nodes in your environment.

  1. Create a local haproxy-test.yml file:

  2. Run the following command:
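As a sketch covering both steps: the echo image, resource names, and ports below are placeholder assumptions, chosen so the response matches the expected message; the ingress class name assumes the HAProxy chart default of `haproxy`.

```shell
# Sketch only: adjust image, names, and ingressClassName to your setup.
cat > haproxy-test.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy-test
  template:
    metadata:
      labels:
        app: haproxy-test
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo   # placeholder test image
          args:
            - "-text=hello from haproxy-ingress"
            - "-listen=:8080"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
spec:
  selector:
    app: haproxy-test
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-test
spec:
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: haproxy-test
                port:
                  number: 80
EOF
kubectl apply -f haproxy-test.yml
```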

Once the application is deployed, access the cluster’s IP address in a browser. If the page displays “hello from haproxy-ingress”, HAProxy is functioning correctly and you can proceed with upgrading NVIDIA Run:ai.

Upgrade the Control Plane

Run the following Helm command to update the NVIDIA Run:ai control plane to use HAProxy instead of NGINX.
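Use the command documented for your version verbatim; the sketch below assumes the default release name and a hypothetical values key for selecting the ingress class, so the real flag name may differ:

```shell
# Sketch: the ingress-class values key below is an assumption, not the
# documented flag; check the release notes for the exact setting.
helm upgrade runai-backend runai-backend/control-plane \
  --namespace runai-backend \
  --reuse-values \
  --set global.ingress.ingressClass=haproxy
```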

Upgrade the Cluster

Setup

  1. In the NVIDIA Run:ai UI, go to Clusters

  2. Select the cluster you want to upgrade

  3. Click INSTALLATION INSTRUCTIONS

  4. Click CONTINUE

Installation Instructions

  1. Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster.

  2. If not present, add the following flag to the helm install command:

  3. Click DONE

  4. Once installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). Workloads in this cluster now use HAProxy instead of NGINX.
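The flag referenced in step 2 is not reproduced here; as a hedged sketch, the setting typically selects the HAProxy ingress class, though the exact values key is an assumption:

```shell
# Hypothetical flag name: verify against the installation instructions
# before appending it to the provided helm install command.
--set global.ingress.ingressClass=haproxy
```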

Troubleshooting

If you encounter an issue with the cluster upgrade, use the troubleshooting scenarios below.

Installation Fails

If the NVIDIA Run:ai cluster upgrade fails, check the installation logs to identify the issue.

Run the following script to print the installation logs:
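The script itself is provided in the NVIDIA Run:ai documentation; as a minimal sketch, you can inspect the installer pods directly, where the namespace and label selector are assumptions to adjust for your installation:

```shell
# Sketch: list NVIDIA Run:ai pods and tail the operator logs.
kubectl get pods -n runai
kubectl logs -n runai -l app=runai-operator --tail=200
```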

Cluster Status

If the NVIDIA Run:ai cluster upgrade completes, but the cluster status does not show as Connected, refer to Troubleshooting scenarios.
