Upgrade


Note

Starting with v2.24, NVIDIA Run:ai artifacts are available on NVIDIA NGC. NGC is the recommended artifact source for all customers. JFrog remains supported in v2.24 but will be removed in a future release. See the Software files section below for guidance on which path to use.

Before Upgrade

Before proceeding with the upgrade, apply the prerequisites for your current NVIDIA Run:ai version and for every intermediate version up to the version you are upgrading to. If your current version is 2.17 or higher, you can upgrade directly to the required version.

To ensure a smooth and supported upgrade process:

  • Align control plane and cluster versions - For best results, upgrade the control plane and cluster components to the same NVIDIA Run:ai version during the same maintenance window. Keeping versions aligned helps avoid unexpected behavior caused by version mismatches and ensures full compatibility across platform components.

  • Upgrade order - When performing an upgrade:

    • Upgrade the control plane Helm chart first

    • Upgrade the cluster Helm chart only after the control plane upgrade completes successfully

Helm

NVIDIA Run:ai requires Helm 3.14 or later (verify with helm version --short). Before you continue, validate your installed Helm client version. To install or upgrade Helm, see Installing Helm. If you are installing an air-gapped version of NVIDIA Run:ai, the NVIDIA Run:ai tar file contains the Helm binary.
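For example, to confirm the installed client version:

```shell
# Print the installed Helm client version; the output should be v3.14.0 or later
helm version --short
```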

Software Files

Starting with v2.24, NVIDIA Run:ai artifacts are available on both NVIDIA NGC and JFrog. Deployments that were originally installed using JFrog can also be upgraded using NGC. As JFrog support will be removed in a future release, upgrading via NGC is the recommended approach, provided that you have an NGC API key.

Use the tab that matches your environment:

  • NGC (Recommended) - To upgrade using NGC, complete the Preparations section first and make sure you have an NGC API key.

  • JFrog - Existing customers may choose to continue upgrading from JFrog or switch to upgrading from NGC.

Before upgrading, complete the steps in the Preparations section to set up your image pull secret.

Run the following commands to add the NVIDIA Run:ai Helm repository and browse the available versions:
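For an NGC-based upgrade, the commands might look like the following sketch. The repository alias and URL placeholder are illustrative, not confirmed values; '$oauthtoken' is the literal username NGC expects when authenticating with an API key.

```shell
# Add the NVIDIA Run:ai Helm repository (URL placeholder is illustrative)
helm repo add runai <HELM_REPO_URL> \
  --username='$oauthtoken' \
  --password=<NGC_API_KEY>

helm repo update

# Browse the available chart versions
helm search repo runai --versions
```

For a JFrog-based upgrade, omit the credentials flags and use the JFrog repository URL from your existing installation.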

Upgrade the Control Plane

System and Network Requirements

Before upgrading the NVIDIA Run:ai control plane, validate that the latest system requirements and network requirements are met:

Upgrade

If your current version is 2.17 or higher, you can upgrade directly to the required version:
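A direct upgrade might be sketched as follows. The release name, chart name, and namespace below are illustrative placeholders; use the names from your existing installation.

```shell
# Upgrade the control plane Helm release to the target version,
# keeping the values from the previous installation
helm upgrade runai-backend runai/control-plane \
  --namespace runai-backend \
  --version <NEW_VERSION> \
  --reuse-values
```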

Upgrade the Cluster

System and Network Requirements

Before upgrading the NVIDIA Run:ai cluster, validate that the latest system requirements and network requirements are met:


Note

It is highly recommended to upgrade the Kubernetes version together with the NVIDIA Run:ai cluster version, to ensure compatibility with the latest supported version of your Kubernetes distribution.

Getting Installation Instructions

Follow the setup steps below to generate the installation instructions for upgrading the NVIDIA Run:ai cluster.


Note

If your control plane was upgraded using NGC, the cluster must also be upgraded using NGC. See the NGC tab in the Installation instructions section below.

Setup

  1. In the NVIDIA Run:ai UI, go to Resources -> Clusters

  2. Select the cluster you want to upgrade

  3. Click INSTALLATION INSTRUCTIONS

  4. Optional: Select the NVIDIA Run:ai cluster version (latest, by default)

  5. Click CONTINUE

Installation Instructions

  1. Modify the UI-generated command as follows:

    • Add --username='$oauthtoken' and --password=<NGC_API_KEY> to the helm repo add command, and replace <NGC_API_KEY> with your NGC API key.

    • If you are using a local certificate authority, add --set global.customCA.enabled=true to the Helm command as described in the Local certificate authority section.

  2. Click DONE

  3. Once installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). The cluster is then upgraded to the latest version.
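The modifications described in step 1 might look like the following sketch. All values here are placeholders; only the --username/--password flags and the global.customCA.enabled setting come from the instructions above.

```shell
# Add credentials to the UI-generated repo command
helm repo add runai-cluster <HELM_REPO_URL> \
  --username='$oauthtoken' \
  --password=<NGC_API_KEY>
helm repo update

# If using a local certificate authority, append the customCA flag
# to the UI-generated upgrade command:
helm upgrade --install runai-cluster runai-cluster/runai-cluster \
  --namespace runai \
  --version <VERSION> \
  --set global.customCA.enabled=true
```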

Migrate from NGINX to HAProxy Ingress


Note

This section applies to Kubernetes only. OpenShift includes a pre-installed ingress controller by default and does not require this migration.

Starting with v2.24, NVIDIA Run:ai recommends using HAProxy as the ingress controller. This change aligns with the announced retirement of the upstream NGINX Ingress Controller project. For more details, see the NGINX Ingress Controller retirement announcement.

Clusters upgraded from earlier versions typically already have NGINX installed. After upgrading to v2.24, follow the steps below to migrate ingress traffic from NGINX to HAProxy.

Check the Service Type of the Existing Ingress Controller

Before installing the HAProxy ingress controller, identify which ingress controller is currently in use. If your cluster already has an ingress controller installed, verify how it is exposed to avoid port or IP address conflicts.

  • If the existing ingress controller uses NodePort, note the HTTP/HTTPS NodePort values to ensure HAProxy is configured with non-overlapping ports.

  • If the existing ingress controller uses LoadBalancer, no additional action is required.

When running more than one ingress controller in the same cluster, port conflicts are relevant only for NodePort-based setups. LoadBalancer-based controllers automatically receive separate external IP addresses.
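One way to check, assuming the existing NGINX controller runs in the ingress-nginx namespace (adjust the namespace to your deployment):

```shell
# Show the service type and ports of the existing ingress controller
kubectl get svc -n ingress-nginx

# For a NodePort service, note the HTTP/HTTPS node ports, e.g.:
# ingress-nginx-controller  NodePort  10.0.0.10  <none>  80:30080/TCP,443:30443/TCP
```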


Note

If your setup differs from the examples above, adjust the configuration accordingly. When using an external LoadBalancer in front of an ingress with service type NodePort, you may need to update external resources to route traffic to HAProxy's configured NodePort values.

Install and Configure HAProxy Ingress Controller

Ingress controllers can be installed and configured in different ways depending on your Kubernetes distribution and how you expose services (for example, NodePort vs. LoadBalancer).

The sections below provide environment-specific Helm installation examples. Select the option that matches your deployment environment.


Note

OpenShift and RKE2 include a pre-installed ingress controller by default.

Vanilla Kubernetes

If your cluster already has an ingress controller installed (for example, NGINX) and it is exposed via NodePort, configure HAProxy to use different NodePort values so both controllers can run simultaneously.

Ensure the selected NodePort values do not overlap with ports already used by the existing ingress controller.
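A sketch of such an installation, assuming the community haproxy-ingress Helm chart; the chart repository URL, value keys, and port numbers below are illustrative and should be verified against the chart's documentation:

```shell
helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
helm repo update

# Install HAProxy with explicit NodePort values that do not collide
# with the ports used by the existing NGINX controller
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080 \
  --set controller.service.nodePorts.https=32443
```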

Managed Kubernetes (EKS, GKE, AKS)

When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.
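A sketch of such an installation, again assuming the community haproxy-ingress chart (repository URL and value keys are illustrative and should be verified; the same approach applies to OKE below):

```shell
helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
helm repo update

# LoadBalancer is typically the default on managed clouds;
# setting it explicitly makes the intent clear
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.service.type=LoadBalancer
```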

Oracle Kubernetes Engine (OKE)

When using a LoadBalancer, each ingress controller automatically receives its own external IP address from the cloud provider. This allows multiple ingress controllers to run in the same cluster without additional configuration.

Verify HAProxy Ingress

After installing the HAProxy ingress controller, verify that HAProxy ingresses are reachable before switching NVIDIA Run:ai components to use it. You can do this by deploying a simple hello-world application.

To run the test, identify the IP address that should reach the cluster’s nodes in your environment.

  1. Create a local haproxy-test.yml file:

  2. Run the following command:
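The two steps above can be sketched as follows. The manifest is a minimal illustrative example (image, names, and the haproxy ingress class are assumptions), echoing the expected response text so the browser check described below succeeds:

```shell
# Step 1: create a minimal test Deployment, Service, and Ingress
cat > haproxy-test.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-test
spec:
  replicas: 1
  selector:
    matchLabels: {app: haproxy-test}
  template:
    metadata:
      labels: {app: haproxy-test}
    spec:
      containers:
        - name: hello
          image: hashicorp/http-echo
          args: ["-text=hello from haproxy-ingress", "-listen=:8080"]
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-test
spec:
  selector: {app: haproxy-test}
  ports: [{port: 80, targetPort: 8080}]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-test
spec:
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: haproxy-test, port: {number: 80}}
EOF

# Step 2: apply the manifest to the cluster
kubectl apply -f haproxy-test.yml
```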

Once the application is deployed, access the cluster’s IP address in a browser. If the page displays “hello from haproxy-ingress”, HAProxy is functioning correctly and you can proceed with upgrading NVIDIA Run:ai.

Upgrade the Control Plane with HAProxy

Run the following Helm command to update the NVIDIA Run:ai control plane to use HAProxy instead of NGINX.
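A hypothetical sketch of that command; the value key for selecting the ingress class is a placeholder, not a confirmed flag, so use the exact setting documented for your version:

```shell
# Hypothetical: point the control plane at the HAProxy ingress class,
# keeping all other values from the previous installation
helm upgrade runai-backend runai/control-plane \
  --namespace runai-backend \
  --reuse-values \
  --set global.ingress.ingressClassName=haproxy
```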

Upgrade the Cluster with HAProxy

Setup

  1. In the NVIDIA Run:ai UI, go to Resources -> Clusters

  2. Select the cluster you want to upgrade

  3. Click INSTALLATION INSTRUCTIONS

  4. Click CONTINUE

Installation Instructions

  1. Follow the installation instructions. Run the Helm commands provided on your Kubernetes cluster.

  2. If not present, add the following flag to the helm install command:

  3. Click DONE

  4. Once installation is complete, validate that the cluster is Connected and listed with the new cluster version (see the cluster troubleshooting scenarios). The workloads in this cluster now use HAProxy instead of NGINX.
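The flag referenced in step 2 is presumably the ingress-class setting; the exact key below is a hypothetical placeholder, so confirm it against the instructions generated by the UI:

```shell
# Hypothetical: append to the UI-generated helm install/upgrade command
--set global.ingress.ingressClassName=haproxy
```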

Troubleshooting

If you encounter an issue with the cluster upgrade, use the troubleshooting scenarios below.

Installation Fails

If the NVIDIA Run:ai cluster upgrade fails, check the installation logs to identify the issue.

Run the following script to print the installation logs:
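A hypothetical sketch of such a check; the namespace and label selector are placeholders, so adjust them to match your deployment:

```shell
# List the NVIDIA Run:ai pods and print recent logs from the installer
kubectl get pods -n runai
kubectl logs -n runai -l app=runai-cluster-installer --tail=200
```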

Cluster Status

If the NVIDIA Run:ai cluster upgrade completes, but the cluster status does not show as Connected, refer to Troubleshooting scenarios.
