Deployment

Preparations

Before installing NVIDIA Run:ai, make sure you have reviewed the Preparations section and completed all tasks indicated in the Pre-installation checklist.

BCM Version

The instructions in this document are specific to BCM 11, with a minimum required version of 11.31.0.

Deploy Using the Wizard

  1. Access the active BCM head node via ssh:

    ssh root@<IP address of BCM head node>
  2. Verify the BCM version:

    cm-package-release-info -f cm-setup,cmdaemon
    
    Name      Version    Release(s)
    --------  ---------  ------------
    cm-setup  123542     11.31.0 (123360)
    cmdaemon  164056     11.31.0 (163803)
  3. Create the following files in the /cm/shared/runai/ directory, populating each from the linked content. Similarly, populate the validation test files appropriate to the DGX platform:

  4. Verify that all files from the Preparations section and the step above have been created and are present:

    root@bcm11-headnode:~# ls -1 /cm/shared/runai/*
    
    # Example for GB300
    
    credential.jwt
    netop-values-gb300.yaml 
    nic-cluster-policy-gb300.yaml 
    combined-ippools-gb300.yaml
    combined-sriovibnet-gb300.yaml
    dra-test-gb200-gb300.yaml
    ib-test-gb200-gb300.yaml
    sriov-node-pool-config.yaml
    full-chain.pem
    private.key
    ca.crt # only required when using a local certificate authority
  5. Run the following command to initiate deployment via an interactive command-line assistant:

    cm-kubernetes-setup
  6. Select Deploy Kubernetes installation wizard and click Ok to proceed. If cm-kubernetes-setup is being run from GB200 or GB300, refer to the second screenshot:

  7. Select the relevant Kubernetes version. This guide, based on Base Command Manager 11.31.0, requires Kubernetes 1.34. Click Ok to proceed:

  8. The next step asks whether a Docker Hub registry mirror is available. Using a local registry mirror is recommended when one is available. For the purposes of this guide, leave the default value (blank) and click Ok to proceed:

  9. Insert values for the new Kubernetes cluster that NVIDIA Run:ai will be installed into. Click Ok to proceed:

    • The Kubernetes cluster name should be a short, unique name that can be used to distinguish between multiple clusters (e.g. k8s-user).

    • The k8s-user.local value for Kubernetes domain name is the default value for internal (within the Kubernetes cluster) name resolution and service discovery. It should be unique to distinguish it from the NMC cluster on DGX GB200 & GB300 SuperPODs. Common practice is to use different domains for the internal Kubernetes domain name and the externally referenceable FQDN to avoid potential name resolution inconsistencies.

    • The Kubernetes external FQDN field refers to the domain name that the Kubernetes API Server will be proxied at and will be automatically populated by BCM. If a valid name record (FQDN) for the BCM head node has already been established, it should be entered here. Please see the reference architecture section of the BCM Containerization Manual for details on how this is implemented via an NGINX proxy.

    • The Service network base address, Service network netmask bits, Pod network base address, & Pod network netmask bits fields provide CIDR ranges for Kubernetes service and pod networks. These will be pre-populated (taking care to avoid overlapping ranges from networks known to BCM) from private, non-routable ranges.

  10. The next step asks about exposing the Kubernetes API server to the external network. Select no and click Ok to proceed:

  11. The preferred internal network is used for Kubernetes intercommunication between control plane and worker nodes. Select internalnet for the preferred internal network and click Ok to proceed:

  12. Select three or more Kubernetes master nodes. These should be the same nodes assigned to the control plane category. The screenshot below is for illustration only - the correct category should be k8s-system-user. See the BCM node categories section for more information. Click Ok to proceed:

Note

To ensure high availability and prevent a single point of failure, it is recommended to configure at least three Kubernetes master nodes in your cluster. The nodes selected at this stage will serve the control plane and should be CPU-only nodes. In contemporary Kubernetes versions, “master nodes” are referred to as control plane nodes.

  13. Select the worker node categories to operate as the Kubernetes worker nodes. The screenshot below is for illustration only - the correct categories are dgx-gb300-k8s, dgx-b300-k8s, or similar (depending on the DGX platform) together with k8s-system-user. See the BCM node categories section for more information. Click Ok to proceed:

Note

Both the control plane nodes and the DGX nodes must be selected. Selecting the control plane nodes here allows select NVIDIA Run:ai services to run on the control plane nodes. If the cluster configuration has dedicated NVIDIA Run:ai system nodes as described in the optional Node Category section, select that category here instead.

  14. Skip the selection of individual Kubernetes worker nodes (the category selected in the previous step will be used instead). The screenshot below is for illustration - the correct category at this step should be k8s-system-user. See the BCM node categories section for more information. Click Ok to proceed:

Note

In steps 13 and 14 above, you must select one of the following:

  • A “node category” only (as described in this guide as k8s-system-user)

  • “Individual Kubernetes nodes” only (not generally recommended)

  • A combination of both

  15. Select the nodes on which to deploy etcd. Make sure to select the same three nodes as the Kubernetes control plane nodes (Step 12). Click Ok to proceed:

  16. Leave the API server proxy port and etcd spool directory at their prepopulated values (do not modify them). Click Ok to proceed:

Note

If there are multiple Kubernetes clusters being managed by BCM (such as in the case of DGX GB200 and GB300 SuperPODs), the default proxy port value will automatically be incremented to avoid an overlap with existing clusters and may not match the screenshot.

  17. Select Calico as the Kubernetes network plugin. Click Ok to proceed:

  18. Select no when asked to install the Kyverno Policy Engine and click Ok to proceed:

  19. The components selected in this screen represent those required by NVIDIA Run:ai for a self-hosted installation. Select the operator and NVIDIA Run:ai self-hosted options as depicted below. Click Ok to proceed:

    • NVIDIA GPU Operator

    • Grafana Operator

    • Ingress NGINX Controller

    • Knative Operator

    • Kubeflow Training Operator

    • Kubernetes Metrics Server

    • Kubernetes MPI Operator

    • Kubernetes State Metrics

    • LeaderWorkerSet Operator

    • MetalLB

    • Network Operator

    • NIM Operator (optional)

    • Prometheus Adapter

    • Prometheus Operator Stack

    • Run:ai (self-hosted)

  20. Provide the NVIDIA Run:ai configuration values below and click Ok to proceed:

    • Run:ai Registry Credentials - Enter the path to a file containing the Base64-encoded NVIDIA token. Alternatively, the Base64-encoded value can be pasted in directly.

    • Run:ai Control Plane Domain Name (FQDN) - Enter the NVIDIA Run:ai control plane’s fully qualified domain name (e.g., runai.example.com). This value should be different from the FQDN entered on the first “Insert basic values” Kubernetes setup screen in Step 9. It should match the name used when creating the certificates (and should not be the same as the BCM head node hostname).

    • Local CA Cert Path (.crt or .pem) - Path to the root CA certificate file if you are using a local CA–issued certificate (common in testing or internal environments). It’s optional if using a certificate from a public CA.

    • Domain Cert Path (.crt/.pem) - Path to the full-chain certificate for your domain (the domain’s leaf certificate followed by any intermediate certificates).

    • Domain Cert Key Path (.key) - Path to the private key that matches the domain certificate.

Note

It’s recommended to save all certificates, configuration files, and deployment artifacts into a persistent and accessible location in case of redeployment. The /cm/shared/runai/ directory referred to in this guide resides on a shared mount point and would be a suitable location. See the TLS certificates section for additional clarification.

  21. Select yes to install NVIDIA Run:ai components. Click Ok to proceed:

Note

In this version of the BCM installation assistant, a warning dialog indicating an ssh issue will follow - disregard it and click Ok to proceed. Other warnings at this stage may point to a problem with the supplied certificates.

  22. Select the k8s-system-user node category for the NVIDIA Run:ai control plane nodes and click Ok to proceed:

  23. Select the required NVIDIA GPU Operator version (v25.10.0). Click Ok to proceed:

  24. Select the required Network Operator version (v25.7.0). Click Ok to proceed:

  25. Select the required NVIDIA Run:ai version (v2.23.x). Click Ok to proceed:

  26. When prompted to supply a Custom YAML config for the GPU Operator, leave the default (blank) and click Ok to proceed:

  27. Configure the NVIDIA GPU Operator by selecting the following configuration parameters. Click Ok to proceed:

  28. Supply the path to the netop-values.yaml file that was created in Step 3. Click Ok to proceed:

  29. Select Do not use pre-defined at the GPU Operator configuration step. Click Ok to proceed:

  30. Click Ok on the MetalLB IP address pools page and it will automatically set up the requirements for NVIDIA Run:ai:

  31. Specify the ingress IP addresses prepared as documented in the Pre-installation checklist section. The mention of MetalLB here indicates that these will be set up as part of a load-balanced pool and assigned to each respective ingress. Click Ok to proceed:

  32. Select no when asked whether to expose the Kubernetes Ingress on the default HTTPS port. Click Ok to proceed:

  33. Leave the node ports for the Ingress NGINX Controller at their pre-populated values (do not modify them) and click Ok to proceed:

  34. Select the serving option in the Knative Operator components dialog. Click Ok to proceed:

  35. If deploying onto an A100-only or H100-only cluster, select yes. If deploying onto any other cluster configuration, select no. Click Ok to proceed:

Note

If applicable, Network Operator policies for DGX B200, DGX GB200 or later systems will be applied in a post-deployment step described below.

  36. If yes was selected in the previous step, select the appropriate option for the cluster and click Ok to proceed. In certain cases, this dialog may appear even if no was selected at the preceding step:

  37. Select yes to install the Permission Manager. Click Ok to proceed:

Note

The BCM Permission Manager coordinates security policy, system accounts, and RBAC, and configures Kubernetes to use BCM LDAP for user accounts. BCM user accounts, however, are not automatically represented within NVIDIA Run:ai. For assistance with configuring NVIDIA Run:ai, see Set Up SSO with OpenID Connect. For more information on the BCM Permission Manager, see the Containerization Manual documentation.

  38. Select Local path as the Kubernetes StorageClass. Ensure that both enabled and default are specified. Click Ok to proceed:

Note

The “local path” label in the installation assistant may imply that local storage is employed, but those paths point to NFS mount points. These were mounted as part of standard BCM node provisioning (e.g. /cm/shared and /home).

  39. Configure the CSI Provider (local-path-provisioner) to employ shared storage (/cm/shared/apps/kubernetes/k8s-user/var/volumes as a default). Click Ok to proceed:

  40. Select yes to enable local persistent storage for Grafana. Click Ok to proceed:

  41. Select Save config, set an accessible location for the config file (for example: /cm/shared/runai/cm-kubernetes-setup.conf) alongside the rest of the config files, and then click Ok:

  42. After saving the config, select Exit and Ok to complete the wizard and return to the terminal:

The deployment process may require an extended period (60+ minutes). To prevent interruptions, failures, or network outages from disrupting the deployment, it’s recommended to perform the deployment from a persistent terminal session such as tmux or screen.

Note

During the deployment process all nodes that are members of the new Kubernetes cluster will be rebooted.

Connect to NVIDIA Run:ai User Interface

Upon completion of cm-kubernetes-setup, access NVIDIA Run:ai at the ingress IP or hostname specified earlier (e.g. runai.example.com). The default NVIDIA Run:ai credentials required for login are:

You will be prompted to change the password.

Note

For security reasons, it is critical that upon first login a new admin user is created with a strong password, and that the initial default credentials are changed or the test user is deleted.

On first access, administrators are presented with an optional onboarding wizard that helps with initial setup tasks. The onboarding wizard can guide you through:

  • Configuring single sign-on (SSO)

  • Inviting the first research team

You can choose to complete or skip the onboarding wizard and perform these actions later.

Post-wizard Deployment Steps

After the BCM installation assistant completes, additional steps are required.

If multiple Kubernetes clusters are configured in this instance of BCM, load the correct Kubernetes module before running all post-wizard commands:
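
A minimal sketch, assuming the cluster name k8s-user used in this guide (run module avail to confirm the exact module name on your head node):

    module load kubernetes/k8s-user
    kubectl get nodes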

NVIDIA Dynamic Resource Allocation (DRA) Driver

The NVIDIA DRA Driver for GPUs extends how NVIDIA GPUs are consumed within Kubernetes. This is required to enable secure Internode Memory Exchange (IMEX) on Multi-Node NVLink (MNNVL) systems (e.g. GB200, GB300) for Kubernetes workloads and should be included with all NVIDIA GPU systems.

  1. Install using Helm:
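
    The exact chart version and values should be taken from the NVIDIA DRA driver documentation for your release; the following is a minimal sketch that installs only the ComputeDomain (IMEX) resources:

    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
    helm install nvidia-dra-driver-gpu nvidia/nvidia-dra-driver-gpu \
      --namespace nvidia-dra-driver-gpu --create-namespace \
      --set resources.gpus.enabled=false \
      --set nvidiaDriverRoot=/run/nvidia/driver   # use "/" if the driver is preinstalled on the host rather than managed by the GPU Operator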

  2. GB200 and GB300 only - Create a dra-test-gb200-gb300.yaml file in /cm/shared/runai from the Validation tests and update the clique ID to match a clique ID from the cluster. The following test addresses single rack NVL72 clusters. For multi-rack systems, you’ll need to adjust podAffinity (e.g. topologyKey: nvidia.com/gpu.clique):
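
    Once populated, apply the test (a sketch, assuming the file path used in this guide):

    kubectl apply -f /cm/shared/runai/dra-test-gb200-gb300.yaml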

  3. GB200 and GB300 only - Validate the test successfully completed and inspect the logs of the launcher:
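
    A sketch of the checks (the pod names depend on the test manifest; adjust the name accordingly):

    kubectl get pods -o wide
    kubectl logs <dra-test-launcher-pod>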

  4. GB200 and GB300 only - Cleanup test:
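
    For example:

    kubectl delete -f /cm/shared/runai/dra-test-gb200-gb300.yaml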

The default NVIDIA Run:ai configuration does not expose DRA features. After installing the DRA components, this can be enabled by modifying the runaiconfig in the cluster. See Advanced cluster configurations for more details:
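
As a sketch only - the exact field must be taken from the Advanced cluster configurations documentation, and <dra-feature-flag> below is a placeholder rather than the real key (the runaiconfig object is assumed to be named runai in the runai namespace):

    kubectl -n runai patch runaiconfig runai --type merge \
      -p '{"spec": {"<dra-feature-flag>": true}}'   # <dra-feature-flag> is a placeholder key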

Instructions for validating the change and reverting if necessary:
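
For example, confirm the field is set and, if necessary, revert it by patching the same placeholder key back to false:

    kubectl -n runai get runaiconfig runai -o yaml           # confirm the change is present
    kubectl -n runai patch runaiconfig runai --type merge \
      -p '{"spec": {"<dra-feature-flag>": false}}'           # revert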

Configure the Network Operator

In version 11.31.0 of the BCM installation assistant, the Network Operator requires additional configuration on DGX B200 / GB200 & B300 / GB300 SuperPOD / BasePOD systems. While the operator is installed in a preceding step, it does not automatically initialize or configure SR-IOV and secondary network plugins.

The following CRD resources must be created in the exact order listed below:

  • SR-IOV Network Policies for each NVIDIA InfiniBand NIC

  • An nvIPAM IP address pool

  • SR-IOV InfiniBand networks

  1. Create SR-IOV network node policies using the nic-cluster-policy.yaml that was created in an earlier step:
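
    For example, assuming the GB300 file created in the Preparations section (substitute the file matching your DGX platform):

    kubectl apply -f /cm/shared/runai/nic-cluster-policy-gb300.yaml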

  2. Create an IPAM IP Pool using the combined-ippools.yaml that was created in an earlier step:
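
    For example, again assuming the GB300 file name:

    kubectl apply -f /cm/shared/runai/combined-ippools-gb300.yaml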

  3. Create the SR-IOV IB networks using the combined-sriovibnet.yaml that was created in an earlier step:
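
    For example:

    kubectl apply -f /cm/shared/runai/combined-sriovibnet-gb300.yaml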

  4. Create the SR-IOV Node Pool configuration using the sriov-node-pool-config.yaml appropriate for the DGX platform:
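
    For example:

    kubectl apply -f /cm/shared/runai/sriov-node-pool-config.yaml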

Note

This will typically reconfigure NICs and may result in a node reboot. The supplied YAML sets the maxUnavailable field to 20%. This value should be adjusted to align with your operational requirements. A value of 1 would have the effect of serializing the upgrade and would result in blocking upon a single node failure. It may be appropriate for a small lab deployment to set it to 100%. This would prevent any single machine failure from blocking the remaining nodes from upgrading. For larger clusters, setting the value to a lower percentage means that the upgrade process will be effectively split into batches.

  5. Validate by describing one of the DGX nodes and checking for SR-IOV devices:
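
    For example (a spot check; substitute one of your DGX node names):

    kubectl describe node <dgx-node-name> | grep -i "nvidia.com/"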

Note

It might take several minutes for these settings to take effect. If the sriovnetworkconfig daemon changes the NIC config, then a node reboot will occur.

  6. Validate by running the DGX SuperPOD platform-specific tests - Validation tests:

    1. For GB200 & GB300 - ib-test-gb200-gb300.yaml:

    2. For B200 & B300 - ib-test-b200-b300.yaml:
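
      For example, on a GB300 system (a sketch; apply the file matching your platform and confirm the test pods complete):

      kubectl apply -f /cm/shared/runai/ib-test-gb200-gb300.yaml
      kubectl get pods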

Note

The Network Operator will restart the DGX nodes if the number of Virtual Functions in the SR-IOV Network Policy file does not match the NVIDIA/Mellanox firmware configuration.

(Optional) Apply Security Policies

By default, the BCM Kubernetes deployment uses permissive security policies for ease of use in development environments. For production clusters or secure environments, it’s recommended to take additional steps to harden the cluster. This includes steps such as configuring the Permission Manager, applying Kyverno policies, and applying Calico policies.

For deployments of NVIDIA Run:ai as a part of NVIDIA Mission Control, please reach out to your NVIDIA representative for the latest example configurations and suggested policies. The Mission Control software installation guide’s Kubernetes Security Hardening documentation provides guidance for application and links for obtaining the latest policy manifests.

(Optional) Create Node Pools

See Node pools to create and manage groups of nodes (either by predefined node label or administrator-defined node labels). This optional configuration step can be used for advanced deployment scenarios to allocate different resources across teams or projects.

(Optional) Add Additional Users

See Users for steps on adding additional users beyond the initially created account or configuring SSO authentication.

(Optional) Install the NVIDIA Run:ai Command Line

To obtain the command line binary, see the Install and configure CLI section.

Test the Command Line Tool Installation

Validate the installation by running the following command:
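
A minimal check (assuming the CLI binary is named runai and is on your PATH):

    runai version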

Note

If NVIDIA Run:ai had previously been installed via BCM, it may be necessary to update the command line version.

Set the Control Plane URL

The following step is required for Windows users only. Linux and Mac clients are configured via the installation script.

Run the following command (substituting the NVIDIA Run:ai control plane FQDN value specified in previous steps) to create the config.json file in the default path:

Alternatively, the Base Command Manager installation assistant can generate this config with the following steps:

Validate NVIDIA Run:ai

To validate the installation, please refer to the quick start guides for deploying single-GPU training jobs, multi-node training jobs, single-GPU inference jobs, and multi-GPU inference jobs. Certain NGC workloads may require adding NGC API keys and docker credentials into the cluster.
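
For NGC (nvcr.io) images, one common approach is an image pull secret such as the following sketch (the secret name and namespace are examples; use your own NGC API key):

    kubectl create secret docker-registry ngc-secret \
      --docker-server=nvcr.io \
      --docker-username='$oauthtoken' \
      --docker-password=<NGC API key> \
      --namespace=<project namespace>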

  1. Validate that the ingress IP for NVIDIA Run:ai inference is configured. EXTERNAL-IP should show the value configured in the prior MetalLB steps:
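
    For example (a generic check; the exact namespace and service name depend on the installed ingress components):

    kubectl get svc --all-namespaces | grep LoadBalancer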

  2. To validate distributed training workloads, see Run your first distributed training workload.

  3. To validate distributed inference workloads, see Run your first custom inference workload.

Troubleshooting Common Issues

Slow installation

Provide a registry mirror when requested in the wizard. If one isn’t available, authenticated access to Docker Hub can avoid potential rate limiting for at least some of the artifact pulls:
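
One possible approach is a Kubernetes image pull secret, shown as a sketch below; this only covers pulls made through Kubernetes, and the wizard or containerd may require a different mechanism:

    kubectl create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<Docker Hub user> \
      --docker-password=<Docker Hub access token>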

Delayed responsiveness from the cmsh command

If encountering slow response when running the cmsh command, try using the cmsh-lazy-load command (substituting it for cmsh wherever referenced in the above deployment steps).

Failed installation

If the installation fails (which should be evident immediately), ensure that the DGX node kernel parameters are not inadvertently forcing cgroup v1 instead of cgroup v2:
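
A quick check on a DGX node (cgroup2fs indicates cgroup v2; also confirm that no overrides such as systemd.unified_cgroup_hierarchy=0 are set via BCM kernel parameters):

    cat /proc/cmdline | grep -i cgroup
    stat -fc %T /sys/fs/cgroup/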

Shared Storage (NFS) configuration

If encountering issues indicating problems consistently accessing Persistent Volumes (PVs), ensure that NFSv3 is used for /cm/shared for both of the node categories used in this guide. For example (please substitute the category name as appropriate for the DGX type):
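
A sketch using cmsh, assuming the category name dgx-gb300-k8s (repeat for k8s-system-user), to confirm that the /cm/shared mount options include nfsvers=3:

    cmsh
    % category
    % use dgx-gb300-k8s
    % fsmounts
    % list
    % use /cm/shared
    % show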

MetalLB Load Balancer manual installation

Since the CPU nodes are shared by the combined control plane elements in this architecture, BCM configures MetalLB and adjusts node labels so that it can run on those nodes. The following would be required as a manual step when deploying MetalLB in this manner:
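
A sketch of the kind of label adjustment involved: control plane nodes normally carry the node.kubernetes.io/exclude-from-external-load-balancers label, which prevents MetalLB from announcing services from them, so it would need to be removed from the shared CPU nodes (node names are examples):

    kubectl label node cpu-node-01 node.kubernetes.io/exclude-from-external-load-balancers-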

Note

The above is not required when using the BCM installation assistant. It’s included here to assist with alternative deployment approaches on DGX SuperPOD / BasePOD.

NVIDIA Run:ai exact version selection

The BCM installation assistant will pull the latest NVIDIA Run:ai patch release available for the minor version selected. The following can be used to determine which version will be installed:
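
A sketch, assuming the NVIDIA Run:ai Helm repositories have already been configured by the BCM installation assistant (repo and chart names may differ in your environment):

    helm repo update
    helm search repo runai --versions | head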
