Node roles

This article explains how to designate specific node roles in a Kubernetes cluster to ensure optimal performance and reliability in production deployments.

For optimal performance in production clusters, avoid heavy CPU usage on GPU nodes wherever possible. This can be done by ensuring the following:

  • NVIDIA Run:ai system-level services run on dedicated CPU-only nodes.

  • Workloads that do not request GPU resources (for example, CPU-only machine learning jobs) are executed on CPU-only nodes.

NVIDIA Run:ai services are scheduled onto the designated node roles by applying Kubernetes node affinity based on node labels.
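
For example, you can review the labels currently applied to your nodes before assigning roles (a read-only check, assuming kubectl access to the cluster):

    # List all nodes together with their current labels
    kubectl get nodes --show-labels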

Prerequisites

To perform the tasks below that use the NVIDIA Run:ai Administrator CLI, make sure it is installed.

Configure Node Roles

The following node roles can be configured on the cluster:

  • System node: Reserved for NVIDIA Run:ai system-level services.

  • GPU Worker node: Dedicated for GPU-based workloads.

  • CPU Worker node: Used for CPU-only workloads.

System nodes

NVIDIA Run:ai system nodes run the system-level services required for NVIDIA Run:ai to operate. Assigning the system role can be done via kubectl (recommended) or via the NVIDIA Run:ai Administrator CLI.

By default, NVIDIA Run:ai applies a node affinity rule that prefers nodes labeled with node-role.kubernetes.io/runai-system when scheduling system services. You can modify the default node affinity rule using one of the following methods:

Note

To ensure high availability and prevent a single point of failure, it is recommended to configure at least three system nodes in your cluster.
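
To see which nodes are currently labeled as system nodes, you can filter by the label described above (read-only check):

    # List the nodes labeled as NVIDIA Run:ai system nodes
    kubectl get nodes -l node-role.kubernetes.io/runai-system=true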

Kubectl

To set a system role for a node in your Kubernetes cluster using Kubectl, follow these steps:

  1. Use the kubectl get nodes command to list all the nodes in your cluster and identify the name of the node you want to modify.

  2. Run one of the following commands to label the node with its role:

    kubectl label nodes <node-name> node-role.kubernetes.io/runai-system=true
    kubectl label nodes <node-name> node-role.kubernetes.io/runai-system=false
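
Setting the label to false keeps the label key on the node. If you prefer to remove the label key entirely, kubectl also supports removing a label with a trailing dash (standard kubectl behavior, not specific to NVIDIA Run:ai):

    # Remove the runai-system label key from the node altogether
    kubectl label nodes <node-name> node-role.kubernetes.io/runai-system-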

NVIDIA Run:ai Administrator CLI

Note

The NVIDIA Run:ai Administrator CLI only supports the default node affinity.

To set a system role for a node in your Kubernetes cluster, follow these steps:

  1. Run the kubectl get nodes command to list all the nodes in your cluster and identify the name of the node you want to modify.

  2. Run one of the following commands to set or remove a node’s role:

    runai-adm set node-role --runai-system-worker <node-name>
    runai-adm remove node-role --runai-system-worker <node-name>

The set node-role command labels the node and sets the relevant cluster configurations.
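
You can verify that the CLI applied the expected label with a standard kubectl query, for example:

    # Confirm the node now carries the runai-system label
    kubectl get node <node-name> --show-labels | grep runai-system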

Worker nodes

NVIDIA Run:ai worker nodes run user-submitted workloads and the system-level DaemonSets required for NVIDIA Run:ai to operate. Assigning worker roles can be managed via kubectl (recommended) or via the NVIDIA Run:ai Administrator CLI.

By default, GPU workloads are scheduled on GPU nodes based on the nvidia.com/gpu.present label. When global.nodeAffinity.restrictScheduling is set to true via the Advanced cluster configurations (see the sketch after the list below):

  • GPU workloads are scheduled with a node affinity rule that requires nodes labeled with node-role.kubernetes.io/runai-gpu-worker

  • CPU-only workloads are scheduled with a node affinity rule that requires nodes labeled with node-role.kubernetes.io/runai-cpu-worker
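
As a sketch, restricted scheduling can be enabled through the advanced cluster configurations. The example below assumes the configuration is managed through the runaiconfig custom resource in the runai namespace; verify the resource name, namespace, and field path against your deployment before applying it:

    # Sketch: enable restricted scheduling in the cluster configuration.
    # The resource name (runaiconfig/runai), namespace (runai) and field path
    # (spec.global.nodeAffinity.restrictScheduling) are assumptions -- confirm them first.
    kubectl patch runaiconfig runai -n runai --type merge \
      -p '{"spec": {"global": {"nodeAffinity": {"restrictScheduling": true}}}}'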

Kubectl

To set a worker role for a node in your Kubernetes cluster using Kubectl, follow these steps:

  1. Validate that global.nodeAffinity.restrictScheduling is set to true in the cluster’s Configurations.

  2. Use the kubectl get nodes command to list all the nodes in your cluster and identify the name of the node you want to modify.

  3. Run one of the following commands to label the node with its role:

    kubectl label nodes <node-name> [node-role.kubernetes.io/runai-gpu-worker=true | node-role.kubernetes.io/runai-cpu-worker=true]
    kubectl label nodes <node-name> [node-role.kubernetes.io/runai-gpu-worker=false | node-role.kubernetes.io/runai-cpu-worker=false]
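
To review which worker role each node carries, you can display both labels as columns (read-only check):

    # Show the GPU and CPU worker role labels for every node
    kubectl get nodes -L node-role.kubernetes.io/runai-gpu-worker -L node-role.kubernetes.io/runai-cpu-worker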

NVIDIA Run:ai Administrator CLI

To set a worker role for a node in your Kubernetes cluster via the NVIDIA Run:ai Administrator CLI, follow these steps:

  1. Use the kubectl get nodes command to list all the nodes in your cluster and identify the name of the node you want to modify.

  2. Run one of the following commands to set or remove a node’s role:

     runai-adm set node-role [--gpu-worker | --cpu-worker] <node-name>
     runai-adm remove node-role [--gpu-worker | --cpu-worker] <node-name>

The set node-role command labels the node and sets the cluster configuration global.nodeAffinity.restrictScheduling to true.

Note

Use the --all flag to set or remove a role for all nodes.
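
For example, to designate every node in the cluster as a CPU worker in a single command (assuming the --all flag simply replaces the explicit node name shown above; confirm with the CLI help before running):

    # Assumption: --all replaces the <node-name> argument used earlier
    runai-adm set node-role --cpu-worker --all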
