# Node Roles

This article explains how to designate specific node roles in a Kubernetes cluster to ensure optimal performance and reliability in production deployments.

For optimal performance in production clusters, it is essential to avoid extensive CPU usage on GPU nodes where possible. This can be done by ensuring the following:

* NVIDIA Run:ai system-level services run on dedicated CPU-only nodes.
* Workloads that do not request GPU resources (for example, CPU-based machine learning jobs) are executed on CPU-only nodes.

NVIDIA Run:ai schedules its services onto the designated node roles by applying [Kubernetes Node Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) rules based on node labels.

## Prerequisites

To perform these tasks, make sure to install the NVIDIA Run:ai [Administrator CLI](https://run-ai-docs.nvidia.com/self-hosted/2.20/reference/cli/administrator-cli).

## Configure Node Roles

The following node roles can be configured on the cluster:

* **System node:** Reserved for NVIDIA Run:ai system-level services.
* **GPU Worker node:** Dedicated for GPU-based workloads.
* **CPU Worker node:** Used for CPU-only workloads.
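Before assigning roles, it can help to check which role labels, if any, are already present on your nodes. A minimal sketch, assuming you have `kubectl` access to the cluster:

```shell
# List all nodes and filter for any NVIDIA Run:ai role labels already applied
kubectl get nodes --show-labels | grep -E 'runai-(system|gpu-worker|cpu-worker)' \
  || echo "No NVIDIA Run:ai role labels found"
```
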

### System Nodes

NVIDIA Run:ai system nodes run the system-level services that NVIDIA Run:ai requires to operate. The role can be assigned via [Kubectl](https://kubernetes.io/docs/reference/kubectl/) (*preferred method*) or via the NVIDIA Run:ai [Administrator CLI](https://run-ai-docs.nvidia.com/self-hosted/2.20/reference/cli/administrator-cli).

By default, NVIDIA Run:ai applies a node affinity rule to prefer nodes that are labeled with `node-role.kubernetes.io/runai-system` for system services scheduling. You can modify the default node affinity rule by:

* Editing the `spec.global.affinity` configuration parameter as detailed in [Advanced cluster configurations](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/advanced-setup/cluster-config).
* Editing the `global.affinity` configuration as detailed in [Advanced control plane configurations](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/advanced-setup/control-plane-config) for self-hosted deployments.

{% hint style="info" %}
**Note**

* To ensure [high availability](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/procedures/high-availability) and prevent a single point of failure, it is recommended to configure at least three system nodes in your cluster.
* By default, Kubernetes master nodes are configured to prevent workloads from running on them as a best-practice measure to safeguard control plane stability. While this restriction is generally recommended, certain NVIDIA reference architectures allow adding tolerations to the NVIDIA Run:ai deployment so critical system services can run on these nodes.
{% endhint %}

#### Kubectl

To set a system role for a node in your Kubernetes cluster using Kubectl, follow these steps:

1. Use the `kubectl get nodes` command to list all the nodes in your cluster and identify the name of the node you want to modify.
2. Run one of the following commands to label the node with its role (`true` enables the role, `false` disables it):

   ```bash
   kubectl label nodes <node-name> node-role.kubernetes.io/runai-system=true
   kubectl label nodes <node-name> node-role.kubernetes.io/runai-system=false
   ```
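
After labeling, you can verify which nodes carry the system role, or remove the label entirely; a trailing `-` in `kubectl label` deletes a label. Commands shown against a hypothetical `<node-name>`:

```shell
# List nodes currently marked as NVIDIA Run:ai system nodes
kubectl get nodes -l node-role.kubernetes.io/runai-system=true

# Remove the label entirely (note the trailing dash)
kubectl label nodes <node-name> node-role.kubernetes.io/runai-system-
```
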

#### NVIDIA Run:ai Administrator CLI

{% hint style="info" %}
**Note**

The NVIDIA Run:ai Administrator CLI only supports the default node affinity.
{% endhint %}

To set a system role for a node in your Kubernetes cluster, follow these steps:

1. Run the `kubectl get nodes` command to list all the nodes in your cluster and identify the name of the node you want to modify.
2. Run one of the following commands to set or remove a node’s role:

   ```bash
   runai-adm set node-role --runai-system-worker <node-name>
   runai-adm remove node-role --runai-system-worker <node-name>
   ```

The `set node-role` command will label the node and set relevant cluster configurations.

### Worker Nodes

NVIDIA Run:ai worker nodes run user-submitted workloads and the system-level [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that NVIDIA Run:ai requires to operate. The role can be managed via [Kubectl](https://kubernetes.io/docs/reference/kubectl/) (*preferred method*) or via the NVIDIA Run:ai [Administrator CLI](https://run-ai-docs.nvidia.com/self-hosted/2.20/reference/cli/administrator-cli).

By default, GPU workloads are scheduled on GPU nodes based on the `nvidia.com/gpu.present` label. When `global.nodeAffinity.restrictScheduling` is set to `true` via the [Advanced cluster configurations](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/advanced-setup/cluster-config):

* GPU workloads are scheduled with a node affinity rule requiring nodes labeled with `node-role.kubernetes.io/runai-gpu-worker`.
* CPU-only workloads are scheduled with a node affinity rule requiring nodes labeled with `node-role.kubernetes.io/runai-cpu-worker`.
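
If your GPU nodes already carry the `nvidia.com/gpu.present` label (set by NVIDIA's GPU feature discovery), you can label them all in one pass using a label selector rather than one node at a time. A sketch, assuming every node reporting a GPU should become a GPU worker:

```shell
# Apply the GPU worker role to every node that reports a GPU
kubectl label nodes -l nvidia.com/gpu.present=true \
  node-role.kubernetes.io/runai-gpu-worker=true
```
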

#### Kubectl

To set a worker role for a node in your Kubernetes cluster using Kubectl, follow these steps:

1. Validate that `global.nodeAffinity.restrictScheduling` is set to `true` in the cluster’s [Configurations](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/advanced-setup/cluster-config).
2. Use the `kubectl get nodes` command to list all the nodes in your cluster and identify the name of the node you want to modify.
3. Run one of the following commands to label the node with its role. Replace the label and value (`true`/`false`) to enable or disable GPU/CPU roles as needed:

   ```bash
   kubectl label nodes <node-name> node-role.kubernetes.io/runai-gpu-worker=true
   kubectl label nodes <node-name> node-role.kubernetes.io/runai-cpu-worker=false
   ```
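
To confirm the labels took effect, you can list nodes by role:

```shell
# Nodes designated for GPU workloads
kubectl get nodes -l node-role.kubernetes.io/runai-gpu-worker=true

# Nodes designated for CPU-only workloads
kubectl get nodes -l node-role.kubernetes.io/runai-cpu-worker=true
```
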

#### NVIDIA Run:ai Administrator CLI

To set a worker role for a node in your Kubernetes cluster via the NVIDIA Run:ai [Administrator CLI](https://run-ai-docs.nvidia.com/self-hosted/2.20/reference/cli/administrator-cli), follow these steps:

1. Use the `kubectl get nodes` command to list all the nodes in your cluster and identify the name of the node you want to modify.
2. Run one of the following commands to set or remove a node’s role. `<node-role>` must be either `--gpu-worker` or `--cpu-worker`:

   ```bash
   runai-adm set node-role <node-role> <node-name>
   runai-adm remove node-role <node-role> <node-name>
   ```

The `set node-role` command labels the node and sets the cluster configuration `global.nodeAffinity.restrictScheduling` to `true`.
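
You can read the configuration back from the cluster to confirm the change. A sketch, assuming the `runaiconfig` resource uses the default name `runai` in the `runai` namespace (these may differ in your installation):

```shell
# Print the current restrictScheduling value from the runaiconfig resource
kubectl get runaiconfig runai -n runai \
  -o jsonpath='{.spec.global.nodeAffinity.restrictScheduling}'
```
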

{% hint style="info" %}
**Note**

Use the `--all` flag to set or remove a role for all nodes.
{% endhint %}
