# Monitor Workloads by Category

A workload category represents the role or purpose of a workload, such as training, building, or deploying models. Each workload type is automatically assigned a default category to ensure consistent classification across the platform.

Categories appear in the Overview dashboard, allowing administrators to filter, group, and monitor workloads based on their function. Administrators can modify the default category mapping for a workload type using the NVIDIA Run:ai API.

## Default Category Mapping

NVIDIA Run:ai defines the following default mappings of workload types to categories. To retrieve the default category per workload type, refer to the [List workload types](https://run-ai-docs.nvidia.com/api/2.23/workloads/workload-properties#get-api-v1-workload-types) API.
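As a sketch, retrieving the mapping is a single authenticated GET against that endpoint. The control-plane hostname and token below are placeholders, not values from this page:

```python
import urllib.request

# Placeholders: substitute your control plane URL and a valid API token.
BASE_URL = "https://my-runai.example.com/api/v1"
TOKEN = "<api-token>"

def list_workload_types_request() -> urllib.request.Request:
    """Build the GET request that lists workload types with their default categories."""
    return urllib.request.Request(
        f"{BASE_URL}/workload-types",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )

# Sending it requires network access and a valid token:
# with urllib.request.urlopen(list_workload_types_request()) as resp:
#     workload_types = resp.read()
```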

{% hint style="info" %}
**Note**

* For more information on workload support, see [Introduction to workloads](/self-hosted/2.23/workloads-in-nvidia-run-ai/introduction-to-workloads.md).
* To see the default priority assigned to each of the workload types listed below, refer to [Workload priority control](/self-hosted/2.23/platform-management/runai-scheduler/scheduling/workload-priority-control.md).
{% endhint %}

<table><thead><tr><th width="194.56640625">Framework</th><th width="334.7421875">Workload Types</th><th>Default Category</th></tr></thead><tbody><tr><td><strong>NVIDIA Run:ai native workloads</strong></td><td>Workspaces, Standard training, Distributed training, Custom inference, Hugging Face inference, NVIDIA NIM inference</td><td>Workspaces = <code>Build</code><br>Training = <code>Train</code><br>Inference = <code>Deploy</code></td></tr><tr><td><strong>NVIDIA</strong></td><td>NIM services, NVIDIA Cloud Functions (NVCF)</td><td><code>Deploy</code></td></tr><tr><td><strong>Kubernetes</strong></td><td>Deployment, StatefulSet, ReplicaSet, Pod, Service, CronJob, Job, JobSet</td><td>Deployment, StatefulSet, ReplicaSet, Pod, Service, CronJob = <code>Deploy</code><br>Job = <code>Build</code><br>JobSet = <code>Train</code></td></tr><tr><td><strong>Kubeflow</strong></td><td>TFJob, PyTorchJob, MPIJob, XGBoostJob, Notebook, ScheduledWorkflow</td><td>TFJob, PyTorchJob, MPIJob, XGBoostJob = <code>Train</code><br>Notebook = <code>Build</code><br>ScheduledWorkflow = <code>Deploy</code></td></tr><tr><td><strong>Ray</strong></td><td>RayService, RayCluster, RayJob</td><td>RayCluster, RayJob = <code>Train</code><br>RayService = <code>Deploy</code></td></tr><tr><td><strong>Tekton</strong></td><td>PipelineRun, TaskRun</td><td>PipelineRun = <code>Deploy</code><br>TaskRun = <code>Build</code></td></tr><tr><td><strong>Additional Frameworks</strong></td><td>SeldonDeployment, AMLJob, Workflow, DevWorkspace, Service, VirtualMachineInstance, KServe</td><td>SeldonDeployment, Workflow, Service, KServe = <code>Deploy</code><br>AMLJob, VirtualMachineInstance = <code>Train</code><br>DevWorkspace = <code>Build</code></td></tr></tbody></table>

## Update the Default Category Mapping

Administrators can change the default category assigned to a workload type by updating the category mapping using the [NVIDIA Run:ai API](https://run-ai-docs.nvidia.com/api/2.23/). To update the category mapping:

1. Retrieve the list of workload types and their IDs using `GET /api/v1/workload-types`.
2. Identify the `workloadTypeId` of the workload type you want to modify.
3. Retrieve the list of available categories and their IDs using `GET /api/v1/workload-categories`.
4. Send a request to update the workload type with the new category using\
   `PUT /api/v1/workload-types/{workloadTypeId}` and include the `categoryId` in the request body.
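The final step could be sketched as follows. The endpoint path and the `categoryId` body field come from the steps above; the base URL and token are placeholders you would substitute for your environment:

```python
import json
import urllib.request

# Placeholders: substitute your control plane URL and a valid API token.
BASE_URL = "https://my-runai.example.com/api/v1"
TOKEN = "<api-token>"

def update_category_request(workload_type_id: str, category_id: str) -> urllib.request.Request:
    """Build the PUT request that re-maps a workload type to a new category."""
    body = json.dumps({"categoryId": category_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/workload-types/{workload_type_id}",
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

# Sending it requires network access and a valid token:
# urllib.request.urlopen(update_category_request("<workloadTypeId>", "<categoryId>"))
```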

## Using the API

Refer to the [Workload properties](https://run-ai-docs.nvidia.com/api/2.23/workloads/workload-properties#get-api-v1-workload-categories) API reference to view the available actions.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://run-ai-docs.nvidia.com/self-hosted/2.23/platform-management/monitor-performance/workload-categories.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
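A minimal sketch of building such a query URL, using the page URL shown above and URL-encoding the question (the question text itself is an example):

```python
from urllib.parse import urlencode

PAGE_URL = (
    "https://run-ai-docs.nvidia.com/self-hosted/2.23/"
    "platform-management/monitor-performance/workload-categories.md"
)

def ask_url(question: str) -> str:
    """Append the natural-language question as the URL-encoded `ask` query parameter."""
    return f"{PAGE_URL}?{urlencode({'ask': question})}"

# An HTTP GET on ask_url("...") returns a direct answer with supporting excerpts.
```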
