# AI Applications

This guide explains how to manage AI applications in the NVIDIA Run:ai platform.

## AI Applications Table

The AI applications table can be found under **Workload manager** in the NVIDIA Run:ai platform.

The AI applications table provides a list of all the AI applications scheduled on the NVIDIA Run:ai [Scheduler](/self-hosted/2.24/platform-management/runai-scheduler/scheduling/concepts-and-principles.md), and allows you to manage them.

<figure><img src="/files/1hbMfUPRqv16zNunSSKR" alt=""><figcaption></figcaption></figure>

The AI applications table consists of the following columns:

| Column                 | Description                                                                   |
| ---------------------- | ----------------------------------------------------------------------------- |
| AI application         | The name of the AI application                                                |
| Type                   | The name of the Helm chart                                                    |
| Status                 | The different [phases](#ai-application-status) in an AI application lifecycle |
| Project                | The project in which the AI application runs                                  |
| GPU compute request    | Number of GPU devices requested                                               |
| GPU compute allocation | Number of GPU devices allocated                                               |
| GPU memory request     | Amount of GPU memory requested                                                |
| GPU memory allocation  | Amount of GPU memory allocated                                                |
| CPU compute request    | Number of CPU cores requested                                                 |
| CPU compute allocation | Number of CPU cores allocated                                                 |
| CPU memory request     | Amount of CPU memory requested                                                |
| CPU memory allocation  | Amount of CPU memory allocated                                                |

### AI Application Status

The AI application status in NVIDIA Run:ai reflects the underlying **Helm release status**.

NVIDIA Run:ai surfaces the Helm chart state as-is and maps it to the AI application lifecycle. For a complete description of Helm release states, see the [Helm documentation](https://helm.sh/docs/helm/helm_status/).

### Customizing the Table View

* Filter - Click ADD FILTER, select the column to filter by, and enter the filter values
* Search - Click SEARCH and type the value to search by
* Sort - Click each column header to sort by
* Column selection - Click COLUMNS and select the columns to display in the table
* Download table - Click MORE and then click Download as CSV. Export to CSV is limited to 20,000 rows.
* Refresh - Click REFRESH to update the table with the latest data
* Show/Hide details - Click to view additional information on the selected row

### Show/Hide Details

Click a row in the AI applications table and then click the SHOW DETAILS button at the upper-right side of the action bar. The details pane appears, presenting a detailed breakdown of the Kubernetes resources that belong to the selected AI application. The details pane displays:

* A list of all AI application components (workloads, services, secrets, PVCs, ConfigMaps, etc.)
* Status indicators (Running, Pending, Failed, etc.) for each workload

## Managing and Monitoring

After the AI application is created, the workloads are added to the [Workloads](/self-hosted/2.24/workloads-in-nvidia-run-ai/workloads.md) table, where they can be managed and monitored.

## Using the API

Go to the [AI Applications](https://run-ai-docs.nvidia.com/api/2.24/ai-applications/ai-applications) API reference to view the available actions.
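As a minimal sketch, an API call can be prepared with Python's standard library. Note that the base URL, endpoint path, and token below are illustrative placeholders, not confirmed values — consult the AI Applications API reference above for the exact paths, authentication, and payloads.

```python
# Sketch: prepare (but do not send) a GET request that lists AI applications.
# BASE_URL, the "/ai-applications" path, and API_TOKEN are placeholders --
# check the API reference for the real endpoint and authentication scheme.
import urllib.request

BASE_URL = "https://<your-runai-domain>/api/v1"  # placeholder domain
API_TOKEN = "<application-token>"                # placeholder token

def build_list_request(base_url: str, token: str) -> urllib.request.Request:
    """Build a GET request for the assumed AI applications endpoint."""
    return urllib.request.Request(
        f"{base_url}/ai-applications",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
        method="GET",
    )

req = build_list_request(BASE_URL, API_TOKEN)
print(req.full_url)      # target URL
print(req.get_method())  # "GET"
```

Sending the request (for example with `urllib.request.urlopen`) would return the JSON response described in the API reference.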


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://run-ai-docs.nvidia.com/self-hosted/2.24/ai-applications/ai-applications.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
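The request above can be sketched in Python using only the standard library; the question text here is an arbitrary example, and the page URL is the one shown in the pattern above.

```python
# Build the documentation-query URL described above by URL-encoding the
# question into the `ask` query parameter.
from urllib.parse import urlencode

PAGE_URL = "https://run-ai-docs.nvidia.com/self-hosted/2.24/ai-applications/ai-applications.md"

def build_ask_url(page_url: str, question: str) -> str:
    """Return the page URL with the question encoded in the `ask` parameter."""
    return f"{page_url}?{urlencode({'ask': question})}"

url = build_ask_url(PAGE_URL, "How do I delete an AI application?")
print(url)
```

Performing an HTTP GET on the resulting URL returns the answer together with relevant excerpts and sources.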
