# Welcome to NVIDIA Run:ai Documentation

NVIDIA Run:ai accelerates AI operations with dynamic orchestration across the entire AI life cycle: it maximizes GPU efficiency, scales workloads on demand, and integrates seamlessly into hybrid AI infrastructure with no manual effort.

Find all the product information, step-by-step guides, and references you need.

<table data-view="cards" data-full-width="false"><thead><tr><th></th><th data-hidden data-card-cover data-type="files"></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><h4>SaaS Documentation</h4><p>For customers using NVIDIA Run:ai’s fully managed, cloud-hosted platform. Always kept up to date with the latest features.</p></td><td><a href="https://4038267327-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F34AHDD4gNSrFPx9T4wNn%2Fuploads%2FIgniA2RQeiKXDKY96rJf%2Fsaas.svg?alt=media&#x26;token=9c7daf63-c713-4d68-8919-100beb1b8261">saas.svg</a></td><td><a href="https://app.gitbook.com/o/8U8fWH7v8Vg8pc99umXT/s/LiY1aIqfxD3a58ufUYOM/">SaaS</a></td></tr><tr><td><h4>Self-hosted Documentation</h4><p>For on-prem and private cloud deployments. Versioned and aligned with your cluster releases.</p></td><td><a href="https://4038267327-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F34AHDD4gNSrFPx9T4wNn%2Fuploads%2FsdFzEtiRU5HSaOeyMZDK%2Fsh.svg?alt=media&#x26;token=f5ce9a7d-89de-475b-a67b-8c8ac7f8bcee">sh.svg</a></td><td><a href="https://app.gitbook.com/s/N20As4prCx0T4ulkEZIr/">NVIDIA Run:ai Self-Hosted Product Documentation</a></td></tr><tr><td><h4>Multi-tenant Documentation</h4><p>For on-prem and private cloud deployments that use a centralized control plane to serve multiple isolated organizations. Versioned and aligned with your cluster releases.</p></td><td><a href="https://4038267327-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F34AHDD4gNSrFPx9T4wNn%2Fuploads%2FSDSXwevarFRh28gTmIt3%2Forganizations_resources%20(1).svg?alt=media&#x26;token=b759c616-329e-4aa6-96ee-c785daf2020c">organizations_resources (1).svg</a></td><td><a href="https://app.gitbook.com/s/YNrUfkNYMRv6Napjtfw1/">NVIDIA Run:ai Multi-Tenant Product Documentation</a></td></tr></tbody></table>

## Features

<table data-view="cards"><thead><tr><th></th></tr></thead><tbody><tr><td><h4>AI-Native Workload Orchestration  </h4><p>Purpose-built for AI workloads, NVIDIA Run:ai delivers intelligent orchestration that maximizes compute efficiency and dynamically scales AI training and inference.</p></td></tr><tr><td><h4>Unified AI Infrastructure Management  </h4><p>NVIDIA Run:ai provides a centralized approach to managing AI infrastructure, ensuring optimal workload distribution across hybrid, multi-cloud, and on-premises environments.</p></td></tr><tr><td><h4>Flexible AI Deployment </h4><p>NVIDIA Run:ai supports AI workloads wherever they need to run, whether on prem, in the cloud, or across hybrid environments, providing seamless integration with AI ecosystems.</p></td></tr><tr><td><h4>Open Architecture</h4><p>Built with an API-first approach, NVIDIA Run:ai ensures seamless integration with all major AI frameworks, machine learning tools, and third-party solutions.</p></td></tr></tbody></table>


---

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://run-ai-docs.nvidia.com/welcome-to-nvidia-run-ai-documentation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
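As a minimal sketch, the request above can be issued from Python using only the standard library. The endpoint URL and the `ask` parameter come from this page; the only added assumption is that the question, like any query-string value, must be percent-encoded:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Endpoint documented above; the page itself is served as markdown.
DOCS_URL = "https://run-ai-docs.nvidia.com/welcome-to-nvidia-run-ai-documentation.md"

def build_ask_url(question: str) -> str:
    """Build the query URL, percent-encoding the natural-language question."""
    return f"{DOCS_URL}?{urlencode({'ask': question})}"

def ask_docs(question: str, timeout: float = 30.0) -> str:
    """Perform the GET request and return the response body as text."""
    with urlopen(build_ask_url(question), timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# Example (requires network access):
# answer = ask_docs("How does NVIDIA Run:ai fractionalize GPUs?")
```

Keeping `build_ask_url` separate from the request itself makes the encoding step easy to verify without touching the network.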
