# Integrations

## Integration Support

Support for third-party integrations varies by tool. Tools marked *Supported* below work with NVIDIA Run:ai out of the box. For tools marked *Community Support*, our Customer Success team has prior experience assisting customers with setup. In many cases, the NVIDIA Enterprise Support Portal includes additional reference documentation, provided on an as-is basis.

<table><thead><tr><th width="110.7734375">Tool</th><th width="127.54296875">Category</th><th width="118.08203125">NVIDIA Run:ai support details</th><th width="392.63671875">Additional Information</th></tr></thead><tbody><tr><td>Apache Airflow</td><td>Orchestration</td><td>Community Support</td><td>It is possible to schedule Airflow workflows with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-integrate-Run-ai-with-Apache-Airflow">How to integrate NVIDIA Run:ai with Apache Airflow</a>.</td></tr><tr><td>Argo Workflows</td><td>Orchestration</td><td>Community Support</td><td>It is possible to schedule Argo Workflows with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-integrate-Run-ai-with-Argo-Workflows">How to integrate NVIDIA Run:ai with Argo Workflows</a>.</td></tr><tr><td>ClearML</td><td>Experiment tracking</td><td>Community Support</td><td>It is possible to schedule ClearML workloads with the NVIDIA Run:ai Scheduler.</td></tr><tr><td>Docker Registry</td><td>Repositories</td><td>Supported</td><td>NVIDIA Run:ai allows using a Docker registry as a <a href="../../workloads-in-nvidia-run-ai/assets/credentials">Credential</a> asset.</td></tr><tr><td>GitHub</td><td>Storage</td><td>Supported</td><td>NVIDIA Run:ai communicates with GitHub by defining it as a <a href="../../workloads-in-nvidia-run-ai/assets/datasources">data source</a> asset.</td></tr><tr><td>Hugging Face</td><td>Repositories</td><td>Supported</td><td>NVIDIA Run:ai provides an out-of-the-box integration with <a href="../../workloads-in-nvidia-run-ai/using-inference/hugging-face-inference">Hugging Face</a>.</td></tr><tr><td>JupyterHub</td><td>Development</td><td>Community Support</td><td>It is possible to submit NVIDIA Run:ai workloads via JupyterHub.</td></tr><tr><td>Jupyter Notebook</td><td>Development</td><td>Supported</td><td>NVIDIA Run:ai provides integrated support with Jupyter Notebooks. See the <a href="../../workloads-in-nvidia-run-ai/using-workspaces/quick-starts/jupyter-quickstart">Jupyter Notebook quick start</a> example.</td></tr><tr><td><a href="https://karpenter.sh/">Karpenter</a></td><td>Cost Optimization</td><td>Supported</td><td>NVIDIA Run:ai provides out-of-the-box support for Karpenter to save cloud costs. See the <a href="integrations/karpenter">Karpenter integration notes</a>.</td></tr><tr><td><a href="https://www.kubeflow.org/docs/components/training/user-guides/mpi/">Kubeflow MPI</a></td><td>Training</td><td>Supported</td><td>NVIDIA Run:ai provides out-of-the-box support for submitting MPI workloads via API, CLI, or UI. See <a href="../../workloads-in-nvidia-run-ai/using-training/distributed-training/distributed-training-models">Distributed training</a> for more details.</td></tr><tr><td>Kubeflow Notebooks</td><td>Development</td><td>Community Support</td><td>It is possible to launch a Kubeflow notebook with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-integrate-Run-ai-with-Kubeflow">How to integrate NVIDIA Run:ai with Kubeflow</a>.</td></tr><tr><td>Kubeflow Pipelines</td><td>Orchestration</td><td>Community Support</td><td>It is possible to schedule Kubeflow Pipelines with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-integrate-Run-ai-with-Kubeflow">How to integrate NVIDIA Run:ai with Kubeflow</a>.</td></tr><tr><td>MLflow</td><td>Model Serving</td><td>Community Support</td><td>It is possible to use MLflow together with the NVIDIA Run:ai Scheduler.</td></tr><tr><td>PyCharm</td><td>Development</td><td>Supported</td><td>Containers created by NVIDIA Run:ai can be accessed via PyCharm.</td></tr><tr><td>PyTorch</td><td>Training</td><td>Supported</td><td>NVIDIA Run:ai provides out-of-the-box support for submitting PyTorch workloads via API, CLI, or UI. See <a href="../../workloads-in-nvidia-run-ai/using-training/distributed-training/distributed-training-models">Distributed training</a> for more details.</td></tr><tr><td>Ray</td><td>Training, inference, data processing</td><td>Community Support</td><td>It is possible to schedule Ray jobs with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-Integrate-Run-ai-with-Ray">How to Integrate NVIDIA Run:ai with Ray</a>.</td></tr><tr><td>Seldon Core</td><td>Orchestration</td><td>Community Support</td><td>It is possible to schedule Seldon Core workloads with the NVIDIA Run:ai Scheduler.</td></tr><tr><td>Spark</td><td>Orchestration</td><td>Community Support</td><td>It is possible to schedule Spark workflows with the NVIDIA Run:ai Scheduler.</td></tr><tr><td>S3</td><td>Storage</td><td>Supported</td><td>NVIDIA Run:ai communicates with S3 by defining a <a href="../../workloads-in-nvidia-run-ai/assets/datasources">data source</a> asset.</td></tr><tr><td>TensorBoard</td><td>Experiment tracking</td><td>Supported</td><td>NVIDIA Run:ai comes with a preset TensorBoard <a href="../../workloads-in-nvidia-run-ai/assets/environments">Environment</a> asset.</td></tr><tr><td>TensorFlow</td><td>Training</td><td>Supported</td><td>NVIDIA Run:ai provides out-of-the-box support for submitting TensorFlow workloads via API, CLI, or UI. See <a href="../../workloads-in-nvidia-run-ai/using-training/distributed-training/distributed-training-models">Distributed training</a> for more details.</td></tr><tr><td>Triton</td><td>Orchestration</td><td>Supported</td><td>Usage via a Docker base image.</td></tr><tr><td>Visual Studio Code</td><td>Development</td><td>Supported</td><td>Containers created by NVIDIA Run:ai can be accessed via Visual Studio Code. You can automatically launch Visual Studio Code (web) from the NVIDIA Run:ai console.</td></tr><tr><td>Weights &#x26; Biases</td><td>Experiment tracking</td><td>Community Support</td><td>It is possible to schedule W&#x26;B workloads with the NVIDIA Run:ai Scheduler. Sample code: <a href="https://enterprise-support.nvidia.com/s/article/How-to-integrate-with-Weights-and-Biases">How to integrate with Weights and Biases</a>.</td></tr><tr><td><a href="https://xgboost.readthedocs.io/en/stable/">XGBoost</a></td><td>Training</td><td>Supported</td><td>NVIDIA Run:ai provides out-of-the-box support for submitting XGBoost workloads via API, CLI, or UI. See <a href="../../workloads-in-nvidia-run-ai/using-training/distributed-training/distributed-training-models">Distributed training</a> for more details.</td></tr></tbody></table>
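
Many of the *Community Support* rows above share the same underlying mechanism: the third-party tool creates ordinary Kubernetes pods, and those pods are handed off to the NVIDIA Run:ai Scheduler. The sketch below shows that pattern for a plain Kubernetes *Job*. It is illustrative only: the project name `team-a`, the image, and the entry point are placeholders, and the exact project label key (`project` here; `runai/queue` in some releases) depends on your NVIDIA Run:ai version, so check your cluster's configuration.

```yaml
# Illustrative sketch: schedule a plain Kubernetes Job with the NVIDIA Run:ai Scheduler.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-example
spec:
  template:
    metadata:
      labels:
        project: team-a                # hypothetical project; label key varies by version
    spec:
      schedulerName: runai-scheduler   # hand pod scheduling to the NVIDIA Run:ai Scheduler
      restartPolicy: Never
      containers:
        - name: train
          image: nvcr.io/nvidia/pytorch:24.08-py3   # illustrative training image
          command: ["python", "train.py"]           # hypothetical entry point
          resources:
            limits:
              nvidia.com/gpu: 1                     # request one GPU from the scheduler
```

Orchestrators such as Airflow, Argo Workflows, and Kubeflow Pipelines apply these same two settings to the pod templates they generate; the sample-code articles linked above cover the tool-specific details.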

## Kubernetes Workloads Integration

Kubernetes has several built-in resources that encapsulate running *Pods*. These are called [Kubernetes Workloads](https://kubernetes.io/docs/concepts/workloads/) and **should not be confused** with [NVIDIA Run:ai workloads](https://run-ai-docs.nvidia.com/self-hosted/2.20/workloads-in-nvidia-run-ai/workload-types).

Examples of such resources include a *Deployment*, which manages a stateless application, and a *Job*, which runs tasks to completion.

An NVIDIA Run:ai workload encapsulates all the resources it needs to run and creates and deletes them as a single unit. Since NVIDIA Run:ai is an **open platform**, it also allows the scheduling of **any** Kubernetes Workload, as sketched below.
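
As a minimal sketch of that claim, assuming the same `runai-scheduler` name and the version-dependent project label noted in the Job example above, a standard *Deployment* can opt its pods in to the NVIDIA Run:ai Scheduler through its pod template; all names and the image here are illustrative.

```yaml
# Illustrative sketch: a built-in Kubernetes Deployment scheduled by NVIDIA Run:ai.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: serve-example
  template:
    metadata:
      labels:
        app: serve-example
        project: team-a                # hypothetical project; label key varies by version
    spec:
      schedulerName: runai-scheduler   # same scheduler hand-off as in the Job sketch above
      containers:
        - name: server
          image: nvcr.io/nvidia/tritonserver:24.08-py3   # illustrative serving image
          resources:
            limits:
              nvidia.com/gpu: 1
```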

For more information, see [Kubernetes Workloads Integration](https://docs.run.ai/latest/developer/other-resources/other-resources/).
