Integrations
Integration Support
Support for third-party integrations varies. Integrations marked Supported below work out of the box with NVIDIA Run:ai. For integrations marked Community Support, our Customer Success team has prior experience assisting customers with setup; a sketch of the scheduling pattern most of these integrations share appears after the table. In many cases, the NVIDIA Enterprise Support Portal includes additional reference documentation, provided on an as-is basis.
| Tool | Category | Level of support | Notes |
| --- | --- | --- | --- |
| Apache Airflow | Orchestration | Community Support | Airflow workflows can be scheduled with the NVIDIA Run:ai Scheduler. Sample code: How to integrate NVIDIA Run:ai with Apache Airflow. |
| Argo Workflows | Orchestration | Community Support | Argo workflows can be scheduled with the NVIDIA Run:ai Scheduler. Sample code: How to integrate NVIDIA Run:ai with Argo Workflows. |
| ClearML | Experiment tracking | Community Support | ClearML workloads can be scheduled with the NVIDIA Run:ai Scheduler. |
| Docker Registry | Repositories | Supported | NVIDIA Run:ai allows using a Docker registry as a Credential asset. |
| Hugging Face | Repositories | Supported | NVIDIA Run:ai provides an out-of-the-box integration with Hugging Face. |
| JupyterHub | Development | Community Support | NVIDIA Run:ai workloads can be submitted via JupyterHub. |
| Jupyter Notebook | Development | Supported | NVIDIA Run:ai provides integrated support for Jupyter Notebooks. See the Jupyter Notebook quick start example. |
| Karpenter | Cost Optimization | Supported | NVIDIA Run:ai provides out-of-the-box support for Karpenter to save cloud costs. Integration notes with Karpenter can be found here. |
| MPI | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting MPI workloads via API, CLI, or UI. See Distributed training for more details. |
| Kubeflow Notebooks | Development | Community Support | Kubeflow notebooks can be launched with the NVIDIA Run:ai Scheduler. Sample code: How to integrate NVIDIA Run:ai with Kubeflow. |
| Kubeflow Pipelines | Orchestration | Community Support | Kubeflow pipelines can be scheduled with the NVIDIA Run:ai Scheduler. Sample code: How to integrate NVIDIA Run:ai with Kubeflow. |
| MLflow | Model Serving | Community Support | MLflow can be used together with the NVIDIA Run:ai Scheduler. |
| PyCharm | Development | Supported | Containers created by NVIDIA Run:ai can be accessed via PyCharm. |
| PyTorch | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting PyTorch workloads via API, CLI, or UI. See Distributed training for more details. |
| Ray | Training, inference, data processing | Community Support | Ray jobs can be scheduled with the NVIDIA Run:ai Scheduler. Sample code: How to integrate NVIDIA Run:ai with Ray. |
| Seldon Core | Orchestration | Community Support | Seldon Core workloads can be scheduled with the NVIDIA Run:ai Scheduler. |
| Spark | Orchestration | Community Support | Spark workflows can be scheduled with the NVIDIA Run:ai Scheduler. |
| TensorBoard | Experiment tracking | Supported | NVIDIA Run:ai comes with a preset TensorBoard Environment asset. |
| TensorFlow | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting TensorFlow workloads via API, CLI, or UI. See Distributed training for more details. |
| Triton | Orchestration | Supported | Used via its Docker base image. |
| Visual Studio Code | Development | Supported | Containers created by NVIDIA Run:ai can be accessed via Visual Studio Code. Visual Studio Code web can be launched automatically from the NVIDIA Run:ai console. |
| Weights & Biases | Experiment tracking | Community Support | W&B workloads can be scheduled with the NVIDIA Run:ai Scheduler. Sample code: How to integrate with Weights and Biases. |
| XGBoost | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting XGBoost workloads via API, CLI, or UI. See Distributed training for more details. |
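Most of the Community Support rows above share one pattern: the third-party tool launches pods, and those pods are handed to the NVIDIA Run:ai Scheduler. The sketch below illustrates that pattern for Apache Airflow. It is a minimal, hedged example rather than the sample code referenced in the table: it assumes Airflow 2.x with a recent apache-airflow-providers-cncf-kubernetes provider (which exposes the `schedulername` parameter), and both the namespace `runai-team-a` and the scheduler name `runai-scheduler` are placeholders to verify against your NVIDIA Run:ai installation.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(dag_id="runai_training", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    train = KubernetesPodOperator(
        task_id="train",
        name="train",
        namespace="runai-team-a",         # assumed Run:ai project namespace
        image="pytorch/pytorch:latest",
        cmds=["python", "-c", "print('scheduled by Run:ai')"],
        schedulername="runai-scheduler",  # hand the pod to the Run:ai Scheduler
    )
```

The same idea, setting the pod's scheduler name together with your cluster's Run:ai project namespace or labels, carries over to Argo Workflows, Kubeflow, Ray, and Spark.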
Kubernetes Workloads Integration
Kubernetes has several built-in resources that encapsulate running Pods. These are called Kubernetes Workloads and should not be confused with NVIDIA Run:ai workloads.
Examples of such resources are a Deployment that manages a stateless application, or a Job that runs tasks to completion.
An NVIDIA Run:ai workload encapsulates all the resources needed to run, and creates and deletes them together. Since NVIDIA Run:ai is an open platform, it allows the scheduling of any Kubernetes Workload.
For more information, see Kubernetes Workloads Integration.
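To make this concrete, here is a minimal sketch that schedules a plain Kubernetes Job through the NVIDIA Run:ai Scheduler using the official kubernetes Python client. The scheduler name `runai-scheduler` and the project namespace `runai-team-a` are assumptions; check both against your cluster.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="demo-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                scheduler_name="runai-scheduler",  # the only Run:ai-specific field
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="main",
                        image="ubuntu:22.04",
                        command=["echo", "scheduled by Run:ai"],
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="runai-team-a", body=job)
```

Everything except `scheduler_name` is a standard Kubernetes Job, which is exactly what makes the platform open to any Kubernetes Workload.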