Integrations

The table below summarizes NVIDIA Run:ai's integration capabilities with various third-party products.

Integration support

Support for integrations varies. Where an integration is marked Supported below, it works out of the box with NVIDIA Run:ai. For integrations marked Community Support, the NVIDIA Run:ai customer success team has prior experience integrating with the third-party software, and the community portal often contains additional reference documentation, provided on an as-is basis.

The NVIDIA Run:ai community portal is password-protected; access is provided to customers and partners.

| Tool | Category | NVIDIA Run:ai support details | Additional Information |
|---|---|---|---|
| Triton | Orchestration | Supported | Usage via Docker base image. |
| Spark | Orchestration | Community Support | It is possible to schedule Spark workflows with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to Run Spark job with NVIDIA Run:ai. |
| Kubeflow Pipelines | Orchestration | Community Support | It is possible to schedule Kubeflow Pipelines with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with Kubeflow. |
| Apache Airflow | Orchestration | Community Support | It is possible to schedule Airflow workflows with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with Apache Airflow. |
| Argo Workflows | Orchestration | Community Support | It is possible to schedule Argo workflows with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with Argo Workflows. |
| Seldon Core | Orchestration | Community Support | It is possible to schedule Seldon Core workloads with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer success. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with Seldon Core. |
| Jupyter Notebook | Development | Supported | NVIDIA Run:ai provides integrated support with Jupyter Notebooks. See the Jupyter Notebook quick start example. |
| JupyterHub | Development | Community Support | It is possible to submit NVIDIA Run:ai workloads via JupyterHub. For more information, please contact NVIDIA Run:ai customer support. |
| PyCharm | Development | Supported | Containers created by NVIDIA Run:ai can be accessed via PyCharm. |
| Visual Studio Code | Development | Supported | Containers created by NVIDIA Run:ai can be accessed via Visual Studio Code. You can launch Visual Studio Code Web automatically from the NVIDIA Run:ai console. |
| Kubeflow Notebooks | Development | Community Support | It is possible to launch a Kubeflow notebook with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with Kubeflow. |
| Ray | Training, inference, data processing | Community Support | It is possible to schedule Ray jobs with the NVIDIA Run:ai Scheduler. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to Integrate NVIDIA Run:ai with Ray. |
| TensorBoard | Experiment tracking | Supported | NVIDIA Run:ai comes with a preset TensorBoard Environment asset. |
| Weights & Biases | Experiment tracking | Community Support | It is possible to schedule W&B workloads with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer success. |
| ClearML | Experiment tracking | Community Support | It is possible to schedule ClearML workloads with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer success. |
| MLflow | Model Serving | Community Support | It is possible to use MLflow together with the NVIDIA Run:ai Scheduler. For details, please contact NVIDIA Run:ai customer support. Sample code can be found in the NVIDIA Run:ai customer success community portal: How to integrate NVIDIA Run:ai with MLFlow. |
| Hugging Face | Repositories | Supported | NVIDIA Run:ai provides an out-of-the-box integration with Hugging Face. |
| Docker Registry | Repositories | Supported | NVIDIA Run:ai allows using a Docker registry as a Credential asset. |
| S3 | Storage | Supported | NVIDIA Run:ai communicates with S3 by defining a data source asset. |
| GitHub | Storage | Supported | NVIDIA Run:ai communicates with GitHub by defining it as a data source asset. |
| TensorFlow | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting TensorFlow workloads via API, CLI, or UI. See Distributed training for more details. |
| PyTorch | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting PyTorch workloads via API, CLI, or UI. See Distributed training for more details. |
| MPI | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting MPI workloads via API, CLI, or UI. See Distributed training for more details. |
| XGBoost | Training | Supported | NVIDIA Run:ai provides out-of-the-box support for submitting XGBoost workloads via API, CLI, or UI. See Distributed training for more details. |
| Karpenter | Cost Optimization | Supported | NVIDIA Run:ai provides out-of-the-box support for Karpenter to save cloud costs. Integration notes with Karpenter can be found here. |
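Most of the Community Support rows above share one underlying mechanism: the third-party tool creates Kubernetes pods, and those pods are handed to the NVIDIA Run:ai Scheduler. As a minimal illustration of that pattern (not the sample code from the community portal), an Argo Workflow can be routed to the scheduler roughly as shown below. The scheduler name `runai-scheduler` and the `runai/queue` project label are assumed conventions here and may differ by NVIDIA Run:ai version, so verify them against your deployment.

```yaml
# Illustrative sketch only: an Argo Workflow whose pods are scheduled by
# NVIDIA Run:ai. The scheduler name and project/queue label are assumptions;
# check the community portal article for a version-accurate sample.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: runai-argo-demo-
spec:
  entrypoint: train
  schedulerName: runai-scheduler   # route all workflow pods to the Run:ai scheduler
  podMetadata:
    labels:
      runai/queue: team-a          # assumed Run:ai project/queue label
  templates:
    - name: train
      container:
        image: python:3.11-slim
        command: [python, -c, "print('hello from Argo on NVIDIA Run:ai')"]
```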

Kubernetes workloads integration

Kubernetes has several built-in resources that encapsulate running Pods. These are called Kubernetes Workloads and should not be confused with NVIDIA Run:ai workloads.

Examples of such resources include a Deployment, which manages a stateless application, and a Job, which runs tasks to completion.

An NVIDIA Run:ai workload encapsulates all the resources needed to run and creates and deletes them together. Since NVIDIA Run:ai is an open platform, it also allows the scheduling of any Kubernetes Workload, as sketched in the example below.
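As a minimal sketch of this, the manifest below points a standard Kubernetes Job at the NVIDIA Run:ai scheduler. The scheduler name `runai-scheduler` and the `runai/queue` project label follow common NVIDIA Run:ai conventions but are assumptions in this sketch; confirm the exact values for your cluster version in the Kubernetes Workloads Integration documentation.

```yaml
# Minimal sketch: a plain Kubernetes Job scheduled by NVIDIA Run:ai rather
# than the default kube-scheduler. The scheduler name and the project/queue
# label are assumptions; confirm them for your NVIDIA Run:ai version.
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-training-job
spec:
  template:
    metadata:
      labels:
        runai/queue: team-a            # assumed Run:ai project/queue label
    spec:
      schedulerName: runai-scheduler   # hand the pod to the Run:ai scheduler
      restartPolicy: Never
      containers:
        - name: trainer
          image: nvcr.io/nvidia/pytorch:24.05-py3  # any training image
          command: ["python", "-c", "print('scheduled by NVIDIA Run:ai')"]
          resources:
            limits:
              nvidia.com/gpu: 1        # GPU request arbitrated by Run:ai
```

Once applied with `kubectl apply -f`, the Job's pods are queued and prioritized by the NVIDIA Run:ai Scheduler like any other workload in the assumed `team-a` project.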

For more information, see Kubernetes Workloads Integration.
