# runai inference distributed

Runs coordinated inference processes across multiple nodes. This is required for models too large to fit on a single node.

## Options

```
  -h, --help   help for distributed
```

## Options inherited from parent commands

```
      --config-file string   config file name; can be set by environment variable RUNAI_CLI_CONFIG_FILE (default "config.json")
      --config-path string   config path; can be set by environment variable RUNAI_CLI_CONFIG_PATH
  -d, --debug                enable debug mode
  -q, --quiet                enable quiet mode, suppress all output except error messages
      --verbose              enable verbose mode
```
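A typical workflow with this command group might look like the sketch below. Only subcommands listed under SEE ALSO are used; the workload name `llama-70b` is hypothetical, and the positional-argument syntax is an assumption (see each subcommand's reference page for its exact options):

```shell
# List existing distributed inference workloads
runai inference distributed list

# Inspect a specific workload ("llama-70b" is a hypothetical name)
runai inference distributed describe llama-70b

# Stream logs from that workload
runai inference distributed logs llama-70b
```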

## SEE ALSO

* [runai inference](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai_inference) - inference management
* [runai inference distributed bash](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-bash) - open a bash shell in a distributed inference workload
* [runai inference distributed delete](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-delete) - delete a distributed inference workload
* [runai inference distributed describe](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-describe) - describe a distributed inference workload
* [runai inference distributed exec](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-exec) - execute a command in a distributed inference workload
* [runai inference distributed list](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-list) - list distributed inference workloads
* [runai inference distributed logs](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-logs) - view logs of a distributed inference workload
* [runai inference distributed port-forward](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-port-forward) - forward one or more local ports to a distributed inference workload
* [runai inference distributed scale](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-scale) - scale a distributed inference workload
* [runai inference distributed submit](https://run-ai-docs.nvidia.com/saas/reference/cli/runai/runai-inference-distributed-submit) - submit a distributed inference workload
