runai inference distributed describe

Describe a distributed inference workload.

runai inference distributed describe [WORKLOAD_NAME] [flags]

Examples

# Describe a workload in the default project
runai inference distributed describe <workload-name>

# Describe a workload in a specific project
runai inference distributed describe <workload-name> -p <project-name>

# Describe a workload by UUID
runai inference distributed describe --uuid=<workload-id>

# Describe a workload with a specific output format
runai inference distributed describe <workload-name> -o json

# Describe a workload, showing only specific sections
runai inference distributed describe <workload-name> --general --compute --pods --events --networks

# Describe a workload with container details and custom limits
runai inference distributed describe <workload-name> --containers --pod-limit 20 --event-limit 100

Options

Options inherited from parent commands

SEE ALSO

  • runai inference distributed - Runs multiple coordinated inference processes across multiple nodes. Required for models too large to run on a single node.
