runai inference describe

Describe an inference workload.

runai inference describe [WORKLOAD_NAME] [flags]

Examples

# Describe a workload in the default project
runai inference describe <inference-name>

# Describe a workload in a specific project
runai inference describe <inference-name> -p <project_name>

# Describe a workload by UUID
runai inference describe --uuid=<inference_uuid>

# Describe a workload with a specific output format
runai inference describe <inference-name> -o json

# Describe a workload, showing only specific sections
runai inference describe <inference-name> --general --compute --pods --events --networks

# Describe a workload with container details and custom limits
runai inference describe <inference-name> --containers --pod-limit 20 --event-limit 100
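
The -o json output shown above can also be used for scripting. The example below is a minimal sketch, assuming jq is installed and that the JSON output exposes a top-level name field; the exact field path is an assumption and should be verified against the actual output in your environment.

# Hedged sketch: extract a single field from the JSON output (the .name path is an assumption)
runai inference describe <inference-name> -p <project_name> -o json | jq '.name'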

Options

Options inherited from parent commands

SEE ALSO
