runai inference exec
Execute a command in an inference workload
runai inference exec [WORKLOAD_NAME] [flags]
Examples
# Execute bash in the workload's main worker
runai inference exec inference-01 --tty --stdin -- /bin/bash
# Execute ls command in the workload's main worker
runai inference exec inference-01 -- ls
# Execute a command in a specific worker of the workload
runai inference exec inference-01 --pod inference-01-worker-1 -- nvidia-smi
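Flags can be combined in a single call. The example below is a sketch: the project name team-a and container name worker are placeholders, not defaults.
# Execute an interactive shell in a specific container within a specific project
runai inference exec inference-01 -p team-a -c worker --tty --stdin -- /bin/sh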
Options
-c, --container string Container name for log extraction
-h, --help help for exec
--pod string Workload pod ID for log extraction, default: master (0-0) if exists, else first worker by id
--pod-running-timeout duration Pod check for running state timeout.
-p, --project string Specify the project to which the command applies. By default, commands apply to the default project. To change the default project, use 'runai config project <project name>'.
-i, --stdin Pass stdin to the container
-t, --tty Stdin is a TTY
--wait-timeout duration Timeout for waiting for workload to be ready for log streaming
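The duration flags accept duration strings such as 30s or 2m (standard Go-style durations); the values below are illustrative, not defaults.
# Wait up to two minutes for the pod to reach the running state before executing
runai inference exec inference-01 --pod-running-timeout 2m --wait-timeout 2m -- nvidia-smi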
Options inherited from parent commands
SEE ALSO
runai inference - inference management