# runai inference standard

Runs a single inference process on one node, making it suitable for smaller models or simpler inference tasks.
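A typical session combines the subcommands listed under SEE ALSO below. The sketch uses only subcommands documented on this page; `my-workload` is a hypothetical workload name, and any flags accepted by `submit` are covered on its own reference page.

```shell
# List existing standard inference workloads
runai inference standard list

# Inspect one workload in detail (my-workload is a placeholder name)
runai inference standard describe my-workload

# Stream its logs
runai inference standard logs my-workload

# Remove it when no longer needed
runai inference standard delete my-workload
```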

#### Options

```
  -h, --help   help for standard
```

#### Options inherited from parent commands

```
      --config-file string   config file name; can be set by environment variable RUNAI_CLI_CONFIG_FILE (default "config.json")
      --config-path string   config path; can be set by environment variable RUNAI_CLI_CONFIG_PATH
  -d, --debug                enable debug mode
  -q, --quiet                enable quiet mode, suppress all output except error messages
      --verbose              enable verbose mode
```

#### SEE ALSO

* [runai inference](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai_inference) - inference management
* [runai inference standard bash](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-bash) - open a bash shell in an inference workload
* [runai inference standard delete](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-delete) - delete an inference workload
* [runai inference standard describe](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-describe) - describe an inference workload
* [runai inference standard exec](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-exec) - execute a command in an inference workload
* [runai inference standard list](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-list) - list inference workloads
* [runai inference standard logs](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-logs) - view logs of an inference workload
* [runai inference standard port-forward](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-port-forward) - forward one or more local ports to an inference workload
* [runai inference standard submit](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-submit) - submit an inference workload
* [runai inference standard update](https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-standard-update) - update an inference workload
