# runai inference nim

\[Experimental] Runs NVIDIA NIM (NVIDIA Inference Microservices) workloads, optimized for deploying foundation models.

## Options

```
  -h, --help   help for nim
```

## Options inherited from parent commands

```
      --config-file string   config file name; can be set by environment variable RUNAI_CLI_CONFIG_FILE (default "config.json")
      --config-path string   config path; can be set by environment variable RUNAI_CLI_CONFIG_PATH
  -d, --debug                enable debug mode
  -q, --quiet                enable quiet mode, suppress all output except error messages
      --verbose              enable verbose mode
```
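
The configuration flags above can also be supplied through the environment variables named in their help text. A minimal sketch, assuming an illustrative config directory (the path shown is an assumption, not a documented default):

```
# Illustrative values; the config directory here is an assumption
export RUNAI_CLI_CONFIG_PATH="$HOME/.runai"
export RUNAI_CLI_CONFIG_FILE="config.json"

# Equivalent to passing --config-path and --config-file explicitly
runai inference nim --help
```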

## SEE ALSO

* [runai inference](/self-hosted/reference/cli/runai/runai_inference.md) - inference management
* [runai inference nim bash](/self-hosted/reference/cli/runai/runai-inference-nim-bash.md) - open a bash shell in a nim inference workload
* [runai inference nim delete](/self-hosted/reference/cli/runai/runai-inference-nim-delete.md) - delete a nim inference workload
* [runai inference nim describe](/self-hosted/reference/cli/runai/runai-inference-nim-describe.md) - describe a nim inference workload
* [runai inference nim exec](/self-hosted/reference/cli/runai/runai-inference-nim-exec.md) - execute a command in a nim inference workload
* [runai inference nim list](/self-hosted/reference/cli/runai/runai-inference-nim-list.md) - list nim inference workloads
* [runai inference nim logs](/self-hosted/reference/cli/runai/runai-inference-nim-logs.md) - view logs of a nim inference workload
* [runai inference nim port-forward](/self-hosted/reference/cli/runai/runai-inference-nim-port-forward.md) - forward one or more local ports to a nim inference workload
* [runai inference nim scale](/self-hosted/reference/cli/runai/runai-inference-nim-scale.md) - scale a nim inference workload
* [runai inference nim submit](/self-hosted/reference/cli/runai/runai-inference-nim-submit.md) - submit a nim inference workload
* [runai inference nim update](/self-hosted/reference/cli/runai/runai-inference-nim-update.md) - update a nim inference workload
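
As a quick orientation, a hypothetical session might chain several of these subcommands. The positional workload name (`my-nim`) is a placeholder assumed for illustration; see each linked page for the exact syntax and flags:

```
# Hypothetical workflow; 'my-nim' is a placeholder workload name
runai inference nim list
runai inference nim describe my-nim
runai inference nim logs my-nim
runai inference nim delete my-nim
```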


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-nim.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.
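
For example, such a query can be issued with `curl`; the question below is illustrative, and the exact response format is not specified on this page:

```
# -G sends the --data-urlencode payload as a URL query string on a GET request
curl -G "https://run-ai-docs.nvidia.com/self-hosted/reference/cli/runai/runai-inference-nim.md" \
  --data-urlencode "ask=Which subcommands does runai inference nim support?"
```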

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
