Policy YAML reference
A workload policy is an end-to-end solution for AI managers and administrators to control and simplify how workloads are submitted, setting best practices, enforcing limitations, and standardizing processes for AI projects within their organization.
This article explains the policy YAML fields and the possible rules and defaults that can be set for each field.
Policy YAML fields - reference table
The policy fields are structured in a similar format to the workload API fields. The following tables are a structured guide designed to help you understand and configure policies in YAML format. They provide the fields, descriptions, defaults, and rules for each workload type.
Click the link to view the value type of each field.
args
When set, contains the arguments sent along with the command. These override the entry point of the image in the created workload
Workspace
Training
command
A command to serve as the entry point of the container running the workspace
Workspace
Training
createHomeDir
Instructs the system to create a temporary home directory for the user within the container. Data stored in this directory is not saved when the container exits. When the runAsUser flag is set to true, this flag defaults to true as well
Workspace
Training
environmentVariables
Set of environmentVariables to populate the container running the workspace
Workspace
Training
image
Specifies the image to use when creating the container running the workload
Workspace
Training
imagePullPolicy
Specifies the pull policy of the image when starting a container running the created workload. Options are: always, ifNotPresent, or never
Workspace
Training
workingDir
Container’s working directory. If not specified, the container runtime default is used, which might be configured in the container image
Workspace
Training
nodePools
A prioritized list of node pools for the scheduler to run the workspace on. The scheduler always tries to use the first node pool before moving to the next one when the first is not available.
Workspace
Training
annotations
Set of annotations to populate into the container running the workspace
Workspace
Training
terminateAfterPreemption
Indicates whether the job should be terminated, by the system, after it has been preempted
Workspace
Training
autoDeletionTimeAfterCompletionSeconds
Specifies the duration after which a finished workload (Completed or Failed) is automatically deleted. If this field is set to zero, the workload becomes eligible to be deleted immediately after it finishes.
Workspace
Training
backoffLimit
Specifies the number of retries before marking a workload as failed
Workspace
Training
cleanPodPolicy
Specifies which pods will be deleted when the workload reaches a terminal state (completed/failed). The policy can be one of the following values:
Running - Only pods still running when a job completes (for example, parameter servers) will be deleted immediately. Completed pods will not be deleted so that the logs will be preserved. (Default)
All - All pods, including completed pods, will be deleted immediately when the job finishes.
None - No pods will be deleted when the job completes. Pods that keep running continue to consume GPU, CPU, and memory over time. It is recommended to set None only for debugging and obtaining logs from running pods.
Distributed
completions
Used with Hyperparameter Optimization. Specifies the number of successful pods the job should reach to be completed. The job is marked as successful once the specified number of pods has succeeded.
Workspace
Training
parallelism
Used with Hyperparameter Optimization. Specifies the maximum desired number of pods the workload should run at any given time.
Workspace
Training
exposeUrls
Specifies a set of exposed URLs (for example, ingress) from the container running the created workload.
Workspace
Training
largeShmRequest
Specifies a large /dev/shm device to mount into a container running the created workload. SHM is a shared file system mounted on RAM.
Workspace
Training
podAffinitySchedulingRule
Indicates whether to use the Pod Affinity rule as the "hard" (required) or the "soft" (preferred) option. This field can be specified only if PodAffinity is set to true.
Workspace
Training
podAffinityTopology
Specifies the Pod Affinity Topology to be used for scheduling the job. This field can be specified only if PodAffinity is set to true.
Workspace
Training
ports
Specifies a set of ports exposed from the container running the created workload. More information in Ports fields below.
Workspace
Training
probes
Specifies the ReadinessProbe to use to determine if the container is ready to accept traffic. More information in Probes fields below
-
Workspace
Training
tolerations
Toleration rules which apply to the pods running the workload. Toleration rules guide (but do not require) the system to which node each pod can be scheduled to or evicted from, based on matching between those rules and the set of taints defined for each Kubernetes node.
Workspace
Training
priorityClass
Priority class of the workload. The values for workspace are build (default) or interactive-preemptible. For training only, use train. Enum: "build", "train", "interactive-preemptible"
Workspace
storage
Contains all the fields related to storage configurations. More information in Storage fields below.
-
Workspace
Training
security
Contains all the fields related to security configurations. More information in Security fields below.
-
Workspace
Training
compute
Contains all the fields related to compute configurations. More information in Compute fields below.
-
Workspace
Training
Ports fields
serviceType
Specifies the default service exposure method for ports. The default is used for ports that do not specify a service type. Options are: LoadBalancer, NodePort, or ClusterIP. For more information see the External Access to Containers guide.
Workspace
Training
external
The external port which allows a connection to the container port. If not specified, the port is auto-generated by the system.
Workspace
Training
Probes fields
readiness
Specifies the Readiness Probe to use to determine if the container is ready to accept traffic.
-
Workspace
Training
Readiness field details
Description: Specifies the Readiness Probe to use to determine if the container is ready to accept traffic
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
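An illustrative sketch of a readiness probe in a workload submission; the nesting under spec and probes is an assumption, and the values are examples only:
spec:
  probes:
    readiness:
      initialDelaySeconds: 30   # wait 30 seconds after the container starts before probing
      successThreshold: 1       # one consecutive success marks the probe as successful after a failure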
initialDelaySeconds
Number of seconds after the container has started before liveness or readiness probes are initiated.
successThreshold
Minimum consecutive successes for the probe to be considered successful after having failed
Security fields
uidGidSource
Indicates the way to determine the user and group ids of the container. The options are:
fromTheImage - user and group IDs are determined by the docker image that the container runs. This is the default option.
custom - user and group IDs can be specified in the environment asset and/or the workspace creation request.
idpToken - user and group IDs are determined according to the identity provider (idp) access token. This option is intended for internal use of the environment UI form. For more information, see Non-root containers.
Workspace
Training
capabilities
The capabilities field allows adding a set of Unix capabilities to the container running the workload. Capabilities are distinct Linux privileges, traditionally associated with the superuser, which can be independently enabled and disabled
Workspace
Training
seccompProfileType
Indicates which kind of seccomp profile is applied to the container. The options are:
RuntimeDefault - the container runtime default profile should be used
Unconfined - no profile should be applied
Workspace
Training
readOnlyRootFilesystem
If true, mounts the container's root filesystem as read-only.
Workspace
Training
runAsUid
Specifies the Unix user id with which the container running the created workload should run.
Workspace
Training
supplementalGroups
Comma separated list of groups that the user running the container belongs to, in addition to the group indicated by runAsGid.
Workspace
Training
allowPrivilegeEscalation
Allows the container running the workload and all launched processes to gain additional privileges after the workload starts
Workspace
Training
Compute fields
cpuCoreRequest
CPU units to allocate for the created workload (0.5, 1, etc.). The workload receives at least this amount of CPU. Note that the workload is not scheduled unless the system can guarantee this amount of CPUs to the workload.
Workspace
Training
cpuCoreLimit
Limitations on the number of CPUs consumed by the workload (0.5, 1, etc.). The system guarantees that this workload is not able to consume more than this amount of CPUs.
Workspace
Training
cpuMemoryRequest
The amount of CPU memory to allocate for this workload (1G, 20M, etc.). The workload receives at least this amount of memory. Note that the workload is not scheduled unless the system can guarantee this amount of memory to the workload
Workspace
Training
cpuMemoryLimit
Limitations on the CPU memory to allocate for this workload (1G, 20M, etc.). The system guarantees that this workload is not able to consume more than this amount of memory. The workload receives an error when trying to allocate more memory than this limit.
Workspace
Training
largeShmRequest
A large /dev/shm device to mount into a container running the created workload (shm is a shared file system mounted on RAM).
Workspace
Training
gpuRequestType
Sets the unit type for GPU resource requests to either portion, memory, or migProfile. The request type can be stated only if gpuDeviceRequest = 1.
Workspace
Training
migProfile (Deprecated)
Specifies the memory profile to be used for workloads running on NVIDIA Multi-Instance GPU (MIG) technology.
Workspace
Training
gpuPortionRequest
Specifies the fraction of GPU to be allocated to the workload, between 0 and 1. For backward compatibility, it also supports the number of gpuDevices larger than 1, currently provided using the gpuDevices field.
Workspace
Training
gpuDeviceRequest
Specifies the number of GPUs to allocate for the created workload. Only if gpuDeviceRequest = 1
, the gpuRequestType can be defined.
Workspace
Training
gpuPortionLimit
When a fraction of a GPU is requested, the GPU limit specifies the portion limit to allocate to the workload. The range of the value is from 0 to 1.
Workspace
Training
gpuMemoryRequest
Specifies GPU memory to allocate for the created workload. The workload receives this amount of memory. Note that the workload is not scheduled unless the system can guarantee this amount of GPU memory to the workload.
Workspace
Training
gpuMemoryLimit
Specifies a limit on the GPU memory to allocate for this workload. Should be no less than gpuMemoryRequest.
Workspace
Training
extendedResources
Specifies values for extended resources. Extended resources are third-party devices (such as high-performance NICs, FPGAs, or InfiniBand adapters) that you want to allocate to your Job.
Workspace
Training
Storage fields
dataVolume
Set of data volumes to use in the workload. Each data volume is mapped to a file-system mount point within the container running the workload.
Workspace
Training
hostPath
Maps a folder to a file-system mount point within the container running the workload.
Workspace
Training
pvc
Specifies persistent volume claims to mount into a container running the created workload.
Workspace
Training
configMapVolumes
Specifies ConfigMaps to mount as volumes into a container running the created workload.
Workspace
Training
secretVolume
Set of secret volumes to use in the workload. A secret volume maps a secret resource in the cluster to a file-system mount point within the container running the workload.
Workspace
Training
hostPath field details
Description: Maps a folder to a file system mount point within the container running the workload
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
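An illustrative sketch; the nesting under spec and the name and path attributes are assumptions, while mountPath and mountPropagation are described below:
spec:
  storage:
    hostPath:
      - name: shared-data          # assumed attribute: item name (key)
        path: /data/shared         # assumed attribute: folder on the host
        mountPath: /mnt/shared     # mount point inside the container
        mountPropagation: None     # or HostToContainer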
mountPath
The path that the host volume is mounted to when in use.
mountPropagation
Share this volume mount with other containers. If set to HostToContainer, this volume mount receives all subsequent mounts that are mounted to this volume or any of its subdirectories. In case of multiple hostPath entries, this field should have the same value for all of them. Enum: "None", "HostToContainer"
Git field details
Description: Details of the git repository and items mapped to it
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
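An illustrative sketch; the repository URL is a placeholder and the path attribute is an assumption, while repository and username are described below:
spec:
  storage:
    git:
      - repository: https://github.com/example/repo.git   # remote git repository (placeholder URL)
        path: /workspace/repo                             # assumed attribute: mount location in the container
        username: myuser                                  # plain-text username, or a secret key if secretName is set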
repository
URL to a remote git repository. The content of this repository is mapped to the container running the workload
username
If secretName is provided, this field should contain the key, within the provided Kubernetes secret, which holds the value of your git username. Otherwise, this field should specify your git username in plain text (example: myuser).
PVC field details
Description: Specifies persistent volume claims to mount into a container running the created workload
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
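An illustrative sketch; the claimName and path attributes and the grouping under accessModes are assumptions, while ephemeral, readWriteOnce, and storageClass are described below:
spec:
  storage:
    pvc:
      - claimName: my-claim        # assumed attribute: name of the claim
        path: /mnt/data            # assumed attribute: mount point inside the container
        ephemeral: true            # the PVC is deleted when the workspace is stopped
        storageClass: standard     # storage class to associate with the PVC
        accessModes:
          readWriteOnce: true      # mountable in read/write mode by exactly one host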
ephemeral
Use true to set PVC to ephemeral. If set to true, the PVC is deleted when the workspace is stopped.
readWriteOnce
Requests a claim that can be mounted in read/write mode by exactly one host. If no access mode is specified, the default is readWriteOnce.
storageClass
Storage class name to associate with the PVC. This parameter may be omitted if there is a single storage class in the system, or you are using the default storage class. Further details at Kubernetes storage classes.
NFS field details
Description: Specifies NFS volume to mount into the container running the workload
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
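An illustrative sketch; all attribute names shown here (server, path, mountPath) are assumptions, as this reference does not list the NFS attributes:
spec:
  storage:
    nfs:
      - server: nfs.example.local   # assumed attribute: NFS server address
        path: /exports/data         # assumed attribute: exported path on the server
        mountPath: /mnt/nfs         # assumed attribute: mount point inside the container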
S3 field details
Description: Specifies S3 buckets to mount into the container running the workload
Supported NVIDIA Run:ai workload types: Workspace, Training
Value type: itemized
Example workload snippet:
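An illustrative sketch; all attribute names shown here (bucket, path) are assumptions, as this reference does not list the S3 attributes:
spec:
  storage:
    s3:
      - bucket: my-bucket     # assumed attribute: name of the S3 bucket
        path: /mnt/s3         # assumed attribute: mount point inside the container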
Value types
Each field has a specific value type. The following value types are supported.
Boolean
A binary value that can be either True or False.
Supported rules: canEdit, required. Example value: true/false
String
A sequence of characters used to represent text. It can include letters, numbers, symbols, and spaces.
Supported rules: canEdit, required, options. Example value: abc
Itemized
An ordered collection of items (objects), which can be of different types (all items in the list are of the same type). For further information see the Itemized chapter below the table.
Supported rules: canAdd, locked. Example value: see below
Integer
A whole number without a fractional component.
Supported rules: canEdit, required, min, max, step, defaultFrom. Example value: 100
Number
A number capable of having non-integer values.
Supported rules: canEdit, required, min, defaultFrom. Example value: 10.3
Quantity
Holds a string composed of a number and a unit representing a quantity.
Supported rules: canEdit, required, min, max, defaultFrom. Example value: 5M
Array
A set of values that are treated as one, as opposed to Itemized, in which each item can be referenced separately.
Supported rules: canEdit, required. Example value: node-a node-b node-c
Itemized
Workload fields of type itemized have multiple instances, however in comparison to objects, each can be referenced by a key field. The key field is defined for each field.
Consider the following workload spec:
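For illustration (the nesting under spec and compute is an assumption; the resource names are examples only):
spec:
  compute:
    extendedResources:
      - resource: added/cpu     # the key attribute
        quantity: 3
      - resource: added/memory
        quantity: 5M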
In this example, extendedResources has two instances, each with two attributes: resource (the key attribute) and quantity.
In policy, the defaults and rules for itemized fields have two sub sections:
Instances: default items to be added to the policy or rules which apply to an instance as a whole.
Attributes: defaults for attributes within an item or rules which apply to attributes within each item.
Consider the following example:
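A sketch of such a policy; the exact nesting is illustrative, but it follows the instances and attributes sub-sections described above:
defaults:
  compute:
    extendedResources:
      instances:                  # default items added to the workload
        - resource: default/cpu
          quantity: 5
        - resource: default/memory
          quantity: 1M
      attributes:                 # defaults for attributes within each item
        quantity: 3
rules:
  compute:
    extendedResources:
      instances:
        locked:                   # instances the submission request cannot modify or exclude
          - default/cpu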
Assume the following workload submission is requested:
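For illustration (the nesting under spec is an assumption; exclude: true is described in the note below):
spec:
  compute:
    extendedResources:
      - resource: default/memory
        exclude: true             # excludes this policy-default instance from the workload
      - resource: added/cpu       # no quantity given; the attributes default (3) applies
      - resource: added/memory
        quantity: 5M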
The effective policy for the above-mentioned workload has the following extendedResources instances:
default/cpu - added by the policy defaults, with quantity 5 (the default of this instance in the policy defaults section)
added/cpu - added by the submission request, with quantity 3 (the default of the quantity attribute from the attributes section)
added/memory - added by the submission request, with quantity 5M (taken from the submission request)
Note
The default/memory instance is not populated to the workload because it has been excluded from the workload using exclude: true.
A workload submission request cannot exclude the default/cpu resource, as this key is included in the locked rules under the instances section.
Rule types
canAdd
Whether the submission request can add items to an itemized field other than those listed in the policy defaults for this field.
storage:
  hostPath:
    instances:
      canAdd: false
locked
Set of items that the submission request cannot modify or exclude. In this example, a workload policy default is given to HOME and USER, which the submission request cannot modify or exclude from the workload.
storage:
  hostPath:
    instances:
      locked:
        - HOME
        - USER
canEdit
Whether the submission request can modify the policy default for this field. In this example, it is assumed that the policy has a default for imagePullPolicy. As canEdit is set to false, submission requests are not able to alter this default.
imagePullPolicy:
  canEdit: false
required
When set to true, the workload must have a value for this field. The value can be obtained from the policy defaults. If no value is specified in the policy defaults, a value must be specified for this field in the submission request.
image:
  required: true
step
The allowed gap between values for this field. In this example the allowed values are: 1, 3, 5, 7
compute:
  cpuCoreRequest:
    min: 1
    max: 7
    step: 2
Policy Spec Sections
For each field of a specific policy, you can specify both rules and defaults. A policy spec consists of the following sections:
Rules
Defaults
Imposed Assets
Rules
Rules set up constraints on workload policy fields. For example, consider the following policy:
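A sketch of such a policy (the field paths follow the reference table above; the exact nesting is illustrative):
rules:
  compute:
    gpuDeviceRequest:
      max: 8
  security:
    runAsUid:
      min: 500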
Such a policy restricts the maximum value of gpuDeviceRequest to 8, and the minimum value of runAsUid, provided in the security section, to 500.
Defaults
The defaults section is used for providing defaults for various workload fields. For example, consider the following policy:
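A sketch of such a policy (the field paths follow the reference table above; the exact nesting is illustrative):
defaults:
  imagePullPolicy: Always
  security:
    runAsNonRoot: true
    runAsUid: 500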
Assume a submission request with the following values:
Image: ubuntu
runAsUid: 501
The effective workload that runs has the following set of values:
image: ubuntu (from the submission request)
imagePullPolicy: Always (from the policy defaults)
security.runAsNonRoot: true (from the policy defaults)
security.runAsUid: 501 (from the submission request)
Note
It is possible to specify a rule for each field, which states if a submission request is allowed to change the policy default for that given field, for example:
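A sketch of such a rule (the nesting is illustrative):
rules:
  security:
    runAsUid:
      canEdit: false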
If this policy is applied, the submission request above fails, as it attempts to change the value of security.runAsUid from 500 (the policy default) to 501 (the value provided in the submission request), which is forbidden because the canEdit rule is set to false for this field.
Imposed assets
Default instances of a storage field can be provided using a datasource containing the details of this storage instance. To add such instances in the policy, specify those asset IDs in the imposedAssets section of the policy.
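For example (the asset ID is an illustrative placeholder):
imposedAssets:
  - <datasource-asset-id>   # ID of a data source asset that defines the storage instance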
Assets with references to credential assets (for example: private S3, containing reference to an AccessKey asset) cannot be used as imposedAssets.