Container dispatch¶
Summary¶
A dispatcher uses available compute resources to execute queued containers.
Dispatch is meant to be a small, simple component rather than a pluggable framework: e.g., "slurm dispatch" can be a small standalone program rather than a plugin for a big generic dispatch program.
Pseudocode¶
- Notice there is a queued container
- Decide whether the required resources are available to run the container
- Lock the container (this avoids races with other dispatch processes)
- Translate the container's runtime constraints and priority to instructions for the lower-level scheduler, if any
- Invoke the "crunch2 run" executor
- When the priority changes on a container taken by this dispatch process, update the lower-level scheduler accordingly (cancel if priority is zero)
- If the lower-level scheduler indicates the container is finished or abandoned, but the Container record is locked by this dispatcher and has state=Running, fail the container
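As a rough illustration, the sketch below shows this loop in Go. The Queue interface, Container struct, and helper functions are hypothetical placeholders for this page, not part of any existing SDK.

```go
// Minimal sketch of the dispatch loop described above. All type and
// function names here are illustrative placeholders.
package main

import (
	"log"
	"time"
)

type Container struct {
	UUID     string
	Priority int
	State    string // Queued, Locked, Running, Complete, Cancelled
}

// Queue is a stand-in for whatever API access the dispatcher has.
type Queue interface {
	QueuedContainers() ([]Container, error) // notice queued containers
	Lock(uuid string) error                 // avoid races with other dispatchers
	Unlock(uuid string) error               // give a container back to the queue
}

// haveResourcesFor decides whether the required resources are available
// (e.g., by consulting sinfo, /proc/meminfo, "docker ps", ...).
func haveResourcesFor(c Container) bool { return true }

// startCrunchRun translates the container's runtime constraints and priority
// into instructions for the lower-level scheduler (if any) and invokes the
// crunch-run executor.
func startCrunchRun(c Container) error {
	log.Printf("dispatching %s to crunch-run", c.UUID)
	return nil
}

func dispatchLoop(q Queue) {
	for range time.Tick(10 * time.Second) {
		queued, err := q.QueuedContainers()
		if err != nil {
			log.Print(err)
			continue
		}
		for _, c := range queued {
			if !haveResourcesFor(c) {
				continue // another dispatcher may be able to run it
			}
			if err := q.Lock(c.UUID); err != nil {
				continue // lost the race to another dispatch process
			}
			if err := startCrunchRun(c); err != nil {
				log.Print(err)
				q.Unlock(c.UUID) // return it to the queue
			}
		}
	}
}

func main() {
	// dispatchLoop(q) would be started here with a real Queue implementation.
}
```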
Examples¶
slurm batch mode
- Use "sinfo" to determine whether it is possible to run the container
- Submit a batch job to the queue: "echo crunch-run --job {uuid} | sbatch -N1"
- When container priority changes, use scontrol and scancel to propagate changes to slurm
- Use strigger to run a cleanup script when a container exits
local mode
- Inspect /proc/meminfo, /proc/cpuinfo, "docker ps", etc. to determine local capacity
- Invoke crunch-run as a child process (or perhaps a detached daemon process)
- Signal crunch-run to stop if container priority changes to zero
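For the local case, a minimal sketch of the last two items might look like the following. It assumes crunch-run is on PATH and reuses the "--job {uuid}" invocation shown in the slurm example; check the actual crunch-run command line before relying on it.

```go
// Sketch: run crunch-run as a child process and stop it if the container's
// priority drops to zero. The --job flag mirrors the slurm example above and
// may not match the real crunch-run usage.
package main

import (
	"log"
	"os/exec"
	"syscall"
)

// runLocal starts crunch-run for one container and watches for priority
// changes delivered on priorityChanged.
func runLocal(uuid string, priorityChanged <-chan int) error {
	cmd := exec.Command("crunch-run", "--job", uuid)
	if err := cmd.Start(); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	for {
		select {
		case p := <-priorityChanged:
			if p == 0 {
				// Signal crunch-run to stop; it is responsible for
				// cancelling the container cleanly.
				log.Printf("priority of %s is zero; stopping crunch-run", uuid)
				cmd.Process.Signal(syscall.SIGTERM)
			}
		case err := <-done:
			return err // crunch-run exited
		}
	}
}

func main() {}
```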
Security¶
A dispatch process needs to:
- List queued containers
- List containers locked by its own token
- Lock queued containers
- Update containers locked by its own token
- Get the inside-container API token (either from the API or from its caller/environment)
- Update its own container record
- Get the manifest for the ContainerImage collection
- Create output/log collections
- Act as the requesting user (if RuntimeConstraints.API is enabled)
The security model should assume worker nodes are less trusted than dispatch nodes. For example, there may be cheaper, less-trusted worker nodes where less-sensitive containers can be run, and those worker nodes should never be given a token that would let them see any of the more-sensitive containers. In such cases a less-powerful token is needed: the dispatcher cannot pass its own token to crunch-run, but crunch-run still needs some way to update container state (the token passed to the container itself can't do this). This less-powerful token must be scoped to permit updates only to one or more specific containers. For example, a new single-container token can be created each time a container changes state from Queued to Locked.
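To make the scoping concrete, the sketch below shows what such a token's scope list could look like. The "METHOD /path" scope format follows Arvados API token scopes; the exact set of paths crunch-run needs is an assumption.

```go
// Sketch: scopes for a single-container token. The paths listed here are an
// assumption about what crunch-run needs; the point is that the token cannot
// touch any other container.
package main

import "fmt"

// containerTokenScopes returns a scope list that lets its holder read and
// update one specific container record, and nothing else.
func containerTokenScopes(containerUUID string) []string {
	return []string{
		"GET /arvados/v1/containers/" + containerUUID,
		"PUT /arvados/v1/containers/" + containerUUID,
	}
}

func main() {
	// Placeholder UUID for illustration only.
	fmt.Println(containerTokenScopes("zzzzz-dz642-xxxxxxxxxxxxxxx"))
}
```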
Locking¶
At a given moment, there should be (at most) one process responsible for running a given container, and that process should be the only one updating the database record.
Certain common scenarios threaten to disrupt this:
- Two dispatch processes are running. They both notice a queued container, and they both decide to run it.
- A dispatch process decides to run a container, and starts a crunch-run process (e.g., via slurm) but the dispatch service restarts while crunch-run is still running.
- A sysadmin or daemon-supervisor mishap results in two concurrent dispatch processes using the same token. This should be preventable but it's still desirable to behave correctly if it happens.
The first scenario ("multiple dispatch, different tokens") is addressed by the locked_by_uuid field.
In the second scenario ("amnesiac dispatch"):
- As long as the original crunch-run is running (or queued in slurm), the new dispatch process should leave it alone.
- If the new dispatch process knows somehow (e.g., squeue) that the original crunch-run process has stopped without moving the container record out of Running state, it should clean up the container record accordingly.
- If the new dispatch process makes a mistake here and tries to clean up the container record while crunch-run is still alive, one of them must lose: if the cleanup transaction succeeds, all of crunch-run's subsequent transactions must fail.
- If state=Running then cleanup will change state to Cancelled, which itself ensures subsequent transactions will fail.
- If state=Locked then cleanup will change state to Queued, and the new dispatch process might use the same dispatch token to take it off the queue and change state to Locked. An additional locking mechanism is needed here.
- If both processes acquire the lock by doing an "update" transaction with state=Locked, using the same token, then after a race both will think they succeeded: the loser's update will look like a no-op.
- Solution 1: Use an explicit "lock" API.
- Solution 2: Use an If-Match HTTP header when updating with intent to acquire a lock.
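A minimal sketch of Solution 2 follows. The If-Match/ETag mechanics are standard HTTP; that the API server returns ETags for container records and honors If-Match on updates is an assumption here, which is exactly what this solution would require.

```go
// Sketch of Solution 2: acquire the lock with a conditional update so that
// only one of two racing processes can succeed. Assumes the API server
// returns an ETag for the container record and honors If-Match on update.
package main

import (
	"errors"
	"net/http"
	"strings"
)

var ErrLockLost = errors.New("another process updated the container first")

func lockContainer(client *http.Client, apiHost, token, uuid string) error {
	base := "https://" + apiHost + "/arvados/v1/containers/" + uuid

	// Read the current record and remember its ETag.
	req, err := http.NewRequest("GET", base, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	etag := resp.Header.Get("ETag")

	// Update state to Locked, but only if nobody has changed the record
	// since we read it.
	body := strings.NewReader(`{"container":{"state":"Locked"}}`)
	req, err = http.NewRequest("PUT", base, body)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("If-Match", etag)
	resp, err = client.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusPreconditionFailed {
		// The loser's update is rejected instead of looking like a no-op.
		return ErrLockLost
	}
	return nil
}

func main() {}
```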
Arvados API support¶
Each dispatch process has an Arvados API token that allows it to see queued containers.
- No two dispatch processes can run at the same time with the same token. One way to achieve this is to make a user record for each dispatch service.
- List Queued containers (the result might be a subset of all Queued containers)
- List containers with state=Locked or state=Running associated with current token
- arvados.v1.containers.current (equivalent to filters=[["dispatch_auth_uuid","=",current_client_auth.uuid]])
- Receive event when container is created or modified and state is Queued (it might become runnable)
- Change state Queued->Locked
- Change state Locked->Queued
- Change state Locked->Running
- Change state Running->Complete
- Receive event when priority changes
- Receive event when state changes to Complete
- Retrieve an API token to pass into the container and its arv-mount process (via crunch-run)
- Token is automatically created/assigned when container state changes to Locked
- Token is automatically expired/destroyed when container state changes away from Running
- arvados.v1.containers.container_auth(uuid=container.uuid) → returns an api_client_authorization record
- Create events/logs
- Decided to run this container
- Decided not to run this container (e.g., no node with those resources)
- Lock failed
- Dispatched to crunch-run
- Cleaned up crashed crunch-run (lower-level scheduler indicates the job finished, but crunch-run didn't leave the container in a final state)
- Cleaned up abandoned container (container belongs to this process, but dispatch and lower-level scheduler don't know about it)
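As a rough sketch of the first two list operations using plain HTTP: the dispatch_auth_uuid filter is the one proposed above (today's records use locked_by_uuid), so treat that field name as part of the proposal rather than an existing API.

```go
// Sketch: the two list queries a dispatcher needs. The dispatch_auth_uuid
// filter follows the proposal above and may not exist yet.
package main

import (
	"io"
	"net/http"
	"net/url"
)

func listContainers(client *http.Client, apiHost, token, filters string) ([]byte, error) {
	q := url.Values{}
	q.Set("filters", filters)
	req, err := http.NewRequest("GET",
		"https://"+apiHost+"/arvados/v1/containers?"+q.Encode(), nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	client := &http.Client{}
	// Queued containers this dispatcher might be able to run:
	listContainers(client, "api.example", "dispatch-token", `[["state","=","Queued"]]`)
	// Containers already taken by this dispatcher's own token
	// (the proposed arvados.v1.containers.current call):
	listContainers(client, "api.example", "dispatch-token", `[["dispatch_auth_uuid","=","current_client_auth_uuid"]]`)
}
```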
Non-responsibilities¶
Dispatch doesn't retry failed containers. If something needs to be reattempted, a new container will appear in the queue.
Dispatch doesn't fail a container that it can't run. It doesn't know whether other dispatchers will be able to run it.
Additional notes¶
(see also #6429, #6518, and #8028)
Using websockets to listen for container events (new containers added, priority changes) will benefit from some Go SDK support.
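As a sketch of what that listener could look like today with a generic websocket client (gorilla/websocket here): the /websocket endpoint and the subscribe/event message shapes follow the Arvados websocket event documentation, but the details, especially the filter, should be treated as assumptions.

```go
// Sketch: listen for container events over the Arvados websocket API.
// Endpoint path, subscribe message, and event fields are assumptions based
// on the websocket event documentation.
package main

import (
	"log"

	"github.com/gorilla/websocket"
)

type event struct {
	ObjectUUID string                 `json:"object_uuid"`
	EventType  string                 `json:"event_type"`
	Properties map[string]interface{} `json:"properties"`
}

func listenForContainerEvents(wsHost, token string, handle func(event)) error {
	conn, _, err := websocket.DefaultDialer.Dial(
		"wss://"+wsHost+"/websocket?api_token="+token, nil)
	if err != nil {
		return err
	}
	defer conn.Close()

	// Ask for create/update events on container records only.
	sub := map[string]interface{}{
		"method":  "subscribe",
		"filters": [][]interface{}{{"object_uuid", "is_a", "arvados#container"}},
	}
	if err := conn.WriteJSON(sub); err != nil {
		return err
	}

	for {
		var ev event
		if err := conn.ReadJSON(&ev); err != nil {
			return err
		}
		// e.g., re-check the queue, or adjust a running container's priority.
		handle(ev)
	}
}

func main() {
	err := listenForContainerEvents("ws.example", "dispatch-token", func(ev event) {
		log.Printf("container event: %s %s", ev.EventType, ev.ObjectUUID)
	})
	log.Fatal(err)
}
```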
Cloud Container Services¶
Cloud providers now offer container execution services. However, rather than being just an API to run containers (similar to Crunch), these take the form of preconfigured clusters set up with a container orchestration system.
AWS offers Elastic Container Service. It appears that the leader runs on AWS infrastructure (?) and you spin up worker VMs which run the ECS Agent: https://github.com/aws/amazon-ecs-agent
Google Container Engine provides a preconfigured Kubernetes cluster. https://cloud.google.com/container-engine/docs/clusters/operations
Azure provides a preconfigured Mesos or Docker Swarm cluster. https://azure.microsoft.com/en-us/services/container-service/?WT.mc_id=azurebg_email_Trans_1083_Tier2_Release_MOSP