h1. Container dispatch 

 {{toc}} 

 h2. Summary 

 A dispatcher uses available compute resources to execute queued containers. 

 Dispatch is meant to be a small, simple component rather than a pluggable framework: e.g., "slurm dispatch" can be a small standalone program rather than a plugin for a big generic dispatch program. 

 h2. Pseudocode 

 * Notice there is a queued container 
 * Decide whether the required resources are available to run the container 
 * Lock the container (this avoids races with other dispatch processes) 
 * Translate the container's runtime constraints and priority to instructions for the lower-level scheduler, if any 
 * Invoke the "crunch2 run" executor 
 * When the priority changes on a container taken by this dispatch process, update the lower-level scheduler accordingly (cancel if priority is zero) 
 * If the lower-level scheduler indicates the container is finished or abandoned, but the Container record is locked by this dispatcher and has state=Running, fail the container 
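
 The loop above might look like the following minimal Go sketch. The @Client@ interface, its methods, and the helper functions are hypothetical stand-ins for an Arvados API client, not a real SDK.

<pre><code class="go">
// Minimal dispatch loop sketch. Client and its methods are hypothetical.
package main

import (
	"log"
	"time"
)

// Container holds the fields a dispatcher cares about.
type Container struct {
	UUID     string
	State    string // Queued, Locked, Running, Complete, Cancelled
	Priority int
}

// Client is a hypothetical minimal Arvados API client.
type Client interface {
	QueuedContainers() ([]Container, error)
	Lock(uuid string) error   // Queued -> Locked; fails if another dispatcher won the race
	Unlock(uuid string) error // Locked -> Queued
}

// haveCapacity decides whether the required resources are available,
// e.g. via sinfo or /proc/meminfo (see Examples below).
func haveCapacity(c Container) bool { return true }

// startCrunchRun translates runtime constraints for the lower-level
// scheduler, if any, and invokes the crunch2 executor.
func startCrunchRun(c Container) error {
	log.Printf("dispatching %s to crunch-run", c.UUID)
	return nil
}

func runQueue(client Client) {
	queue, err := client.QueuedContainers()
	if err != nil {
		log.Printf("error listing queue: %v", err)
		return
	}
	for _, c := range queue {
		if !haveCapacity(c) {
			continue // another dispatcher may be able to run it; don't fail it
		}
		// Lock first: this avoids races with other dispatch processes.
		if err := client.Lock(c.UUID); err != nil {
			continue // lost the race to another dispatcher
		}
		if err := startCrunchRun(c); err != nil {
			client.Unlock(c.UUID) // give the container back to the queue
		}
	}
}

func main() {
	var client Client // wire up a real API client here
	if client == nil {
		log.Fatal("no API client configured")
	}
	// Polling stands in for websocket events (see "Arvados API support").
	for range time.Tick(10 * time.Second) {
		runQueue(client)
	}
}
</code></pre>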

 h2. Examples 

 h3. slurm batch mode 
 * Use "sinfo" to determine whether it is possible to run the container 
 * Submit a batch job to the queue: "echo crunch-run --job {uuid} | sbatch -N1" 
 * When container priority changes, use scontrol and scancel to propagate changes to slurm 
 * Use strigger to run a cleanup script when a container exits 
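
 As a sketch of the submission step, the one-line pipe above translates to something like this in Go. The shebang line and the @--job-name@ flag are additions: sbatch requires batch scripts to start with @#!@, and naming the job after the container UUID makes later squeue/scontrol/scancel lookups straightforward. The UUID is a placeholder.

<pre><code class="go">
// Sketch: submit a container to slurm, equivalent to
//   echo crunch-run --job {uuid} | sbatch -N1
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func sbatchSubmit(uuid string) error {
	// sbatch requires batch scripts to start with an interpreter line.
	script := fmt.Sprintf("#!/bin/sh\ncrunch-run --job %s\n", uuid)
	// --job-name lets us find this job again by container UUID later.
	cmd := exec.Command("sbatch", "-N1", "--job-name="+uuid)
	cmd.Stdin = strings.NewReader(script)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("sbatch: %v (output: %q)", err, out)
	}
	log.Printf("submitted: %s", strings.TrimSpace(string(out)))
	return nil
}

func main() {
	// Hypothetical container UUID.
	if err := sbatchSubmit("zzzzz-dz642-0123456789abcde"); err != nil {
		log.Fatal(err)
	}
}
</code></pre>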

 h3. standalone worker 
 * Inspect /proc/meminfo, /proc/cpuinfo, "docker ps", etc. to determine local capacity 
 * Invoke crunch-run as a child process (or perhaps a detached daemon process) 
 * Signal crunch-run to stop if container priority changes to zero 
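
 A minimal sketch of those three steps, assuming Linux. The 1 GiB threshold, the @--job@ flag, and the UUID are placeholders, not real crunch-run conventions:

<pre><code class="go">
// Sketch of a standalone worker: check local capacity, run crunch-run
// as a child process, signal it if priority drops to zero. Linux only.
package main

import (
	"bufio"
	"log"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"syscall"
)

// memAvailableKB parses MemAvailable out of /proc/meminfo.
func memAvailableKB() int64 {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0
	}
	defer f.Close()
	for s := bufio.NewScanner(f); s.Scan(); {
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, _ := strconv.ParseInt(fields[1], 10, 64)
			return kb
		}
	}
	return 0
}

func main() {
	// Threshold is arbitrary; a real worker would compare against the
	// container's runtime constraints.
	if memAvailableKB() < 1<<20 { // 1 GiB
		log.Fatal("not enough free memory; leave the container for another dispatcher")
	}
	// Invoke crunch-run as a child process (hypothetical UUID and flag).
	cmd := exec.Command("crunch-run", "--job", "zzzzz-dz642-0123456789abcde")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Hypothetical channel, closed when an event says priority dropped to 0.
	priorityZero := make(chan struct{})
	go func() {
		<-priorityZero
		cmd.Process.Signal(syscall.SIGTERM) // tell crunch-run to stop
	}()
	if err := cmd.Wait(); err != nil {
		log.Printf("crunch-run exited: %v", err)
	}
}
</code></pre>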

 h2. Security 

 A dispatch process needs to: 
 * List queued containers 
 * List containers locked by its own token 
 * Lock queued containers 
 * Update containers locked by its own token 

 Crunch-run needs to: 
 * Get the inside-container API token (either from the API or from its caller/environment) 
 * Update its own container record 
 * Get the manifest for the ContainerImage collection 
 * Create output/log collections 

 The container itself, and arv-mount if enabled, need to: 
 * Act as the requesting user (if RuntimeConstraints.API is enabled) 

 The security model should assume worker nodes are less trusted than dispatch nodes. For example, there may be cheaper, less-trusted worker nodes where less-sensitive containers can be run, and those worker nodes should never be given a token that would let them see any of the more-sensitive containers. In such cases a "per-container dispatch token" is needed: the dispatcher cannot pass its own token to crunch-run, but crunch-run needs some way to update container state (the token passed to the container itself can't do this). 

 h2. Locking 

 At a given moment, there should be (at most) one process responsible for running a given container, and that process should be the only one updating the database record. 

 Certain common scenarios threaten to disrupt this: 
 # Two dispatch processes are running. They both notice a queued container, and they both decide to run it. 
 # A dispatch process decides to run a container, and starts a crunch-run process (e.g., via slurm) but the dispatch service restarts while crunch-run is still running. 
 # A sysadmin or daemon-supervisor mishap results in two concurrent dispatch processes using the same token. This _should_ be preventable but it's still desirable to behave correctly if it happens. 

 The first scenario ("multiple dispatch, different tokens") is addressed by the locked_by_uuid field. 

 In the second scenario ("amnesiac dispatch"): 
 * As long as the original crunch-run is running (or queued in slurm), the new dispatch process should leave it alone. 
 * If the new dispatch process knows somehow (e.g., squeue) that the original crunch-run process has stopped without moving the container record out of Running state, it should clean up the container record accordingly. 
 * If the new dispatch process makes a mistake here, and tries to clean up the container record while crunch-run is still alive, _one of them must lose:_ If the cleanup transaction is successful, all of crunch-run's subsequent transactions must fail. 
 ** If state=Running then cleanup will change state to Cancelled, which itself ensures subsequent transactions will fail. 
 ** If state=Locked then cleanup will change state to Queued, and the new dispatch process might use the same dispatch token to take it off the queue and change state to Locked. An additional locking mechanism is needed here. 

 In the third scenario ("multiple dispatch, same token"): 
 * If both processes acquire the lock by doing an "update" transaction with state=Locked, using the same token, then after a race both will think they succeeded: the loser's update will look like a no-op. 
 * Solution 1: Use an explicit "lock" API. 
 * Solution 2: Use an If-Match HTTP header when updating with intent to acquire a lock. 
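
 A sketch of Solution 2, assuming the API service returned an ETag for the container record and honored @If-Match@ (standard HTTP semantics, but this is a proposal, not a confirmed feature of the API). Host, token, UUID, and ETag values are placeholders:

<pre><code class="go">
// Sketch of Solution 2: optimistic locking with If-Match, so that of two
// dispatchers racing with the same token, exactly one acquires the lock.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// tryLock attempts the Queued -> Locked update, guarded by the record's
// ETag. Returns true only if this caller won the race.
func tryLock(apiHost, token, uuid, etag string) (bool, error) {
	form := url.Values{"container": {`{"state":"Locked"}`}}
	req, err := http.NewRequest("PUT",
		"https://"+apiHost+"/arvados/v1/containers/"+uuid,
		strings.NewReader(form.Encode()))
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	// If someone else updated the record after we read it, the ETag no
	// longer matches and the server (assuming it implements If-Match)
	// responds 412 instead of applying our update.
	req.Header.Set("If-Match", etag)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return true, nil // we hold the lock
	case http.StatusPreconditionFailed:
		return false, nil // lost the race; leave the container alone
	default:
		return false, fmt.Errorf("unexpected status %s", resp.Status)
	}
}

func main() {
	ok, err := tryLock("zzzzz.arvadosapi.com", "dispatch-token-here",
		"zzzzz-dz642-0123456789abcde", `"hypothetical-etag"`)
	fmt.Println(ok, err)
}
</code></pre>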

 h2. Arvados API support 

 Each dispatch process has an Arvados API token that allows it to see queued containers. 
 * No two dispatch processes can run at the same time with the same token. One way to achieve this is to make a user record for each dispatch service. 

 Container APIs relevant to a dispatch program: 
 * List Queued containers (the visible list might be a subset of all Queued containers) 
 * List containers with state=Locked or state=Running associated with current token 
 ** arvados.v1.containers.current (equivalent to @filters=[["dispatch_auth_uuid","=",current_client_auth.uuid]]@) 
 * Receive event when container is created or modified and state is Queued (it might become runnable) 
 * Change state Queued->Locked 
 * Change state Locked->Queued 
 * Change state Locked->Running 
 * Change state Running->Complete 
 * Receive event when priority changes 
 * Receive event when state changes to Complete 
 * Retrieve an API token to pass into the container and its arv-mount process (via crunch-run) 
 ** Token is automatically created/assigned when container state changes to Locked 
 ** Token is automatically expired/destroyed when container state changes away from Running 
 ** arvados.v1.containers.container_auth(uuid=container.uuid) → returns an api_client_authorization record 
 * Create events/logs 
 ** Decided to run this container 
 ** Decided not to run this container (e.g., no node with those resources) 
 ** Lock failed 
 ** Dispatched to crunch-run 
 ** Cleaned up crashed crunch-run (lower-level scheduler indicates the job finished, but crunch-run didn't leave the container in a final state) 
 ** Cleaned up abandoned container (container belongs to this process, but dispatch and lower-level scheduler don't know about it) 
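
 Pending the Go SDK support mentioned below, a dispatcher could exercise a couple of these calls over plain HTTP. In this sketch the URL routes are guesses from the method names above, and the host, token, and UUIDs are placeholders:

<pre><code class="go">
// Sketch: list this dispatcher's containers, then fetch the
// inside-container token for each. Routes are guesses; values are
// placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

const apiHost = "zzzzz.arvadosapi.com" // hypothetical cluster

func get(token, path string, query url.Values, out interface{}) error {
	req, err := http.NewRequest("GET",
		"https://"+apiHost+path+"?"+query.Encode(), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: %s", path, resp.Status)
	}
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	token := "dispatch-token-here" // the dispatcher's own token
	// List Locked/Running containers; with a per-dispatcher token this
	// approximates the dispatch_auth_uuid filter described above.
	var list struct {
		Items []struct {
			UUID  string `json:"uuid"`
			State string `json:"state"`
		} `json:"items"`
	}
	q := url.Values{"filters": {`[["state","in",["Locked","Running"]]]`}}
	if err := get(token, "/arvados/v1/containers", q, &list); err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Items {
		// Token to pass into the container and arv-mount via crunch-run.
		var auth struct {
			APIToken string `json:"api_token"`
		}
		if err := get(token, "/arvados/v1/containers/"+c.UUID+"/auth", nil, &auth); err != nil {
			log.Fatal(err)
		}
		log.Printf("container %s (%s): retrieved container_auth token", c.UUID, c.State)
	}
}
</code></pre>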

 h2. Non-responsibilities 

 Dispatch doesn't retry failed containers. If something needs to be reattempted, a new container will appear in the queue. 

 Dispatch doesn't fail a container that it can't run. It doesn't know whether other dispatchers will be able to run it. 

 h2. Additional notes 

 (see also #6429 and #6518 and #8028) 

 Using websockets to listen for container events (new containers added, priority changes) will benefit from some Go SDK support. 
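
 For example, a sketch using the third-party gorilla/websocket package. The endpoint path and @api_token@ query parameter follow the current websocket API; the subscribe-message and event shapes should be treated as assumptions:

<pre><code class="go">
// Sketch: subscribe to container events over the websocket API using the
// third-party gorilla/websocket package. Message shapes are assumptions.
package main

import (
	"log"

	"github.com/gorilla/websocket"
)

type event struct {
	ObjectUUID string                 `json:"object_uuid"`
	EventType  string                 `json:"event_type"`
	Properties map[string]interface{} `json:"properties"`
}

func main() {
	url := "wss://zzzzz.arvadosapi.com/websocket?api_token=dispatch-token-here"
	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// Ask the server for create/update events on container records
	// (filter syntax is an assumption).
	sub := map[string]interface{}{
		"method":  "subscribe",
		"filters": [][]interface{}{{"object_kind", "=", "arvados#container"}},
	}
	if err := conn.WriteJSON(sub); err != nil {
		log.Fatal(err)
	}
	for {
		var ev event
		if err := conn.ReadJSON(&ev); err != nil {
			log.Fatal(err)
		}
		// A new/modified Queued container might be runnable; a priority
		// change on one of ours might mean "cancel".
		log.Printf("%s %s %v", ev.EventType, ev.ObjectUUID, ev.Properties)
	}
}
</code></pre>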

 h2. Cloud Container Services 

 Cloud providers now offer container execution services. However, rather than being just an API to run containers (similar to Crunch), these take the form of preconfigured clusters set up with a container orchestration system. 

 AWS offers Elastic Container Service. It appears that the leader runs on AWS infrastructure (?) and you spin up worker VMs which run the ECS Agent: https://github.com/aws/amazon-ecs-agent 

 Google Container Engine provides a preconfigured Kubernetes cluster. https://cloud.google.com/container-engine/docs/clusters/operations 

 Azure provides a preconfigured Mesos or Docker Swarm cluster. https://azure.microsoft.com/en-us/services/container-service/?WT.mc_id=azurebg_email_Trans_1083_Tier2_Release_MOSP