Dispatching containers to cloud VMs

(Draft. In fact, this component might not be needed at all: for example, we might dispatch to Kubernetes instead, and find or build a Kubernetes auto-scaler.)

Background

This is about dispatching to on-demand cloud nodes like Amazon EC2 instances.

Not to be confused with dispatching to a cloud-based container service like Amazon Elastic Container Service, Azure Batch or Google Kubernetes Engine.

In crunch1, and in the early days of crunch2, we used arvados-nodemanager and SLURM to make this work.

One of the goals of crunch2 is to eliminate all uses of SLURM, with the exception of crunch-dispatch-slurm, whose purpose is to dispatch Arvados containers to a SLURM cluster that already exists for non-Arvados tasks.

This doc doesn’t describe a sequence of development tasks or a migration plan. It describes the end state: how dispatch will work when all implementation tasks and migrations are complete.

Relevant components

API server (backed by PostgreSQL) is the source of truth about which containers the system should be trying to execute (or cancel) at any given time.

Arvados configuration (currently via file in /etc, in future via consul/etcd/similar) is the source of truth about cloud provider credentials, allowed node types, spending limits/policies, etc.

crunch-dispatch-cloud-node (a new component) arranges for queued containers to run on worker nodes, brings up new worker nodes in order to run the queue faster, and shuts down idle worker nodes.
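
For illustration, here is a rough sketch of the configuration data the dispatcher would need, expressed as a Go struct. The field names are hypothetical, not the actual Arvados config schema:

    // Hypothetical sketch of the dispatcher's view of cluster
    // configuration; field names are illustrative, not the real
    // Arvados config schema.
    type CloudConfig struct {
        Provider         string            // e.g. "ec2", "azure"
        Credentials      map[string]string // provider-specific credentials
        AllowedNodeTypes []NodeType        // instance types the dispatcher may create
        MaxNodes         int               // spending limit expressed as a node cap
        DispatcherTag    string            // tag marking nodes owned by this dispatcher
    }

    type NodeType struct {
        Name  string  // provider instance type, e.g. "m4.large"
        VCPUs int
        RAM   int64   // bytes
        Price float64 // per-hour cost, for cheapest-fit scheduling
    }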

Overview of crunch-dispatch-cloud-node operation

  • When first starting up, inspect the API server's container queue and the cloud provider's list of dispatcher-tagged cloud nodes, and restore internal state accordingly.
  • When the API server puts a container in the Queued state, lock it, select or create a cloud node to run it on, and start a crunch-run process there to run it.
  • When the API server says a container (locked or dispatched by this dispatcher) should be cancelled, ensure the actual container and its crunch-run supervisor get shut down and the relevant node becomes idle.
  • When a crunch-run invocation (dispatched by this dispatcher) exits without updating the container record on the API server -- or can't run at all -- clean up accordingly.

Invariant: every dispatcher-tagged cloud node is either needed by this dispatcher, or should be shut down (so if there are multiple dispatchers, they must use different tags).
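
Putting the startup, scheduling, cancellation, and invariant rules above together, the dispatcher's control loop might look roughly like the following Go sketch. All of the types and helper methods here are hypothetical, not real Arvados APIs:

    package main

    import "time"

    // Hypothetical types; not real Arvados APIs.
    type Container struct{ UUID string }

    type Queue interface {
        Queued() []Container    // containers in Queued state
        Cancelled() []Container // containers that should be shut down
        Lock(uuid string) error // atomic lock via the API server
    }

    type Worker interface {
        InstanceID() string
        Idle() bool
        StartCrunchRun(containerUUID string)
        ShutdownContainer(containerUUID string)
    }

    type Cloud interface {
        ListTaggedNodes(tag string) []Worker
        Destroy(instanceID string)
    }

    type Dispatcher struct {
        queue        Queue
        cloud        Cloud
        tag          string
        pollInterval time.Duration
    }

    func (disp *Dispatcher) run() {
        // Startup: restore state from the container queue and the
        // provider's list of dispatcher-tagged nodes.
        workers := disp.cloud.ListTaggedNodes(disp.tag)
        for range time.Tick(disp.pollInterval) {
            for _, ctr := range disp.queue.Queued() {
                if disp.queue.Lock(ctr.UUID) == nil {
                    w := disp.pickOrCreateNode(workers, ctr) // reuse idle node or boot one
                    go w.StartCrunchRun(ctr.UUID)
                }
            }
            for _, ctr := range disp.queue.Cancelled() {
                if w := disp.nodeRunning(workers, ctr.UUID); w != nil {
                    w.ShutdownContainer(ctr.UUID)
                }
            }
            // Invariant: every tagged node is either needed or shut down.
            for _, w := range workers {
                if w.Idle() && !disp.needed(w) {
                    disp.cloud.Destroy(w.InstanceID())
                }
            }
        }
    }

    // Scheduling and bookkeeping are stubbed for brevity.
    func (disp *Dispatcher) pickOrCreateNode(ws []Worker, ctr Container) Worker { return ws[0] }
    func (disp *Dispatcher) nodeRunning(ws []Worker, uuid string) Worker        { return nil }
    func (disp *Dispatcher) needed(w Worker) bool                               { return true }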

TBD

Mechanism for running commands on worker nodes: SSH?
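
If SSH is chosen, the dispatcher could start crunch-run on a worker along these lines, using the golang.org/x/crypto/ssh package. The key path, user name, and command invocation are assumptions for illustration:

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOnWorker connects to a worker node over SSH and starts a
    // crunch-run supervisor for the given container. The key path and
    // "crunch" user are hypothetical.
    func runOnWorker(addr, containerUUID string) error {
        keyBytes, err := os.ReadFile("/etc/arvados/dispatcher-ssh-key")
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            "crunch",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // production code would verify host keys
        })
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("crunch-run " + containerUUID)
    }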

"crunch-dispatch-cloud" (PA)

Node manager generates wishlist based on container queue. Compute nodes run crunch-dispatch-local or similar service, which asks the API server for work and then runs it.
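
As a sketch, wishlist generation could map each queued container to the cheapest node type that satisfies its resource constraints. The Container and NodeType fields here are hypothetical (NodeType as sketched under "Relevant components"):

    // Hypothetical records; assume nodeTypes is sorted by ascending price.
    type Container struct {
        VCPUs int
        RAM   int64
    }

    type NodeType struct {
        Name  string
        VCPUs int
        RAM   int64
    }

    // wishlist returns one node type per queued container: the cheapest
    // type that satisfies the container's resource constraints.
    func wishlist(queued []Container, nodeTypes []NodeType) []NodeType {
        var want []NodeType
        for _, ctr := range queued {
            for _, t := range nodeTypes {
                if t.VCPUs >= ctr.VCPUs && t.RAM >= ctr.RAM {
                    want = append(want, t)
                    break
                }
            }
        }
        return want
    }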

Advantages:

  • Complete control over scheduling decisions / priority

Disadvantages:

  • Additional load on API server (but probably not that much)
  • Need a new scheme for nodes to report their status so that node manager knows whether they are busy or idle. Node manager has to be able to put nodes in the equivalent of a "draining" state to ensure they don't get shut down while doing work. (We can use the "nodes" table for this.)
  • Need to be able to detect node failure.

Starting up

  1. Node manager looks at pending containers to get a "wishlist".
  2. Nodes spin up the way they do now. However, instead of registering with SLURM, they start crunch-dispatch-local.
  3. The node ping token should have a corresponding API token, to be used by the dispatcher to talk to the API server.
  4. C-d-l pings the API server to ask for work; the ping operation puts the node in either the "busy" state (if work is returned) or "idle" (a sketch of this ping follows the list).
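
A sketch of what the "ask for work" ping in step 4 might look like from c-d-l's side. The endpoint path and response payload are assumptions, not an existing Arvados API:

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type Container struct {
        UUID string `json:"uuid"`
    }

    // PingResponse is a hypothetical payload: a nil Container means the
    // API server marked this node "idle"; otherwise it is "busy".
    type PingResponse struct {
        State     string     `json:"state"`
        Container *Container `json:"container"`
    }

    // askForWork sends the "give me work" ping, authenticated with the
    // per-node API token that corresponds to the node ping token.
    func askForWork(apiToken string) (*PingResponse, error) {
        req, err := http.NewRequest("POST",
            "https://api.example.org/arvados/v1/nodes/ping_for_work", nil) // hypothetical endpoint
        if err != nil {
            return nil, err
        }
        req.Header.Set("Authorization", "Bearer "+apiToken)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var pr PingResponse
        if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
            return nil, err
        }
        return &pr, nil
    }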

Running containers

Assumption: nodes only run one container at a time.

  1. Add "I am idle, give me work" API which locks and returns the next container that is appropriate for the node, or marks the node as "idle" if no work is available
  2. Node record records which container it is supposed to be running (can be part of the "Lock" call based on the per-node API token)
  3. C-d-l makes API call to nodes table to say it is "busy"
  4. C-d-l calls crunch-run to run the container
  5. C-d-l must continue to ping that it is "busy" every X seconds
  6. When container finishes, c-d-l pings that it is "idle"
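
Tying the steps above together, c-d-l's main loop might look like the following sketch. The askForWork helper is as sketched under "Starting up"; pingState and the two intervals are hypothetical:

    package main

    import (
        "os/exec"
        "time"
    )

    type Container struct{ UUID string }

    type PingResponse struct {
        State     string
        Container *Container
    }

    type CDL struct {
        pollInterval time.Duration // how often to ask for work when idle
        pingInterval time.Duration // how often to report "busy" while running
    }

    func (cdl *CDL) runLoop() {
        for {
            pr, err := cdl.askForWork() // step 1: "I am idle, give me work"
            if err != nil || pr == nil || pr.Container == nil {
                time.Sleep(cdl.pollInterval)
                continue
            }
            done := make(chan struct{})
            go func() { // step 5: keep reporting "busy" every X seconds
                t := time.NewTicker(cdl.pingInterval)
                defer t.Stop()
                for {
                    select {
                    case <-t.C:
                        cdl.pingState("busy")
                    case <-done:
                        return
                    }
                }
            }()
            // step 4: run the assigned container under crunch-run
            exec.Command("crunch-run", pr.Container.UUID).Run()
            close(done)
            cdl.pingState("idle") // step 6: report completion
        }
    }

    // Stubs; see the "Starting up" sketch for askForWork.
    func (cdl *CDL) askForWork() (*PingResponse, error) { return nil, nil }
    func (cdl *CDL) pingState(state string)             {}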

Shutting down

  1. When node manager decides a node is ready for shutdown, it makes an API call on the node record to indicate "draining".
  2. C-d-l pings "I am idle" on a "draining" record. This puts the node in the "drained" state, and c-d-l does not get any new work.
  3. Node manager sees the node is "drained" and can proceed with destroying the cloud node (the state transitions are sketched below).
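
The node-state transitions implied above could look like this on the API server side. The state names come from this page; the function itself is a hypothetical sketch:

    // Node states used in this design.
    const (
        Idle     = "idle"
        Busy     = "busy"
        Draining = "draining"
        Drained  = "drained"
        Failed   = "failed"
    )

    // onIdlePing is the transition applied when the API server receives
    // an "I am idle" ping: a draining node becomes drained and is never
    // handed new work; otherwise the node is idle and eligible for work.
    func onIdlePing(state string) (next string, assignWork bool) {
        switch state {
        case Draining, Drained:
            return Drained, false // node manager may now destroy the cloud node
        default:
            return Idle, true
        }
    }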

Handling failure

  1. If a node enters a failure state and there is a container associated with it, the container should either be unlocked (if the container is in the Locked state) or cancelled (if it is in the Running state).
  2. The API server should have a background process which looks for nodes that haven't pinged recently and puts them into the failed state (a sketch follows this list).
  3. A node can also put itself into the failed state with an API call.
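
The background process in step 2 might amount to something like the following. The real API server is Rails on PostgreSQL; this Go/SQL sketch, with hypothetical table columns and a hypothetical releaseContainer helper, only illustrates the logic:

    package main

    import (
        "database/sql"
        "time"
    )

    // sweepStaleNodes marks nodes that haven't pinged within staleAfter
    // as failed, then releases each associated container: unlock if
    // Locked, cancel if Running (handled by releaseContainer).
    func sweepStaleNodes(db *sql.DB, staleAfter time.Duration) error {
        rows, err := db.Query(
            `UPDATE nodes SET state = 'failed'
               WHERE last_ping_at < $1 AND state NOT IN ('failed', 'drained')
               RETURNING container_uuid`,
            time.Now().Add(-staleAfter))
        if err != nil {
            return err
        }
        defer rows.Close()
        for rows.Next() {
            var ctr sql.NullString
            if err := rows.Scan(&ctr); err != nil {
                return err
            }
            if ctr.Valid {
                releaseContainer(ctr.String)
            }
        }
        return rows.Err()
    }

    // releaseContainer is stubbed: unlock a Locked container, cancel a
    // Running one.
    func releaseContainer(uuid string) {}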
