Dispatching containers to cloud VMs

(Draft. This might not be needed at all: for example, we might dispatch to Kubernetes instead, and find or build a Kubernetes auto-scaler.)

Background

This is about dispatching to on-demand cloud nodes like Amazon EC2 instances.

Not to be confused with dispatching to a cloud-based container service like Amazon Elastic Container Service, Azure Batch or Google Kubernetes Engine.

In crunch1, and the early days of crunch2, we made something work with arvados-nodemanager and SLURM.

One of the goals of crunch2 is eliminating all uses of SLURM with the exception of crunch-dispatch-slurm, whose purpose is to dispatch arvados containers to a SLURM cluster that already exists for non-Arvados tasks.

This doc doesn’t describe a sequence of development tasks or a migration plan. It describes the end state: how dispatch will work when all implementation tasks and migrations are complete.

Relevant components

API server (backed by PostgreSQL) is the source of truth about which containers the system should be trying to execute (or cancel) at any given time.

Arvados configuration (currently via file in /etc, in future via consul/etcd/similar) is the source of truth about cloud provider credentials, allowed node types, spending limits/policies, etc.

crunch-dispatch-cloud-node (a new component) arranges for queued containers to run on worker nodes, brings up new worker nodes in order to run the queue faster, and shuts down idle worker nodes.

Overview of crunch-dispatch-cloud-node operation

  • When first starting up, inspect API server’s container queue and the cloud provider’s list of dispatcher-tagged cloud nodes, and restore internal state accordingly
  • When API server puts a container in Queued state, lock it, select or create a cloud node to run it on, and start a crunch-run process there to run it
  • When API server says a container (locked or dispatched by this dispatcher) should be cancelled, ensure the actual container and its crunch-run supervisor get shut down and the relevant node becomes idle
  • When a crunch-run invocation (dispatched by this dispatcher) exits without updating the container record on the API server -- or can’t run at all -- clean up accordingly

Invariant: every dispatcher-tagged cloud node is either needed by this dispatcher, or should be shut down (so if there are multiple dispatchers, they must use different tags).

Mechanisms

Interface between dispatcher and operator

Management status endpoint provides a list of its cloud VMs, each with cloud instance ID, UUID of the current / most recent container it attempted (if known), hourly price, and idle time. This snapshot info is not saved to PostgreSQL.
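
For illustration, each entry in that snapshot might look something like the struct below (a sketch only; the field names are assumptions, not the actual management API):

```go
package dispatchcloud

import "time"

// instanceStatus is one entry in the management status snapshot.
// (Sketch; field names are illustrative assumptions.)
type instanceStatus struct {
    InstanceID    string        // cloud provider's instance ID
    ContainerUUID string        // current / most recent container attempted, if known
    PricePerHour  float64       // hourly price of this instance type
    Idle          time.Duration // how long the node has been idle
}
```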

Interface between dispatcher and cloud provider

"VMProvider" interface has the few cloud instance operations we need (list instances+tags, create instance with tags, update instance tags, destroy instance).

Interface between dispatcher and worker node

Each worker node has a public key in /root/.ssh/authorized_keys. Dispatcher has the corresponding private key.

Dispatcher uses the Go SSH client library to connect to worker nodes.
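
A minimal sketch of opening such a connection with golang.org/x/crypto/ssh; the key path, user, and host-key policy here are assumptions:

```go
package dispatchcloud

import (
    "io/ioutil"

    "golang.org/x/crypto/ssh"
)

// connectSSH dials a worker node as root using the dispatcher's private key.
// (Sketch: the key path is an assumption, and a real implementation should
// verify host keys rather than ignoring them.)
func connectSSH(addr string) (*ssh.Client, error) {
    keyBytes, err := ioutil.ReadFile("/etc/arvados/dispatcher-ssh-key")
    if err != nil {
        return nil, err
    }
    signer, err := ssh.ParsePrivateKey(keyBytes)
    if err != nil {
        return nil, err
    }
    config := &ssh.ClientConfig{
        User:            "root",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    }
    return ssh.Dial("tcp", addr+":22", config)
}
```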

Probe operation

On the happy path, the dispatcher already knows the state of each worker: whether it's idle, and which container it's running. In general, though, it's necessary to probe the worker node itself.

Probe:
  • Check whether the SSH connection is alive; reopen if needed.
  • Run the configured "ready?" command (e.g., "grep /encrypted-tmp /etc/mtab"); if this fails, conclude the node is still booting.
  • Run "crunch-run --list" to get a list of crunch-run supervisors (pid + container UUID)

Dispatcher, after a successful probe, should tag the cloud node record with the dispatcher's ID and probe timestamp. (In case the tagging API fails, remember the probe time in memory too.)

Dead/lame nodes

If a node has been up for N seconds without a successful probe, despite at least M attempts, shut it down. (M handles the case where the dispatcher restarts during a time when the "update tags" operation isn't effective, e.g., provider is rate-limiting API calls.)
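
The decision might reduce to a check like this (a sketch; the parameter names standing in for N and M are assumptions):

```go
package dispatchcloud

import "time"

// shouldShutDown reports whether a node that has never completed a
// successful probe should be destroyed: it has been up longer than
// bootTimeout (N) and at least minProbes (M) probe attempts have been made.
// (Sketch; names are assumptions.)
func shouldShutDown(bootedAt, lastProbeSuccess time.Time, probeAttempts int, bootTimeout time.Duration, minProbes int) bool {
    if !lastProbeSuccess.IsZero() {
        return false // at least one probe has succeeded
    }
    return time.Since(bootedAt) > bootTimeout && probeAttempts >= minProbes
}
```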

Dead dispatchers

Every cluster should run multiple dispatchers. If one dies and stays down, the others must notice and take over (or shut down) its nodes -- otherwise those worker nodes will run forever.

Each dispatcher, when inspecting the list of cloud nodes, should try to claim (or, failing that, destroy) any node that belongs to a different dispatcher and hasn't completed a probe for N seconds. (This timeout might be longer than the "lame node" timeout.)

Dispatcher internal flow

Container lifecycle

  • Dispatcher gets list of pending/running containers from API server
  • Each container record either starts a new goroutine or sends the update on a channel to the existing goroutine
  • On start, the container goroutine calls "allocate node" on the scheduler with the container UUID, priority, instance type, and container state channel.
  • "Allocate node" blocks until it can return a node object allocated to that container
  • The node must be either idle, or busy running that specific container
  • If the container is "locked" and the node is idle, call "crunch-run" to start the container
  • If the container is "locked" and the node is busy, then do nothing and continue monitoring
  • If the container is "running" and the node is busy, then do nothing and continue monitoring
  • If the container is "running" and the node is idle, cancel the container (this means the original container run has been lost)
  • If the container is "complete" or "cancelled" then tell the scheduler to release the node

Scheduler lifecycle

  • Periodically poll all cloud nodes to get their current state (one of: booting, idle, busy, drain/shutdown) and (free/locked) and what container they are running
  • When an allocation request comes in, add to request list/heap (sorted by priority) and monitor container state channel for changes in priority
  • If the new request is at the top of the request list, try to schedule it immediately. Otherwise, periodically run the scheduler over the request list, starting from the top.
    • Check if there is a free, busy node that is already running that container and return that.
    • Check for a free, idle node of the appropriate instance type, if found, lock it and return it
    • If no node is found, start a "create node" goroutine, make a note on the allocation request, and go on to the next request
  • When the scheduler gets notice that a new idle node is available, check the request list and see if there are any pending requests for that instance type, if so, allocate the node
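
A sketch of the scheduler's request list and a single scheduling pass over it; the type and field names here are assumptions:

```go
package dispatchcloud

import "sort"

// allocRequest is one pending "allocate node" call.
type allocRequest struct {
    containerUUID string
    priority      int
    instanceType  string
    result        chan *worker // receives the node once one is allocated
}

// worker is the scheduler's view of one cloud node (sketch).
type worker struct {
    instanceType  string
    busy          bool   // currently running a container
    locked        bool   // allocated to a request
    containerUUID string // container it is running, if busy
}

// schedulePass walks the request list from the top (highest priority first)
// and tries to satisfy each request with an existing node. Requests that
// cannot be satisfied would trigger a "create node" goroutine and stay on
// the list for the next pass.
func schedulePass(requests []*allocRequest, workers []*worker) {
    sort.Slice(requests, func(i, j int) bool {
        return requests[i].priority > requests[j].priority
    })
    for _, req := range requests {
        if w := findNode(req, workers); w != nil {
            w.locked = true
            req.result <- w
        }
        // else: note on the request that a node of req.instanceType is
        // being created, and go on to the next request.
    }
}

// findNode prefers a free node that is already busy running this container,
// then a free, idle node of the requested instance type.
func findNode(req *allocRequest, workers []*worker) *worker {
    for _, w := range workers {
        if !w.locked && w.busy && w.containerUUID == req.containerUUID {
            return w
        }
    }
    for _, w := range workers {
        if !w.locked && !w.busy && w.instanceType == req.instanceType {
            return w
        }
    }
    return nil
}
```

Sorting the whole slice on each pass keeps the sketch short; as noted above, a real implementation would more likely keep the requests in a priority-ordered list or heap, and would remove satisfied requests from it.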

Node lifecycle

  • Generate a random node id
  • Starts a "create node" request, tagged with our node id
  • Send create request to the cloud
  • Periodically poll the cloud node list
  • Wait for the node to appear in the cloud node list
  • Get the IP address
  • Try to establish ssh connection
  • Once connected, wait for "ready" script to indicate the node is ready
  • Once node is ready, send notice of new idle node to the scheduler
  • The node goroutine continues to monitor the remote node's state, sending state updates to the scheduler
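
A sketch of the node lifecycle above as a single goroutine, building on the VMProvider, connectSSH, and probe sketches earlier on this page; the tag name, polling interval, and channel hand-off to the scheduler are assumptions:

```go
package dispatchcloud

import (
    "context"
    "crypto/rand"
    "fmt"
    "time"
)

// createNode brings up one cloud node and notifies the scheduler (via the
// ready channel) once the node is ready to run containers.
func createNode(ctx context.Context, provider VMProvider, instanceType, readyCmd string, ready chan<- Instance) error {
    // Generate a random node id and tag the create request with it.
    idbuf := make([]byte, 8)
    rand.Read(idbuf)
    nodeID := fmt.Sprintf("node-%x", idbuf)

    if _, err := provider.Create(ctx, instanceType, map[string]string{"dispatch-node-id": nodeID}); err != nil {
        return err
    }
    for ctx.Err() == nil {
        // Periodically poll the cloud node list until our tagged node
        // appears and has an IP address.
        time.Sleep(10 * time.Second)
        instances, err := provider.Instances(ctx)
        if err != nil {
            continue
        }
        for _, inst := range instances {
            if inst.Tags["dispatch-node-id"] != nodeID || inst.Address == "" {
                continue
            }
            // Try to establish an ssh connection, then wait for the
            // "ready" script to indicate the node is ready.
            client, err := connectSSH(inst.Address)
            if err != nil {
                continue // not accepting connections yet
            }
            if ok, _, _ := probe(client, readyCmd); ok {
                // Notify the scheduler of a new idle node. A real
                // implementation would keep this connection and continue
                // monitoring the node, sending state updates to the scheduler.
                ready <- inst
                return nil
            }
            client.Close()
        }
    }
    return ctx.Err()
}
```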
