Dispatching containers to cloud VMs

(Draft)

Component name / purpose

arvados-dispatch-cloud runs Arvados user containers on generic public cloud infrastructure by automatically creating and destroying VMs of various sizes according to demand, preparing the VMs' runtime environments, and running containers on them.

Overview of operation

The dispatcher waits for containers to appear in the queue, and runs them on appropriately sized cloud VMs. When there are no idle cloud VMs of the desired size, the dispatcher brings up more VMs using the cloud provider's API. The dispatcher also shuts down VMs that stay idle longer than the configured idle timeout -- and sooner if the provider starts refusing to create new VMs.
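
The scale-up decision described above can be sketched as a pure function over the queue and the current instance list. This is an illustrative Python sketch, not the dispatcher's actual (Go) implementation; the function name and data shapes are invented:

```python
from collections import Counter

def instances_to_create(queued_types, idle_types):
    """Given the instance type wanted by each queued container and the
    types of currently idle VMs, return how many new VMs of each type
    to order from the cloud provider."""
    demand = Counter(queued_types)  # containers waiting, per instance type
    supply = Counter(idle_types)    # idle VMs, per instance type
    # Counter subtraction drops zero/negative counts, i.e. types with
    # enough idle capacity already.
    return dict(demand - supply)

# Two m4.large containers queued, one idle m4.large VM:
# order one more m4.large and one m4.xlarge.
orders = instances_to_create(["m4.large", "m4.large", "m4.xlarge"],
                             ["m4.large"])
```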

Interaction with other components

Controller (backed by RailsAPI and PostgreSQL) supplies the container queue: which containers the system should be trying to execute (or cancel) at any given time.

The cloud provider's API supplies a list of VMs that exist (or are being created) at a given time and their network addresses, accepts orders to create new VMs, updates instance tags, and (optionally, depending on the driver) obtains the VMs' SSH server public keys.

The SSH server on each cloud VM allows the dispatcher to authenticate with a private key and execute shell commands as root (either directly or via sudo).

Instance tags

The dispatcher relies on the cloud provider's tagging feature to persist state across server restarts.
  • {"InstanceType": "foo"} indicates that the instance was created with the specs from the instance type named "foo" in the cluster configuration file.
  • {"IdleBehavior": "hold"} indicates that the management API has been used to put the instance in "hold" state.
  • {"InstanceSecret": "ad23b6a8912f2b75d8a5e6887fbcb82f8024daea"} is a random string used to verify the instance's SSH host key.

Provider-specific drivers (Amazon, Azure) determine exactly how these tags are encoded in the cloud API, and can use tags to persist their own internal state as well. For example, a driver might save tags named "Arvados-DispatchCloud-InstanceType" rather than just "InstanceType".
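The tag-name prefixing a driver might apply can be sketched like this (illustrative Python only; the real drivers are written in Go, and the prefix shown is just the example from the text above):

```python
# Example prefix from the text above; an actual driver may differ.
TAG_PREFIX = "Arvados-DispatchCloud-"

def encode_tags(tags):
    """Map the dispatcher's logical tags to provider-level tag names."""
    return {TAG_PREFIX + k: v for k, v in tags.items()}

def decode_tags(provider_tags):
    """Recover the dispatcher's tags, ignoring unrelated provider tags."""
    return {k[len(TAG_PREFIX):]: v
            for k, v in provider_tags.items()
            if k.startswith(TAG_PREFIX)}

tags = {"InstanceType": "m4.large", "IdleBehavior": "hold"}
assert decode_tags(encode_tags(tags)) == tags
```

Round-tripping through encode/decode is what lets the dispatcher restore its state from the provider's instance list after a restart.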

Deployment

Where to install: The arvados-dispatch-cloud process can run anywhere, as long as it has network access to the Arvados controller, the cloud provider's API, and the worker VMs. Each Arvados cluster should run only one arvados-dispatch-cloud process.
  • Future versions will support multiple dispatchers.

Dispatcher's SSH key: The operator must generate an SSH key pair for the dispatcher to use when connecting to cloud VMs. The private key is stored (without a passphrase) in the cluster configuration file. It does not need to be saved in ~/.ssh/.

Cloud VM image: The operator must provide a VM image with an SSH server on a port reachable by the dispatcher (default 22, configurable per cluster). The dispatcher's SSH public key must be listed in /root/.ssh/authorized_keys. The image should also include systemd-cat (part of systemd) and suitable versions of docker and crunch-run. The /var/lock directory must be available for lockfiles with names matching "crunch-run-*.*".
  • It is possible to install docker and crunch-run using a custom boot probe command, but pre-installing is more efficient.
  • Future versions will automatically sync the crunch-run binary from the dispatcher host to each worker node.
  • The Azure driver creates a new admin user account and installs the SSH public key itself, so /root/.ssh/authorized_keys is not needed. The VM image must include sudo.

Cloud provider account: The dispatcher uses cloud provider credentials to create and delete VMs and other cloud resources. An Arvados user can create an arbitrary number of long-running containers, and the dispatcher will try to run all of them. Currently the dispatcher does not enforce any resource limits of its own, so the operator must ensure the cloud provider itself is enforcing a suitable quota.

Migrating from nodemanager/SLURM: When VM images, SSH keys, and configuration files are ready, disable nodemanager and crunch-dispatch-slurm. Install arvados-dispatch-cloud deb/rpm package. Confirm success with systemctl status arvados-dispatch-cloud and journalctl -fu arvados-dispatch-cloud. See Migrating from arvados-node-manager to arvados-dispatch-cloud.

Configuration

Arvados Cluster configuration (currently a file in /etc) supplies cloud provider credentials, allowed node types, spending limits/policies, etc.

    CloudVMs:
      BootProbeCommand: "docker ps -q" 
      SSHPort: 22
      SyncInterval: 1m    # how often to get list of active instances from cloud provider
      TimeoutIdle: 1m     # shutdown if idle longer than this
      TimeoutBooting: 10m # shutdown if exists longer than this without running BootProbeCommand successfully
      TimeoutProbe: 2m    # shutdown if (after booting) communication fails longer than this, even if containers are running
      TimeoutShutdown: 1m # shutdown again if node still exists this long after shutdown
      Driver: Amazon
      DriverParameters:   # following configs are driver dependent
        Region: us-east-1
        AccessKeyID: abcdef
        SecretAccessKey: abcdefghijklmnopqrstuvwxyz
        SubnetID: subnet-01234567
        SecurityGroupIDs: sg-01234567
        AdminUsername: ubuntu
        EBSVolumeType: gp2
    Dispatch:
      StaleLockTimeout: 1m     # after restart, time to wait for workers to come up before abandoning locks from previous run
      PollInterval: 1m         # how often to get latest queue from arvados controller
      ProbeInterval: 10s       # how often to probe each instance for current status/vital signs
      MaxProbesPerSecond: 1000 # limit total probe rate for dispatch process (across all instances)
      PrivateKey: |            # SSH private key used to log in as root on worker VMs
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEAqYm4XsQHm8sBSZFwUX5VeW1OkGsfoNzcGPG2nzzYRhNhClYZ
        0ABHhUk82HkaC/8l6d/jpYTf42HrK42nNQ0r0Yzs7qw8yZMQioK4Yk+kFyVLF78E
        GRG4pGAWXFs6pUchs/lm8fo9zcda4R3XeqgI+NO+nEERXmdRJa1FhI+Za3/S/+CV
        mg+6O00wZz2+vKmDPptGN4MCKmQOCKsMJts7wSZGyVcTtdNv7jjfr6yPAIOIL8X7
        LtarBCFaK/pD7uWll/Uj7h7D8K48nIZUrvBJJjXL8Sm4LxCNoz3Z83k8J5ZzuDRD
        gRiQe/C085mhO6VL+2fypDLwcKt1tOL8fI81MwIDAQABAoIBACR3tEnmHsDbNOav
        Oxq8cwRQh9K2yDHg8BMJgz/TZa4FIx2HEbxVIw0/iLADtJ+Z/XzGJQCIiWQuvtg6
        exoFQESt7JUWRWkSkj9JCQJUoTY9Vl7APtBpqG7rIEQzd3TvzQcagZNRQZQO6rR7
        p8sBdBSZ72lK8cJ9tM3G7Kor/VNK7KgRZFNhEWnmvEa3qMd4hzDcQ4faOn7C9NZK
        dwJAuJVVfwOLlOORYcyEkvksLaDOK2DsB/p0AaCpfSmThRbBKN5fPXYaKgUdfp3w
        70Hpp27WWymb1cgjyqSH3DY+V/kvid+5QxgxCBRq865jPLn3FFT9bWEVS/0wvJRj
        iMIRrjECgYEA4Ffv9rBJXqVXonNQbbstd2PaprJDXMUy9/UmfHL6pkq1xdBeuM7v
        yf2ocXheA8AahHtIOhtgKqwv/aRhVK0ErYtiSvIk+tXG+dAtj/1ZAKbKiFyxjkZV
        X72BH7cTlR6As5SRRfWM/HaBGEgED391gKsI5PyMdqWWdczT5KfxAksCgYEAwXYE
        ewPmV1GaR5fbh2RupoPnUJPMj36gJCnwls7sGaXDQIpdlq56zfKgrLocGXGgj+8f
        QH7FHTJQO15YCYebtsXWwB3++iG43gVlJlecPAydsap2CCshqNWC5JU5pan0QzsP
        exzNzWqfUPSbTkR2SRaN+MenZo2Y/WqScOAth7kCgYBgVoLujW9EXH5QfXJpXLq+
        jTvE38I7oVcs0bJwOLPYGzcJtlwmwn6IYAwohgbhV2pLv+EZSs42JPEK278MLKxY
        lgVkp60npgunFTWroqDIvdc1TZDVxvA8h9VeODEJlSqxczgbMcIUXBM9yRctTI+5
        7DiKlMUA4kTFW2sWwuOlFwKBgGXvrYS0FVbFJKm8lmvMu5D5x5RpjEu/yNnFT4Pn
        G/iXoz4Kqi2PWh3STl804UF24cd1k94D7hDoReZCW9kJnz67F+C67XMW+bXi2d1O
        JIBvlVfcHb1IHMA9YG7ZQjrMRmx2Xj3ce4RVPgUGHh8ra7gvLjd72/Tpf0doNClN
        ti/hAoGBAMW5D3LhU05LXWmOqpeT4VDgqk4MrTBcstVe7KdVjwzHrVHCAmI927vI
        pjpphWzpC9m3x4OsTNf8m+g6H7f3IiQS0aiFNtduXYlcuT5FHS2fSATTzg5PBon9
        1E6BudOve+WyFyBs7hFWAqWFBdWujAl4Qk5Ek09U2ilFEPE7RTgJ
        -----END RSA PRIVATE KEY-----
    InstanceTypes:
    - Name: m4.large
      VCPUs: 2
      RAM: 7782000000
      Scratch: 32000000000
      IncludedScratch: 32000000000
      Price: 0.1
    - Name: m4.large.spot
      Preemptible: true
      VCPUs: 2
      RAM: 7782000000
      Scratch: 32000000000
      IncludedScratch: 32000000000
      Price: 0.1
    - Name: m4.xlarge
      VCPUs: 4
      RAM: 15564000000
      Scratch: 80000000000
      IncludedScratch: 80000000000
      Price: 0.2
    - Name: m4.xlarge.spot
      Preemptible: true
      VCPUs: 4
      RAM: 15564000000
      Scratch: 80000000000
      IncludedScratch: 80000000000
      Price: 0.2
    - Name: m4.2xlarge
      VCPUs: 8
      RAM: 31129000000
      Scratch: 160000000000
      IncludedScratch: 160000000000
      Price: 0.4
    - Name: m4.2xlarge.spot
      Preemptible: true
      VCPUs: 8
      RAM: 31129000000
      Scratch: 160000000000
      IncludedScratch: 160000000000
      Price: 0.4

Management API

APIs for monitoring/diagnostics/control are available via HTTP on a configurable address/port. Request headers must include "Authorization: Bearer {management token}".

Responses are JSON-encoded and resemble other Arvados APIs:

{
  "items": [
    {
      "name": "...",
      ...
    },
    ...
  ]
}
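
A management API request can be built as follows (Python sketch; the host, port, and token below are placeholders, not defaults):

```python
import urllib.request

def management_request(base_url, token, path):
    """Build an authenticated request for the dispatcher's management API."""
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", "Bearer " + token)
    return req

req = management_request("http://localhost:9006", "examplemgmttoken",
                         "/arvados/v1/dispatch/containers")
assert req.get_header("Authorization") == "Bearer examplemgmttoken"
# urllib.request.urlopen(req) would return the JSON-encoded item list.
```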

GET /arvados/v1/dispatch/containers lists queued/locked/running containers. Each returned item includes:
  • container UUID
  • container state (Queued/Locked/Running/Complete/Cancelled)
  • desired instance type
  • time appeared in queue
  • time started (if started)
  • if you're switching from slurm, this is roughly equivalent to squeue
POST /arvados/v1/dispatch/containers/kill?container_uuid=X terminates a container immediately.
  • a single attempt is made to send SIGTERM to the container's supervisor (crunch-run) process
  • container state/priority fields are not affected
  • assuming SIGTERM works, the container record will end up with state "Cancelled"
  • if you're switching from slurm, this is roughly equivalent to scancel
GET /arvados/v1/dispatch/instances lists cloud VMs. Each returned item includes:
  • provider's instance ID
  • hourly price (from configuration file)
  • instance type (from configuration file)
  • instance type (from provider's menu)
  • UUID of the current / most recent container attempted (if known)
  • time last container finished (or boot time, if nothing run yet)
  • if you're switching from slurm, this is roughly equivalent to sinfo
POST /arvados/v1/dispatch/instances/hold?instance_id=X puts an instance in "hold" state.
  • if the instance is currently running a container, it is allowed to continue
  • no further containers will be scheduled on the instance
  • the instance will not be shut down automatically
POST /arvados/v1/dispatch/instances/drain?instance_id=X puts an instance in "drain" state.
  • if the instance is currently running a container, it is allowed to continue
  • no further containers will be scheduled on the instance
  • the instance will be shut down automatically when all containers finish
POST /arvados/v1/dispatch/instances/run?instance_id=X puts an instance in the default "run" state.
  • if the instance is currently running a container, it is allowed to continue
  • more containers will be scheduled on the instance when it becomes available
  • the instance will be shut down automatically when it exceeds the configured idle timeout
POST /arvados/v1/dispatch/instances/kill?instance_id=X shuts down an instance immediately.
  • the instance is terminated immediately via cloud API
  • SIGTERM is sent to the container if one is running, but no effort is made to give it time to end gracefully before terminating the instance
POST /arvados/v1/dispatch/loglevel?level=debug sets the logging threshold to "debug" or "info".
  • .../loglevel?level=debug enables debug logs
  • .../loglevel?level=info disables debug logs

† not yet implemented

Management CLI

Sub-command for arvados-server:

arvados-server dispatch

A short form of the binary is available by renaming (or symlinking) arvados-server to ad; when invoked under that name, it provides access only to the "dispatch" subcommands.

The subcommands can be abbreviated to the shortest form that is distinguishable from other subcommands.

Some commands apply to environments with arvados-dispatch-cloud or crunch-dispatch-slurm, and some only apply when arvados-dispatch-cloud is running.

The host that runs the ad binary must have access to a config.yml that lists at a minimum: the endpoint for the dispatcher and the management token.

All commands support a -o flag to specify the output format. The default is "table", which is suitable for human consumption at the CLI; the alternative is "json", which is suitable for machine consumption.

Manage containers (arvados-dispatch-cloud and crunch-dispatch-slurm):

# list containers (default state is 'Queued,Locked,Running')
# possible states: Queued, Locked, Running, Complete, Cancelled
# multiple states may be provided, separated with a comma
$ ad containers list -s <state>
$ ad c l

# terminate a container
$ ad container terminate <uuid>
$ ad c t <uuid>

† Inspect and manipulate loglevel of the running dispatcher (arvados-dispatch-cloud and crunch-dispatch-slurm):

# get arvados-dispatch loglevel
$ ad loglevel
$ ad l

# set arvados-dispatch loglevel
$ ad loglevel -set <debug|info>
$ ad l -set <debug|info>

Manage instances (arvados-dispatch-cloud only):

# list instances
$ ad instances list
$ ad i l

# put instance in 'hold' state
$ ad instance hold <instance_id>
$ ad i h <instance_id>

# return instance to 'run' state
$ ad instance run <instance_id>
$ ad i r <instance_id>

# terminate instance immediately
$ ad instance terminate <instance_id>
$ ad i t <instance_id>

# ssh to instance
$ ad instance ssh <instance_id>
$ ad i s <instance_id>

† not yet implemented

Metrics

Metrics are available via HTTP on a configurable address/port (conventionally :9006). Request headers must include "Authorization: Bearer {management token}".
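
Metrics are served in the Prometheus text exposition format; a minimal parser sketch is shown below. The metric name in the sample is invented for illustration (labels are ignored in this sketch):

```python
def parse_metrics(text):
    """Parse simple 'name value' lines from Prometheus text exposition
    format, skipping comments and blank lines."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP containers_running Number of containers running. (hypothetical name)
containers_running 3
"""
assert parse_metrics(sample) == {"containers_running": 3.0}
```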

Metrics include:
  • (gauge) number of existing VMs
  • (gauge) total hourly price of all existing VMs
  • (gauge) total VCPUs and memory in all existing VMs
  • (gauge) total VCPUs and memory allocated to containers
  • (gauge) number of containers running
  • (gauge) number of containers allocated to VMs but not started yet (because VMs are pending/booting)
  • (gauge) number of containers not allocated to VMs (because provider quota is reached)
  • (gauge) total hourly price of VMs, partitioned by allocation state (booting, running, idle, shutdown)
  • (summary) time elapsed between VM creation and first successful SSH connection to that VM
  • (summary) time elapsed between first successful SSH connection on a VM and ready to run a container on that VM
  • (summary) time elapsed between first shutdown attempt on a VM and its disappearance from the provider listing
  • (summary) wait times (between seeing a container in the queue or requeueing, and starting its crunch-run process on a worker) across previous starts
  • (gauge) longest wait time of any unstarted container
  • †(counter) cumulative instance time and cost, partitioned by allocation state and node type
  • (counter) VMs that have either become ready or reached boot timeout, partitioned by success/timeout

† not yet implemented

Logs

For purposes of troubleshooting, a JSON-formatted log entry is printed on stderr when...

  Event                                                          Loglevel  Details (in addition to timestamp)
  a new instance is created/ordered                              info      instance type name
  an instance appears on the provider's list of instances        info      instance ID
  an instance's boot probe succeeds                              info      instance ID
  an instance is shut down after boot timeout                    warn      instance ID, †stdout/stderr/error from last boot probe attempt
  an instance shutdown is requested                              info      instance ID
  an instance disappears from the provider's list of instances   info      instance ID and previous state (booting/idle/shutdown)
  a cloud provider API or driver error occurs                    error     provider/driver's error message
  a new container appears in the Arvados queue                   info      container UUID, desired instance type name
  a container is locked by the dispatcher                        debug     container UUID
  a crunch-run process is started on an instance                 info      container UUID, instance ID, crunch-run PID
  a crunch-run process fails to start on an instance             info      container UUID, instance ID, stdout/stderr/exitcode
  a crunch-run process ends                                      info      container UUID, instance ID
  an active container's state changes to Complete or Cancelled   info      container UUID, new state
  an active container is requeued after being locked             info      container UUID
  an Arvados API error occurs                                    warn      error message

† not yet implemented

Example log entries from test suite (note test suite uses text formatting, production logging uses JSON formatting):

INFO[0000] creating new instance                         ContainerUUID=zzzzz-dz642-000000000000160 InstanceType=type8
INFO[0000] instance appeared in cloud                    IdleBehavior=run Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 State=booting
INFO[0000] boot probe succeeded                          Command=true Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 stderr= stdout=
INFO[0000] instance booted; will try probeRunning        Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 ProbeStart="2019-02-05 15:49:49.183431341 -0500 EST m=+0.126074285" 
INFO[0000] probes succeeded, instance is in service      Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 ProbeStart="2019-02-05 15:49:49.183431341 -0500 EST m=+0.126074285" RunningContainers=0 State=idle
INFO[0000] crunch-run process started                    ContainerUUID=zzzzz-dz642-000000000000160 Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 Priority=20
INFO[0000] container finished                            ContainerUUID=zzzzz-dz642-000000000000160 State=Complete
...
INFO[0002] shutdown idle worker                          Age=151.615512ms IdleBehavior=run Instance=stub-providertype8-6ec34c367674cb74 InstanceType=type8 State=idle
INFO[0002] instance disappeared in cloud                 Instance=stub-providertype8-6ec34c367674cb74 WorkerState=shutdown

If the dispatcher starts with a non-empty ARVADOS_DEBUG environment variable, it also prints more detailed logs about other internal state changes, using level=debug.

Internal details

Worker lifecycle


  ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
  │                                                                                                                                                    │
  │                  create() returns ID                                                                                                               │         want=drain
  │    ┌───────────────────────────────────────────────────────────────────────────┐                                      ┌────────────────────────────┼─────────────────────────────────────────┐
  │    │                                                                           ∨                                      │                            │                                         ∨
  │  ┌─────────────┐  appears in cloud list   ┌─────────┐  create() returns ID   ┌─────────┐  boot+run probes succeed   ┌──────┐  container starts   ┌─────────┐  container ends, want=drain   ┌──────────┐  instance disappears from cloud   ┌──────┐
  │  │ Nonexistent │ ───────────────────────> │ Unknown │ ─────────────────────> │ Booting │ ─────────────────────────> │      │ ──────────────────> │ Running │ ────────────────────────────> │          │ ────────────────────────────────> │ Gone │
  │  └─────────────┘                          └─────────┘                        └─────────┘                            │      │                     └─────────┘                               │          │                                   └──────┘
  │                                             │                                                                       │      │                                 idle timeout                  │          │
  │                                             │                                                                       │ Idle │ ────────────────────────────────────────────────────────────> │ Shutdown │
  │                                             │                                                                       │      │                                                               │          │
  │                                             │                                                                       │      │                                 probe timeout                 │          │
  │                                             │                                                                       │      │ ────────────────────────────────────────────────────────────> │          │
  │                                             │                                                                       └──────┘                                                               └──────────┘
  │                                             │                                                                         ∧      boot timeout                                                    ∧
  │                                             └─────────────────────────────────────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────┘
  │                                                                                                                       │
  │   container ends                                                                                                      │
  └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

Scheduling policy

The container priority field determines the order in which resources are allocated.
  • If container C1 has priority P1,
  • ...and C2 has higher priority P2,
  • ...and there is no pending/booting/idle VM suitable for running C2,
  • ...then C1 will not be started.
However, containers that run on different VM types don't necessarily start in priority order.
  • If container C1 has priority P1,
  • ...and C2 has higher priority P2,
  • ...and there is no idle VM suitable for running C2,
  • ...and there is a pending/booting VM that will be suitable for running C2 when it comes up,
  • ...and there is an idle VM suitable for running C1,
  • ...then C1 will start before C2.
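
The two rules above can be illustrated with a small simulation (Python sketch only; this is not the dispatcher's actual algorithm, and the data shapes are invented):

```python
def schedule(containers, vms):
    """containers: list of (priority, instance_type) pairs.
    vms: dict mapping instance type -> list of VM states
         ("idle", "booting", "pending").
    Returns the containers that start now, per the rules above."""
    started = []
    blocked = False  # set once a higher-priority container has no VM at all
    for priority, itype in sorted(containers, reverse=True):
        states = vms.get(itype, [])
        if "idle" in states and not blocked:
            states.remove("idle")  # consume the idle VM
            started.append((priority, itype))
        elif any(s in ("booting", "pending") for s in states):
            # A suitable VM is coming up: this container waits, but
            # lower-priority containers on other types may proceed.
            continue
        else:
            # No suitable VM at all: nothing lower-priority may start.
            blocked = True
    return started

cs = [(10, "m4.xlarge"), (5, "m4.large")]
# Rule 2: C2's VM is still booting, so lower-priority C1 starts first.
assert schedule(cs, {"m4.large": ["idle"],
                     "m4.xlarge": ["booting"]}) == [(5, "m4.large")]
# Rule 1: C2 has no suitable VM at all, so C1 does not start either.
assert schedule(cs, {"m4.large": ["idle"]}) == []
```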

Special cases / synchronizing state

When first starting up, the dispatcher inspects the API server's container queue and the cloud provider's list of dispatcher-tagged cloud nodes, and restores its internal state accordingly.

At startup, some containers might have state=Locked. The dispatcher can't be sure these have no corresponding crunch-run process anywhere until it establishes communication with all running instances. To avoid breaking priority order by guessing wrong, the dispatcher avoids scheduling any new containers until all such "stale-locked" containers are matched up with crunch-run processes on existing VMs (typically preparing a docker image) or all of the existing VMs have been probed successfully (meaning the locked containers aren't running anywhere and need to be rescheduled).
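
The stale-lock gate described above reduces to a simple predicate (illustrative Python; names are invented):

```python
def can_schedule_new_containers(stale_locked, running_uuids,
                                unprobed_instances):
    """stale_locked: UUIDs of containers already locked at restart.
    running_uuids: container UUIDs seen (via crunch-run) on probed workers.
    unprobed_instances: instances not yet probed successfully.
    New scheduling may begin once every stale-locked container has been
    matched to a crunch-run process, or every instance has been probed."""
    unmatched = set(stale_locked) - set(running_uuids)
    return not unmatched or not unprobed_instances
```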

At startup, some instances might still be running containers that were started by a prior invocation, even though the (new) boot probe command fails. Such instances are left alive at least until the containers finish. After that, the usual rules apply: if boot probe succeeds before boot timeout, start scheduling containers; otherwise, shut down. This allows the operator to configure a new image along with a new boot probe command that only works on the new image, without disrupting users' work.

When a user cancels a container request with state=Locked or Running, the container priority changes to 0. On its next poll, the dispatcher notices this and kills any corresponding crunch-run processes (or, if there is no such process, just unlocks the container).

When a crunch-run process ends without finalizing its container's state, the dispatcher notices this and sets state to Cancelled.

Probes

On the happy path, the dispatcher already knows the state of each worker: whether it's idle, and which container it's running. In general, though, it's necessary to probe the worker node itself.

Probe:
  • Check whether the SSH connection is alive; reopen if needed.
  • Run the configured "ready?" command (e.g., "grep /encrypted-tmp /etc/mtab"); if this fails, conclude the node is still booting.
  • Run "crunch-run --list" to get a list of crunch-run supervisors (pid + container UUID).
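
The probe sequence above can be sketched as follows (Python, with SSH command execution abstracted behind a callable; the "pid uuid" output format for --list is an assumption for illustration, not a documented contract):

```python
def probe(run, boot_probe_cmd):
    """run(cmd) executes cmd on the worker over SSH and returns
    (exitcode, stdout). Returns ("booting", []) until the boot probe
    passes, otherwise ("running", [(pid, uuid), ...])."""
    exitcode, _ = run(boot_probe_cmd)
    if exitcode != 0:
        return "booting", []          # "ready?" command failed: still booting
    exitcode, out = run("crunch-run --list")
    if exitcode != 0:
        raise RuntimeError("probe failed")
    supervisors = []
    for line in out.splitlines():
        pid, uuid = line.split()      # assumed "pid uuid" line format
        supervisors.append((int(pid), uuid))
    return "running", supervisors

# Exercise with a fake runner instead of a real SSH connection.
fake = {"docker ps -q": (0, ""),
        "crunch-run --list": (0, "1234 zzzzz-dz642-000000000000160")}
state, sups = probe(lambda cmd: fake[cmd], "docker ps -q")
assert state == "running"
assert sups == [(1234, "zzzzz-dz642-000000000000160")]
```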

Detecting dead/lame nodes

If a node has been up for N seconds without a successful probe, it is shut down, even if it was running a container last time it was contacted successfully.
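
This rule amounts to a timeout comparison against the last successful probe (sketch; N corresponds to the TimeoutProbe setting in the configuration above):

```python
def should_shut_down(now, last_probe_success, timeout_probe):
    """Shut the node down if no probe has succeeded within timeout_probe
    seconds, regardless of whether a container was running at last contact.
    All arguments are times/durations in seconds."""
    return now - last_probe_success > timeout_probe

assert should_shut_down(now=300, last_probe_success=100, timeout_probe=120)
assert not should_shut_down(now=150, last_probe_success=100, timeout_probe=120)
```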

Future plans / features

Per-instance-type VM images: It can be useful to run differently configured/tuned kernels/systems on different instance types, use different ops/monitoring systems on preemptible instances, etc. In addition to a system-wide default, each instance type could optionally specify an image.

Selectable VM images: When upgrading a production system, it can be useful to run a few trial containers on a new VM image before making it the default.

Add support for Google Cloud.

Updated by Ward Vandewege almost 4 years ago · 82 revisions