Fixing cloud scheduling

Our current approach to scheduling containers on the cloud using SLURM has a number of problems:

  • Head-of-line problem: with a single queue, slurm only schedules the job at the head of the queue; if that job cannot be scheduled, every other job has to wait. This leaves nodes sitting idle and reduces throughput.
  • Queue ordering doesn't reflect our desired priority order without a lot of manipulation of job "niceness" values
  • The slurm queue forgets dynamic node configuration, so constantly running maintenance processes are needed to reapply it

Some solutions:

Use slurm better

Most of our slurm problems are self-inflicted. We have a single partition and single queue with heterogeneous, dynamically configured nodes. We would have fewer problems if we defined slurm node name ranges such as "compute-small-[0-255]", "compute-medium-[0-255]", and "compute-large-[0-255]" with appropriate specs, and defined a partition for each size range, so that a job waiting for one node size does not hold up jobs that want a different node size.
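
A minimal slurm.conf sketch of this layout might look like the following; the node counts, CPU, and memory figures are placeholders for illustration, not proposed instance specs:

    # One node name range per instance size, declared as cloud nodes
    NodeName=compute-small-[0-255]  CPUs=2  RealMemory=7168   State=CLOUD
    NodeName=compute-medium-[0-255] CPUs=8  RealMemory=30720  State=CLOUD
    NodeName=compute-large-[0-255]  CPUs=16 RealMemory=61440  State=CLOUD

    # One partition per size range, so each size has its own queue head
    PartitionName=small  Nodes=compute-small-[0-255]  State=UP MaxTime=INFINITE Default=YES
    PartitionName=medium Nodes=compute-medium-[0-255] State=UP MaxTime=INFINITE
    PartitionName=large  Nodes=compute-large-[0-255]  State=UP MaxTime=INFINITE

crunch-dispatch-slurm would then submit each container to the partition matching its requested node size, so a container waiting for large nodes no longer blocks containers that fit on small ones.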

Advantages:

  • Least overall change compared to current architecture

Disadvantages:

  • Requires coordinated change to API server, node manager, crunch-dispatch-slurm, cluster configuration
  • Ops seems to think that defining (sizes * max nodes) hostnames might be a problem?
  • Can't adjust node configurations without restarting the whole cluster

Cloud provider scheduling APIs

Use cloud provider scheduling APIs such as Azure Batch, AWS Batch, or the Google Pipelines API to perform cluster scaling and scheduling.

Would be implemented as custom Arvados dispatcher services: crunch-dispatch-azure, crunch-dispatch-aws, crunch-dispatch-google.
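
As an illustration, the per-container submit step in a crunch-dispatch-aws might look roughly like the sketch below; the job queue name, job definition name, and crunch-run invocation are assumptions, not an existing implementation:

    # Hypothetical sketch of the submit step in a crunch-dispatch-aws.
    # "arvados-compute" and "crunch-run" are assumed Batch resources.
    import boto3

    batch = boto3.client("batch")

    def submit_container(container_uuid, vcpus, memory_mb):
        # Hand the container off to AWS Batch, which then handles node
        # provisioning and placement in place of node manager.
        return batch.submit_job(
            jobName="arvados-" + container_uuid,
            jobQueue="arvados-compute",
            jobDefinition="crunch-run",
            containerOverrides={
                "vcpus": vcpus,
                "memory": memory_mb,
                "command": ["crunch-run", container_uuid],
            },
        )

The provider service owns scaling and placement, which is what lets us drop node manager; how much control it leaves us over priority ordering is the open question noted below.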

Advantages:

  • Get rid of Node Manager

Disadvantages:

  • Has to be implemented per cloud provider.
  • May be hard to customize behavior, such as job priority.

Kubernetes

Submit containers to a Kubernetes cluster. Kubernetes handles cluster scaling and scheduling.
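
A hedged sketch of dispatching one container as a Kubernetes Job, using the Kubernetes Python client; the image name, namespace, and resource requests are assumptions for illustration:

    # Hypothetical sketch: run crunch-run for one container as a Kubernetes Job.
    # Image, namespace, and resource figures are placeholders.
    from kubernetes import client, config

    def dispatch_container(container_uuid):
        config.load_kube_config()
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name="crunch-" + container_uuid.lower()),
            spec=client.V1JobSpec(
                backoff_limit=0,
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[client.V1Container(
                            name="crunch-run",
                            image="arvados/crunch-run:latest",  # assumed image
                            command=["crunch-run", container_uuid],
                            resources=client.V1ResourceRequirements(
                                requests={"cpu": "2", "memory": "4Gi"},
                            ),
                        )],
                    ),
                ),
            ),
        )
        client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Kubernetes (plus a cluster autoscaler) then owns node provisioning and placement; per the disadvantage below, crunch-run inside this pod would still need a privileged security context or access to the node's Docker socket to start the job's own container.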

Advantages:

  • Get rid of node manager
  • Desirable as part of overall plan to be able to run Arvados on Kubernetes

Disadvantages:

  • Running crunch-run inside a container requires docker-in-docker (privileged container) or access to the Docker socket.

Crunch-dispatch-local

Node manager spins up nodes based on the container queue. Compute nodes run crunch-dispatch-local or a similar service, which asks the API server for work and then runs it. Possibly node manager directly decides which containers should go onto which nodes.
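
A rough sketch of that pull loop, using the existing containers list/lock API from the Python SDK; the polling interval, filters, and error handling are simplifications for illustration:

    # Rough sketch of a pull-based dispatcher running on each compute node.
    # A real version would also report busy/idle state (e.g. via the "nodes"
    # table) and handle crunch-run failures; this shows only the basic loop.
    import subprocess
    import time

    import arvados

    api = arvados.api("v1")

    while True:
        queued = api.containers().list(
            filters=[["state", "=", "Queued"], ["priority", ">", 0]],
            order=["priority desc"], limit=1).execute()
        for c in queued["items"]:
            try:
                # Locking marks the container as taken so other nodes skip it.
                api.containers().lock(uuid=c["uuid"]).execute()
            except Exception:
                continue  # another node locked it first
            subprocess.run(["crunch-run", c["uuid"]])
        time.sleep(10)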

Advantages:

  • Complete control over scheduling decisions / priority

Disadvantages:

  • Requesting work puts additional load on the API server (may not be any worse than live logging, though)
  • Need a new scheme for nodes to report their status so that node manager knows whether they are busy or idle. Node manager has to be able to put nodes in the equivalent of a "draining" state to ensure they don't get shut down while doing work. (We can use the "nodes" table for this.)
  • Need to be able to detect node failure.
