Fixing cloud scheduling

Our current approach to scheduling containers on the cloud using SLURM has a number of problems:

  • Head-of-line blocking: with a single queue, Slurm only schedules the job at the head of the queue; if that job cannot be scheduled, every job behind it has to wait. This leaves nodes wastefully idle and reduces throughput.
  • Queue ordering doesn't reflect our desired priority order without a lot of hacking around with "niceness".
  • Slurm forgets dynamically applied node configuration, so constantly running maintenance processes are needed to reapply it.
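The head-of-line issue can be illustrated with a small scheduling sketch (hypothetical Go code, not part of any Arvados component): with one FIFO queue, a stuck "large" job blocks runnable "small" jobs behind it; with one queue per node size, it does not.

```go
package main

import "fmt"

// Job represents a queued container with a required node size.
type Job struct {
	ID   int
	Size string
}

// scheduleFIFO models the single-queue behavior: only the job at the
// head of the queue is considered; if no node of its size is free,
// everything behind it waits, even if other node sizes are idle.
func scheduleFIFO(queue []Job, freeNodes map[string]int) (started []int) {
	for _, job := range queue {
		if freeNodes[job.Size] == 0 {
			break // head-of-line blocking: stop at the first unschedulable job
		}
		freeNodes[job.Size]--
		started = append(started, job.ID)
	}
	return
}

// schedulePerSize models one queue per node size: a stuck "large" job
// no longer blocks "small" jobs that could run immediately.
func schedulePerSize(queue []Job, freeNodes map[string]int) (started []int) {
	blocked := map[string]bool{}
	for _, job := range queue {
		if blocked[job.Size] {
			continue
		}
		if freeNodes[job.Size] == 0 {
			blocked[job.Size] = true // only this size's queue stalls
			continue
		}
		freeNodes[job.Size]--
		started = append(started, job.ID)
	}
	return
}

func main() {
	queue := []Job{{1, "large"}, {2, "small"}, {3, "small"}}
	// No large node is free, but two small nodes are.
	fmt.Println(scheduleFIFO(queue, map[string]int{"small": 2, "large": 0}))    // starts nothing
	fmt.Println(schedulePerSize(queue, map[string]int{"small": 2, "large": 0})) // starts jobs 2 and 3
}
```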

Things that Slurm currently provides:

  • allocating containers to specific nodes
  • reporting node state: idle, busy, failed, down, or out of contact

crunch-dispatch-cloud

See https://dev.arvados.org/projects/arvados/wiki/Dispatching_containers_to_cloud_VMs#crunch-dispatch-cloud-PA

Other options

Kubernetes

Submit containers to a Kubernetes cluster. Kubernetes handles cluster scaling and scheduling.

Advantages:

  • Get rid of node manager
  • Desirable as part of overall plan to be able to run Arvados on Kubernetes

Disadvantages:

  • Running crunch-run inside a container requires docker-in-docker (privileged container) or access to the Docker socket.
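For illustration, a hypothetical pod spec fragment showing the Docker-socket variant of this problem (image name and paths are illustrative, not an actual Arvados manifest):

```yaml
# For crunch-run to start the job's container it needs either a
# privileged container (docker-in-docker) or the host's Docker
# socket mounted in, as below. Neither is attractive from a
# security standpoint.
apiVersion: v1
kind: Pod
metadata:
  name: crunch-run-example
spec:
  containers:
  - name: crunch-run
    image: arvados/crunch-run   # hypothetical image name
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
```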

Cloud provider scheduling APIs

Use cloud provider scheduling APIs such as Azure Batch, AWS Batch, or the Google Pipelines API to perform cluster scaling and scheduling.

Would be implemented as custom Arvados dispatcher services: crunch-dispatch-azure, crunch-dispatch-aws, crunch-dispatch-google.
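A hypothetical sketch of what these per-cloud dispatchers could have in common (Go; the interface, types, and stubs are illustrative, not an existing Arvados API — real implementations would call the provider SDKs):

```go
package main

import "fmt"

// Container is a minimal stand-in for an Arvados container record
// (the real type lives in the Arvados SDK; this sketch is self-contained).
type Container struct {
	UUID     string
	Priority int
}

// Dispatcher is a hypothetical common interface that each per-cloud
// dispatcher (crunch-dispatch-azure, crunch-dispatch-aws,
// crunch-dispatch-google) would implement against its provider's
// batch/scheduling API.
type Dispatcher interface {
	Submit(c Container) error
	Cancel(uuid string) error
}

// awsBatchDispatcher is a stub marking where the AWS Batch API calls
// would go in a real crunch-dispatch-aws.
type awsBatchDispatcher struct{}

func (d *awsBatchDispatcher) Submit(c Container) error {
	fmt.Printf("would submit %s to AWS Batch (priority %d)\n", c.UUID, c.Priority)
	return nil
}

func (d *awsBatchDispatcher) Cancel(uuid string) error {
	fmt.Printf("would cancel %s in AWS Batch\n", uuid)
	return nil
}

func main() {
	var d Dispatcher = &awsBatchDispatcher{}
	d.Submit(Container{UUID: "example-container-uuid", Priority: 500})
}
```

Note the per-provider cost: each cloud needs its own implementation of this interface, and behavior such as job priority is constrained to whatever the provider's API exposes.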

Advantages:

  • Get rid of Node Manager

Disadvantages:

  • Has to be implemented per cloud provider.
  • May be hard to customize behavior, such as job priority.

Use Slurm better

Most of our Slurm problems are self-inflicted: we have a single partition and a single queue of heterogeneous, dynamically configured nodes. We would have fewer problems if we defined node ranges with fixed specs, e.g. "compute-small-[0-255]", "compute-medium-[0-255]", and "compute-large-[0-255]", and a partition for each size range, so that a job waiting for one node size does not hold up jobs that want a different node size.
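A sketch of what the corresponding slurm.conf could look like (node counts, CPU/memory specs, and partition names here are placeholders, not a recommendation):

```
NodeName=compute-small-[0-255]  CPUs=2  RealMemory=7168   State=CLOUD
NodeName=compute-medium-[0-255] CPUs=8  RealMemory=30720  State=CLOUD
NodeName=compute-large-[0-255]  CPUs=16 RealMemory=61440  State=CLOUD

PartitionName=small  Nodes=compute-small-[0-255]  Default=YES
PartitionName=medium Nodes=compute-medium-[0-255]
PartitionName=large  Nodes=compute-large-[0-255]
```

With one partition per size, the scheduler's head-of-line behavior only applies within a size class, and node specs are static rather than dynamically reconfigured.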

Advantages:

  • Least overall change compared to current architecture

Disadvantages:

  • Requires coordinated change to API server, node manager, crunch-dispatch-slurm, cluster configuration
  • Ops seems to think that defining (sizes * max nodes) hostnames might be a problem?
  • Can't adjust node configurations without restarting the whole cluster

Updated by Peter Amstutz over 6 years ago · 12 revisions