Feature #14922

Run multiple containers concurrently on a single cloud VM

Added by Tom Clegg about 5 years ago. Updated 2 months ago.

Status:
New
Priority:
Normal
Assigned To:
-
Category:
Crunch
Target version:
Story points:
5.0

Description

Run a new container on an already-occupied VM (instead of using an idle VM or creating a new one) when the following conditions apply:
  • the occupied VM is the same price as the instance type that would normally be chosen for the new container
  • the occupied VM has enough unallocated RAM, scratch, and VCPUs to accommodate the new container
  • either all containers on the VM allocate >0 VCPUs, or the instance will still have non-zero unallocated space after adding the new container (this ensures that an N-VCPU container does not share an N-VCPU instance with anything, even 0-VCPU containers)
  • the occupied VM has IdleBehavior=run (not hold or drain)
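
A minimal, hypothetical sketch of these eligibility checks in Go. None of the type or field names below (InstanceType, Container, VM, IdleBehavior) are taken from arvados-dispatch-cloud, and "non-zero unallocated space" in the third condition is interpreted here as unallocated VCPUs.

    // Illustrative types only; not the real dispatcher data structures.
    package main

    type InstanceType struct {
        Price   float64
        VCPUs   int
        RAM     int64 // bytes
        Scratch int64 // bytes
    }

    type Container struct {
        VCPUs   int
        RAM     int64
        Scratch int64
    }

    type VM struct {
        Type         InstanceType
        IdleBehavior string // "run", "hold", or "drain"
        Running      []Container
    }

    // unallocated returns the capacity left over after the containers
    // already running on the VM.
    func (vm VM) unallocated() (vcpus int, ram, scratch int64) {
        vcpus, ram, scratch = vm.Type.VCPUs, vm.Type.RAM, vm.Type.Scratch
        for _, c := range vm.Running {
            vcpus -= c.VCPUs
            ram -= c.RAM
            scratch -= c.Scratch
        }
        return
    }

    // canShare reports whether ctr may be placed on the occupied vm instead
    // of creating a new instance of type want.
    func canShare(vm VM, ctr Container, want InstanceType) bool {
        if vm.IdleBehavior != "run" {
            return false // only IdleBehavior=run VMs accept new work
        }
        if vm.Type.Price != want.Price {
            return false // must cost the same as the type we would otherwise create
        }
        vcpus, ram, scratch := vm.unallocated()
        if ctr.VCPUs > vcpus || ctr.RAM > ram || ctr.Scratch > scratch {
            return false // not enough unallocated RAM, scratch, or VCPUs
        }
        // Either every container on the VM (counting the new one) allocates
        // >0 VCPUs, or some VCPUs remain unallocated after adding it, so an
        // N-VCPU container never shares an N-VCPU instance with anything.
        allNonZero := ctr.VCPUs > 0
        for _, c := range vm.Running {
            if c.VCPUs == 0 {
                allNonZero = false
            }
        }
        return allNonZero || vcpus-ctr.VCPUs > 0
    }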

If multiple occupied VMs satisfy these criteria, choose the one with the most containers already running, breaking ties by the most RAM already occupied. This will tend to drain the pool of shared VMs rather than keeping many underutilized VMs alive after a busy period subsides.
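
Continuing the same hypothetical sketch (and reading the "most RAM" clause as a tie-break after container count), candidate selection could look like this. It needs "sort" from the standard library and reuses the VM/Container types above.

    // pickVM chooses among occupied VMs that already passed canShare,
    // preferring the one with the most containers running and, as a
    // tie-break, the most RAM already allocated, so the rest of the
    // shared pool can drain.
    func pickVM(candidates []VM) VM {
        sort.Slice(candidates, func(i, j int) bool {
            ci, cj := candidates[i], candidates[j]
            if len(ci.Running) != len(cj.Running) {
                return len(ci.Running) > len(cj.Running)
            }
            return allocatedRAM(ci) > allocatedRAM(cj)
        })
        return candidates[0]
    }

    func allocatedRAM(vm VM) (total int64) {
        for _, c := range vm.Running {
            total += c.RAM
        }
        return
    }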

Typically, "same price as a dedicated node, but has spare capacity" will only happen with the cheapest instance type, but it might also apply to larger sizes if the menu has big size steps. Either way, this rule avoids the risk of wasting money by scheduling small long-running containers onto big nodes. In future, this rule may be configurable (to accommodate workloads that benefit more by sharing than they lose by underusing nodes).

Typically, the smallest node type has 1 VCPU, so this feature is useful only if container requests and containers can either
  • specify a minimum of zero CPUs, or
  • specify a fractional number of CPUs.

Ensure that at least one of those is possible.

Notes

This should work for the case we need it for: the minimum node size is something like 2 cores / 2-4 GiB, and the containers we want to run should (ideally) be requesting less than 1 GiB of RAM.
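
A worked example under the same hypothetical sketch, run inside a main function with "fmt" imported. The instance size matches the note above (2 VCPUs, 4 GiB RAM); the price and scratch figures are made up.

    // 2-VCPU / 4 GiB minimum instance type; containers request 0 VCPUs and
    // about 1 GiB RAM each. Packing is RAM-bound: four containers fit.
    small := InstanceType{Price: 0.05, VCPUs: 2, RAM: 4 << 30, Scratch: 20 << 30}
    vm := VM{Type: small, IdleBehavior: "run"}
    ctr := Container{VCPUs: 0, RAM: 1 << 30, Scratch: 1 << 30}
    for canShare(vm, ctr, small) {
        vm.Running = append(vm.Running, ctr)
    }
    fmt.Println(len(vm.Running)) // 4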

We should have a cluster-wide config knob to turn this off.
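
One possible shape for that knob, sticking with the hypothetical sketch above; the name below is made up and is not an existing Arvados cluster config entry.

    // Hypothetical cluster-wide switch. When false, the dispatcher never
    // places a container on an already-occupied VM.
    type SchedulingConfig struct {
        AllowInstanceSharing bool
    }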


Related issues

Related to Arvados - Feature #15370: [arvados-dispatch-cloud] loopback driver (Resolved, Tom Clegg, 05/17/2022)
Related to Arvados - Idea #20473: Automated scalability regression test (New)
Related to Arvados - Bug #20801: Crunch discountConfiguredRAMPercent math seems surprising, undesirable (New)
Related to Arvados Epics - Idea #20599: Scaling to 1000s of concurrent containers (Resolved, 06/01/2023 to 03/31/2024)