Feature #14922

Updated by Peter Amstutz over 1 year ago

We have a customer who has a limit on the number of instances they can run, but not on the size of those instances. This seems to be due to network management policy: the instances are launched in a subnet with a limited number of IP addresses.

As a result, allocating larger VMs and running multiple containers per VM makes sense as a scaling strategy, even when the cost savings are minimal.

Given a queue of pending containers, we would like an algorithm that, instead of considering a single container at a time, looks ahead in the queue and finds opportunities to boot a larger instance that can accommodate multiple containers.

The return on investment will come from optimizing supervisor containers (e.g. arvados-cwl-runner), which are generally scheduled with relatively lightweight resource requirements and spend most of their time waiting.

So, a simplifying assumption that avoids solving the general problem would be to focus on small-container optimization and only try to co-schedule containers that (for example) request 1 core and less than 2 GiB of RAM.
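A minimal sketch of that look-ahead in Go (the dispatcher's language), using hypothetical @Container@ and @InstanceType@ structs rather than the dispatcher's real types: it walks the queue in order and greedily batches small containers that fit together on a single larger instance.

<pre><code class="go">
package scheduler

// Container and InstanceType are hypothetical, simplified stand-ins
// for the dispatcher's real types.
type Container struct {
	UUID    string
	VCPUs   int
	RAM     int64 // bytes
	Scratch int64 // bytes
}

type InstanceType struct {
	Name    string
	VCPUs   int
	RAM     int64 // bytes
	Scratch int64 // bytes
	Price   float64
}

// smallContainer reports whether a container qualifies for
// co-scheduling under the simplifying assumption above: it requests
// at most 1 core and less than 2 GiB of RAM.
func smallContainer(c Container) bool {
	return c.VCPUs <= 1 && c.RAM < 2<<30
}

// batchSmallContainers looks ahead in the queue and greedily collects
// a batch of small containers that fit together on one instance of
// type it, instead of considering a single container at a time.
func batchSmallContainers(queue []Container, it InstanceType) []Container {
	var batch []Container
	freeVCPUs, freeRAM := it.VCPUs, it.RAM
	for _, c := range queue {
		if !smallContainer(c) {
			continue
		}
		if c.VCPUs <= freeVCPUs && c.RAM <= freeRAM {
			batch = append(batch, c)
			freeVCPUs -= c.VCPUs
			freeRAM -= c.RAM
		}
	}
	return batch
}
</code></pre>

A real implementation would also need to weigh the batched instance's price against booting dedicated nodes and account for scratch space; this only illustrates the queue look-ahead.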

h3. Old description

Run a new container on an already-occupied VM (instead of using an idle VM or creating a new one) when the following conditions apply (see the sketch after this list):
* the occupied VM is the same price as the instance type that would normally be chosen for the new container
* the occupied VM has enough unallocated RAM, scratch, and VCPUs to accommodate the new container
* either all containers on the VM allocate >0 VCPUs, or the instance will still have non-zero unallocated space after adding the new container (this ensures that an N-VCPU container does not share an N-VCPU instance with anything, even 0-VCPU containers)
* the occupied VM has IdleBehavior=run (not hold or drain)
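A sketch of these checks in Go, reusing the hypothetical @Container@ type from the sketch above and adding a hypothetical @Worker@ struct for an occupied VM (the real dispatcher's worker state differs):

<pre><code class="go">
// Worker is a hypothetical, simplified stand-in for an occupied VM.
type Worker struct {
	Price          float64
	UnallocVCPUs   int
	UnallocRAM     int64  // bytes
	UnallocScratch int64  // bytes
	IdleBehavior   string // "run", "hold", or "drain"
	Running        []Container
}

// eligible reports whether the occupied VM w can take ctr, where
// dedicatedPrice is the price of the instance type that would
// normally be chosen for ctr on its own.
func eligible(w Worker, ctr Container, dedicatedPrice float64) bool {
	if w.IdleBehavior != "run" {
		return false
	}
	if w.Price != dedicatedPrice {
		return false
	}
	if ctr.VCPUs > w.UnallocVCPUs || ctr.RAM > w.UnallocRAM || ctr.Scratch > w.UnallocScratch {
		return false
	}
	// Either every container on the VM (including ctr) allocates >0
	// VCPUs, or some CPU or RAM must remain unallocated after adding
	// ctr, so an N-VCPU container never shares an N-VCPU instance.
	allNonZero := ctr.VCPUs > 0
	for _, c := range w.Running {
		if c.VCPUs == 0 {
			allNonZero = false
		}
	}
	spareAfter := w.UnallocVCPUs-ctr.VCPUs > 0 || w.UnallocRAM-ctr.RAM > 0
	return allNonZero || spareAfter
}
</code></pre>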

If multiple occupied VMs satisfy these criteria, choose the one with the most containers already running, or the most RAM already occupied. This tends to drain the pool of shared VMs, rather than keeping many underutilized VMs alive after a busy period subsides.
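Continuing the same sketch, one way to express that preference, treating "most RAM already occupied" as a tie-breaker:

<pre><code class="go">
// pickWorker chooses among eligible occupied VMs, preferring the one
// with the most containers already running and breaking ties by the
// most RAM already allocated, so the shared pool drains rather than
// spreading new work across many underutilized VMs.
func pickWorker(workers []Worker) *Worker {
	var best *Worker
	for i := range workers {
		w := &workers[i]
		if best == nil ||
			len(w.Running) > len(best.Running) ||
			(len(w.Running) == len(best.Running) && allocatedRAM(*w) > allocatedRAM(*best)) {
			best = w
		}
	}
	return best
}

// allocatedRAM sums the RAM requested by the containers running on w.
func allocatedRAM(w Worker) int64 {
	var sum int64
	for _, c := range w.Running {
		sum += c.RAM
	}
	return sum
}
</code></pre>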

 Typically, "same price as a dedicated node, but has spare capacity" will only happen with the cheapest instance type, but it might also apply to larger sizes if the menu has big size steps. Either way, this rule avoids the risk of wasting money by scheduling small long-running containers onto big nodes. In future, this rule may be configurable (to accommodate workloads that benefit more by sharing than they lose by underusing nodes). 

Typically, the smallest node type has 1 VCPU, so this feature is useful only if container requests and containers can either
* specify a minimum of zero VCPUs, or
* specify a fractional number of VCPUs.

Ensure at least one of those is possible.
