Bug #13166
Status: Closed
[node manager] wishlist should consist of top priority containers
Description
Node manager calculates a "wishlist" which is how many nodes of each size are desired.
The current behavior reports the entire queue, then creates nodes starting with the largest size and working down to the smallest. The rationale is that small jobs can run on large nodes, but not the other way around.
However, #12199 changes the behavior such that containers will be scheduled on specific node sizes. If the queue is much larger than the maximum number of nodes, this could lead to a situation in which the top of the queue consists of small jobs, but only large nodes are available. In this case, it will either schedule jobs out of order (with low-priority large-node jobs jumping the queue) or deadlock.
To fix:
- After getting the contents of squeue, sort it in descending order by slurm priority
- Only consider the top (max_nodes - up_nodes) items in the wishlist and discard the rest (a minimal sketch follows this list)
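A minimal sketch of the intended trimming, assuming queue entries carry a 'priority' key and that the max_nodes / up_nodes counts are available; the names build_wishlist and queued_jobs are illustrative, not the actual node manager API:

def build_wishlist(queued_jobs, max_nodes, up_nodes):
    # Highest slurm priority first, so the top of the queue is served first.
    ordered = sorted(queued_jobs, key=lambda job: job.get('priority', 1), reverse=True)
    # Never ask for more nodes than could actually be added; anything past
    # that budget is dropped from the wishlist instead of being reported.
    budget = max(max_nodes - up_nodes, 0)
    return ordered[:budget]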
Related issues
Updated by Peter Amstutz over 6 years ago
- Status changed from New to In Progress
Updated by Tom Clegg over 6 years ago
- Related to Idea #12552: When priority is equal, the children of an earlier-submitted workflow should run first added
Updated by Peter Amstutz over 6 years ago
- Related to Bug #12199: Don't schedule jobs on nodes which are too much bigger than requested added
Updated by Peter Amstutz over 6 years ago
- Status changed from In Progress to New
Updated by Tom Morris over 6 years ago
- Assigned To set to Peter Amstutz
- Target version changed from To Be Groomed to 2018-03-28 Sprint
Updated by Lucas Di Pentima over 6 years ago
- Assigned To changed from Peter Amstutz to Lucas Di Pentima
Updated by Lucas Di Pentima over 6 years ago
- Status changed from New to In Progress
Updated by Lucas Di Pentima over 6 years ago
Updates at 93d9c63e1fd4d4c268aa9050406b9a212f3d4a22 - branch 13166-nodemanager-whishlist
Changed the wishlist behavior as requested. All tests ran OK on 2dd214985e9727bf0b5fbf11b0e39c3e7d3cc5c4, but they're failing on Jenkins (https://ci.curoverse.com/job/developer-run-tests-services-nodemanager/164/console) while running OK on my VM.
Updated by Peter Amstutz over 6 years ago
The comment is missing the addition of %Q for priority:
# cpus, memory, tempory disk space, reason, job name, feature constraints
squeue_out = subprocess.check_output(["squeue", "--state=PENDING", "--noheader", "--format=%c|%m|%d|%r|%j|%f|%Q"])
An in-place sort would be slightly more efficient:
queuelist.sort(key=lambda x: x.get('priority', 1), reverse=True)
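For context, a hedged sketch of how the %Q column might be parsed into that 'priority' key before the sort runs; the field handling and key names here are illustrative, and the real node manager code may split and type these fields differently:

# Each squeue line is "%c|%m|%d|%r|%j|%f|%Q" as requested above.
queuelist = []
for line in squeue_out.strip().splitlines():
    cpus, ram, disk, reason, name, features, priority = line.split("|", 6)
    queuelist.append({
        'cpus': cpus,
        'ram': ram,
        'disk': disk,
        'reason': reason,
        'name': name,
        'features': features,
        'priority': int(priority),  # %Q: slurm priority, used for wishlist ordering
    })

# In-place sort, highest priority first.
queuelist.sort(key=lambda x: x.get('priority', 1), reverse=True)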
In order to test the wishlist ordering, you should have a variation of this test where the "big" node is at the end of the list and check that it only boots small nodes.
def test_node_max_nodes_two_sizes(self):
    ...
    self.make_daemon(want_sizes=[small, small, big, small],
                     avail_sizes=avail_sizes, max_nodes=3)
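A sketch of what that variation might look like, assuming the fixtures and helpers used by the existing test (testutil.MockSize, make_daemon, busywait, and the node_setup mock are taken from the surrounding test class and may differ in detail):

def test_node_max_nodes_big_size_at_end_boots_only_small(self):
    small = testutil.MockSize(1)
    big = testutil.MockSize(2)
    avail_sizes = [(small, {"cores": 1}), (big, {"cores": 2})]
    # The big job sits at the end of the wishlist, so with max_nodes=3 the
    # budget is used up by the three small jobs ahead of it.
    self.make_daemon(want_sizes=[small, small, small, big],
                     avail_sizes=avail_sizes, max_nodes=3)
    self.busywait(lambda: self.node_setup.start.call_count == 3)
    booted = [call[1].get('cloud_size')
              for call in self.node_setup.start.call_args_list]
    self.assertEqual([small, small, small], booted)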
I'm not quite sure what is going on with the integration test. The test framework does have a failure mode where it crashes with "IOError: [Errno 11] Resource temporarily unavailable"; I haven't gotten to the bottom of why that happens.
Updated by Peter Amstutz over 6 years ago
I think this fixes the integration tests:
diff --git a/services/nodemanager/tests/integration_test.py b/services/nodemanager/tests/integration_test.py
index 48fc3951e..508e62663 100755
--- a/services/nodemanager/tests/integration_test.py
+++ b/services/nodemanager/tests/integration_test.py
@@ -342,7 +342,6 @@ def main():
             (r".*Cloud node (\S+) is now paired with Arvados node (\S+) with hostname (\S+)", node_paired),
             (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Not eligible for shut down because node state is \('busy', 'open', .*\)", node_busy),
             (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Suggesting shutdown because node state is \('idle', 'open', .*\)", remaining_jobs),
-            (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Not eligible for shut down because node state is \('busy', 'open', .*\)", node_busy),
             (r".*ComputeNodeShutdownActor\..*\.([^[]*).*Shutdown success", node_shutdown),
             (r".*ComputeNodeShutdownActor\..*\.([^[]*).*Shutdown success", node_shutdown)
         ],
@@ -369,7 +368,6 @@ def main():
             (r".*Cloud node (\S+) is now paired with Arvados node (\S+) with hostname (\S+)", node_paired),
             (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Not eligible for shut down because node state is \('busy', 'open', .*\)", node_busy),
             (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Suggesting shutdown because node state is \('idle', 'open', .*\)", remaining_jobs),
-            (r".*ComputeNodeMonitorActor\..*\.([^[]*).*Not eligible for shut down because node state is \('busy', 'open', .*\)", node_busy),
             (r".*ComputeNodeShutdownActor\..*\.([^[]*).*Shutdown success", node_shutdown),
             (r".*ComputeNodeShutdownActor\..*\.([^[]*).*Shutdown success", node_shutdown),
             (r".*sending request", jobs_req),
Updated by Lucas Di Pentima over 6 years ago
Updates at b17321254
Test run: https://ci.curoverse.com/job/developer-run-tests/667/
- Updated integration tests
- Updated squeue call comment & list ordering
- Added test to check wishlist ordering
Updated by Peter Amstutz over 6 years ago
Lucas Di Pentima wrote:
Updates at b17321254
Test run: https://ci.curoverse.com/job/developer-run-tests/667/
- Updated integration tests
- Updated squeue call comment & list ordering
- Added test to check wishlist ordering
This LGTM, thanks.
Updated by Lucas Di Pentima over 6 years ago
- Status changed from In Progress to Resolved
- % Done changed from 0 to 100
Applied in changeset arvados|f1ee0cb2f85c4ea988700da2e3957565b6861ad5.