Feature #5717

[Crunch] Use tasks_this_level to calculate a virtual max_tasks_per_node

Added by Brett Smith about 7 years ago. Updated about 7 years ago.

Status: Resolved
Priority: Normal
Assigned To: Brett Smith
Category: Crunch
Target version:
Start date: 04/14/2015
Due date:
% Done: 100%
Estimated time: (Total: 1.00 h)
Story points: 0.5

Description

Following up on #5642 and 6261cf90.

[From IRC. Edited for clarity.]

(10:44:23 AM) Me: What if we stuck to calculating $tasks_this_level before THISROUND, and then used it as a virtual max_tasks_per_node for the entire level, if one isn't explicitly set?
(10:44:53 AM) Me: The memory limits already consider max_tasks_per_node, so this change would make sure each task on the level runs with the same RAM limits.
(10:45:00 AM) tomclegg: brett: ideally the job would tell us ahead of time "I need much ram" or "I don't need many slots" and we wouldn't have to guess. (but the full solution there isn't obvious either)
(10:45:26 AM) Me: Yes, I'm trying to scope this more narrowly than an ideal solution right now. :)
(10:45:58 AM) Me: What I'm proposing would arguably punish you if you queue up a small number of tasks at a level, then many more tasks at that same level… but you shouldn't be doing that anyway.
(10:46:20 AM) tomclegg: brett: right, we could decide before THISROUND to reduce #slots for this level. that'd be safe, and would only have adverse effects for jobs that queue level-N tasks from level-N tasks.
(10:46:33 AM) tomclegg: heh. "shouldn't"
(10:47:26 AM) tomclegg: brett: (I think it's uncommon but I wouldn't go as far as "shouldn't": "do more work, it's ready to do right now" seems pretty reasonable)
(10:50:10 AM) tomclegg: brett: this actually seems pretty close to "ideal" if you think of "only queued 1 task on this level so far" as the closest we have to letting tasks specify resource constraints.
[…]
(10:57:02 AM) tomclegg: anyway, yes, I'm convinced. "virtual max_tasks_per_node for the entire level" sgtm.
(10:57:13 AM) Me: Cool, I'll work on that.
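The plan agreed on above can be restated in a small sketch. This is a hypothetical Python illustration, not the actual crunch-job Perl: the function names and the ceiling-division slot math are assumptions, with the floor of one task per node reflecting the review note below about raising a too-small limit.

```python
# Hypothetical sketch: derive a virtual max_tasks_per_node for a level
# when the job doesn't set one explicitly.

def slots_for_level(tasks_this_level, num_nodes, cores_per_node,
                    max_tasks_per_node=None):
    """Return the total number of slots (concurrent tasks) for a level."""
    if max_tasks_per_node is not None:
        # An explicit setting always wins.
        per_node = max_tasks_per_node
    else:
        # Virtual limit: never open more slots than there are tasks
        # queued at the start of the level (so each task gets a larger
        # share of node RAM), but never go below one task per node.
        per_node = max(1, min(cores_per_node,
                              -(-tasks_this_level // num_nodes)))
    return per_node * num_nodes

def ram_limit_per_task(node_ram, tasks_per_node):
    # The memory limits already divide node RAM among tasks per node,
    # so a smaller virtual max_tasks_per_node raises each task's limit.
    return node_ram // tasks_per_node
```

With 2 tasks on 2 eight-core nodes this opens only 2 slots, so each task gets the whole node's RAM; with 100 tasks it opens all 16 slots as before.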


Subtasks 1 (0 open, 1 closed)

Task #5721: Review 5717-crunch-dynamic-max-tasks-per-node-wip (Resolved, Tom Clegg, 04/14/2015)

Actions #1

Updated by Tom Clegg about 7 years ago

Comments on 24b0da5

I think it's worth adding a comment near my $tasks_this_level ... [a] explaining that the reason we do this is to increase the per-task memory limit when ntasks<nslots, and [b] acknowledging this artificially reduces concurrency in the case where level L starts with few tasks but then adds more tasks at level L. Renaming $tasks_this_level to $max_slots_this_level might help here too (as named, it can go out of sync with reality, but as $max_slots_this_level it is accurate throughout its scope).

If $tasks_this_level < @node we should raise it to @node. (Same memory benefit, but greater max concurrency.)

Undecided whether it's worth asking for a log message when we detect the "artificially reduces concurrency" situation (something like @jobstep_todo > 0 && $tasks_this_level < @slot ?).

Is this still OK? @slot > @freeslot used to mean "any tasks are running" but now it can be true when no tasks are running, so I wonder if this can cause us to get stuck in THISROUND when no tasks are running. I suspect it should change to $tasks_this_level > @freeslot...?

  while (!@freeslot
         ||
         (@slot > @freeslot && $todo_ptr+1 > $#jobstep_todo))

(I think the part after || here is trying to say "don't walk off the end of the todo list and proclaim end-of-this-round while there are any tasks still running which might give us more todo's -- either by queueing more tasks, or by failing".)

Actions #2

Updated by Brett Smith about 7 years ago

Tom Clegg wrote:

I think it's worth adding a comment near my $tasks_this_level ... [a] explaining that the reason we do this is to increase the per-task memory limit when ntasks<nslots, and [b] acknowledging this artificially reduces concurrency in the case where level L starts with few tasks but then adds more tasks at level L. Renaming $tasks_this_level to $max_slots_this_level might help here too (as named, it can go out of sync with reality, but as $max_slots_this_level it is accurate throughout its scope).

Given the value actually in the variable, and other changes you suggested, $max_slots_this_level isn't quite right either. Renamed to $initial_tasks_this_level.

Undecided whether it's worth asking for a log message when we detect the "artificially reduces concurrency" situation (something like @jobstep_todo > 0 && $tasks_this_level < @slot ?).

Since you're undecided, I took the cheap option and added the number of slots we're using to the "start level" log message. It's not quite as explicit, but the reader should be able to notice the situation without any extra logic on our part.

Is this still OK? @slot > @freeslot used to mean "any tasks are running" but now it can be true when no tasks are running, so I wonder if this can cause us to get stuck in THISROUND when no tasks are running. I suspect it should change to $tasks_this_level > @freeslot...?

I think you're right that there's likely a bug here. It now compares against the initial size of @freeslot instead.
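The fixed exit check can be sketched as follows. This is a hypothetical Python restatement with illustrative names standing in for the Perl variables (@freeslot, $todo_ptr, $#jobstep_todo), not the actual crunch-job code; the key change is that "some task is still running" is detected by comparing the current free-slot count against the initial size of @freeslot, rather than against @slot, which can now exceed the number of slots ever opened.

```python
# Hypothetical restatement of the THISROUND wait condition.
# All names are illustrative, not the actual crunch-job variables.

def should_keep_waiting(free_slots, initial_free_slots, todo_ptr, todo_len):
    """Stay in this round while no slot is free, or while we've walked off
    the end of the todo list but some task is still running (a running
    task may queue more todo's, or fail and requeue itself)."""
    no_free_slot = free_slots == 0
    # Was "@slot > @freeslot"; with a virtual max_tasks_per_node that can
    # be true even when nothing is running, so compare against the
    # initial free-slot count instead.
    tasks_still_running = free_slots < initial_free_slots
    exhausted_todo = todo_ptr + 1 > todo_len
    return no_free_slot or (tasks_still_running and exhausted_todo)
```

Without the fix, a level whose virtual limit left some slots permanently unopened could wait forever with no tasks running.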

Implemented all your other proposals as written, and we're now at 55bc432. Thanks.

Actions #3

Updated by Tom Clegg about 7 years ago

55bc432 LGTM, thanks

Actions #4

Updated by Brett Smith about 7 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 0 to 100

Applied in changeset arvados|commit:d2e7a97c8d24ef8ae61d860e9c972626f80cf2b4.
