Container dispatch » History » Version 2

Peter Amstutz, 12/02/2015 07:48 PM

h1. Crunch2 dispatch

h2. Framework

Suggest writing the Crunch2 job dispatcher as a new set of actors in Node Manager.

This would enable us to solve the question of communication between the scheduler and cloud node management (#6520).

Node Manager already has much of the framework we will want, such as concurrency (there can be one actor per job) and a configuration system.

Different schedulers (SLURM, SGE, Kubernetes) can be implemented as modules, similar to how different cloud providers are supported now.
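
To make the actor-per-job idea concrete, here is a minimal sketch of what a dispatch actor might look like, assuming pykka-style actors (as Node Manager uses) and a hypothetical scheduler module with submit/cancel methods. None of these class or method names are existing code.

<pre><code class="python">
# Hypothetical sketch of a per-container dispatch actor, assuming pykka-style
# actors.  ContainerDispatchActor and the scheduler module interface are
# illustrative names, not existing code.
import pykka

class ContainerDispatchActor(pykka.ThreadingActor):
    """One actor per container; delegates to a pluggable scheduler module."""

    def __init__(self, container_uuid, scheduler):
        super(ContainerDispatchActor, self).__init__()
        self.container_uuid = container_uuid
        self.scheduler = scheduler      # e.g. a slurm, sge or kubernetes module

    def submit(self):
        # Hand the container off to the underlying scheduler.
        return self.scheduler.submit(self.container_uuid)

    def cancel(self):
        return self.scheduler.cancel(self.container_uuid)

# Usage: proxy() gives asynchronous message passing, one actor per container.
# dispatcher = ContainerDispatchActor.start(container_uuid, slurm_module).proxy()
# dispatcher.submit()
</code></pre>
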
h2. Interaction with API

More ideas:

Have a "dispatchers" table. Dispatcher processes are responsible for pinging the API server to show they are alive, similar to how nodes do.
A dispatcher claims a container by setting the container's "dispatcher" field to its UUID. This field can only be set once, which locks the record so that only that dispatcher can update it.
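
The write-once rule could be enforced on the API side roughly as follows. This is only a sketch, with plain dicts standing in for the real container records rather than actual API server code.

<pre><code class="python">
# Sketch of the write-once "dispatcher" field; plain dicts stand in for the
# real container records, this is not actual API server code.
def validate_dispatcher_change(old, new, requesting_dispatcher_uuid):
    if old.get("dispatcher") is None:
        # Unclaimed: any dispatcher may claim it, exactly once.
        return True
    # Already claimed: only the claiming dispatcher may make further updates,
    # and it may not reassign the claim.
    return (old["dispatcher"] == requesting_dispatcher_uuid and
            new["dispatcher"] == old["dispatcher"])
</code></pre>
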
If a dispatcher stops pinging, the containers it has claimed should be marked as TempFail.
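
A sketch of that cleanup pass, assuming a last_ping_at timestamp on the dispatchers table and a configurable timeout; the field names are made up.

<pre><code class="python">
# Sketch of a periodic cleanup pass over stale dispatchers.  The last_ping_at
# field, the timeout, and the state names checked are assumptions.
import datetime

PING_TIMEOUT = datetime.timedelta(minutes=5)

def reap_stale_dispatchers(dispatchers, containers, now=None):
    now = now or datetime.datetime.utcnow()
    stale = set(d["uuid"] for d in dispatchers
                if now - d["last_ping_at"] > PING_TIMEOUT)
    for c in containers:
        if c["dispatcher"] in stale and c["state"] not in ("Complete", "Cancelled"):
            c["state"] = "TempFail"
</code></pre>
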
Dispatchers should be able to annotate containers (preferably through links), for example: "I can't run this because I don't have any nodes with 40 GiB of RAM".
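
A link record along these lines could carry such an annotation; the link_class, name, and property keys below are placeholders, not an agreed-upon convention.

<pre><code class="python">
# Hypothetical link record annotating a container; field values are placeholders.
annotation = {
    "link_class": "dispatch",
    "name": "unsatisfiable_constraint",
    "tail_uuid": "zzzzz-yyyyy-dispatcher00001",   # the dispatcher
    "head_uuid": "zzzzz-dz642-container0000001",  # the container
    "properties": {
        "message": "I can't run this because I don't have any nodes with 40 GiB of RAM",
    },
}
</code></pre>
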
h2. Retry

How do we handle failure? Is the dispatcher required to retry containers that fail, or is the dispatcher a "best effort" service, where the API server decides whether to retry by scheduling a new container?
Currently the container_uuid field holds only a single container UUID at a time. If the API schedules a new container, does that mean any container requests associated with the old container get updated to point at the new one?
If the container_uuid field only holds one container at a time, and containers don't link back to the container requests that created them, then we have no record of past attempts to fulfill a request. That means we have nothing to check against container_count_max. A few possible solutions:
* Make container_uuid an array of the containers created to fulfill a given container request (this introduces complexity).
* Decrement container_count_max on the request when submitting a new container.
* Compute a content address of the container request and discover containers with that content address (see the sketch after this list). This would conflict with "no reuse" or "impure" requests, which are supposed to ignore past execution history. We could solve that by salting the content address with a timestamp; "no reuse" containers would then never be reusable, which might be fine.
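
To illustrate the content-address idea: hash a canonical rendering of the request fields that determine the work, and salt it with a timestamp for "no reuse" requests. The field list and salting rule here are illustrative guesses, not a spec.

<pre><code class="python">
# Sketch of content-addressing a container request: hash a canonical JSON
# rendering of the fields that determine the work.  Field list is a guess.
import hashlib
import json
import time

ADDRESSED_FIELDS = ("command", "container_image", "cwd", "environment",
                    "mounts", "output_path", "runtime_constraints")

def content_address(container_request, no_reuse=False):
    subset = {k: container_request.get(k) for k in ADDRESSED_FIELDS}
    if no_reuse:
        # Salt with the current time so the request never matches anything.
        subset["_salt"] = time.time()
    canonical = json.dumps(subset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
</code></pre>
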
I think we should distinguish between infrastructure failure and task failure by distinguishing between "TempFail" and "PermFail" in the container state. "TempFail" shouldn't count against the container_count_max count; or alternately, we only honor container_count_max for "TempFail" tasks and don't retry "PermFail" at all.
Ideally, "TempFail" containers should retry forever, but with a backoff. One way to implement the backoff is to schedule the container to run at a specific time in the future.
h2. Scheduling

Having a field specifying "wait until time X to run this container" would be generally useful for cron-style tasks.
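
If such a field existed (call it scheduled_at, a name invented here), the dispatcher's queue scan would simply skip containers whose time has not yet arrived:

<pre><code class="python">
# Sketch of honoring a hypothetical "scheduled_at" field when deciding which
# queued containers to pick up.
import datetime

def runnable_now(queued_containers, now=None):
    now = now or datetime.datetime.utcnow()
    return [c for c in queued_containers
            if c.get("scheduled_at") is None or c["scheduled_at"] <= now]
</code></pre>
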