Pipelines as jobs » History » Revision 4

Peter Amstutz, 09/02/2014 03:48 PM

Pipeline runner as job


arv-run-pipeline-instance is currently a special, privileged pipeline runner. However, there are many other pipeline runners we would potentially like to support, such as bcbio-nextgen, rmake, snakemake, Nextflow, etc.

The current solution is to run such a runner as a normal job. That job either (a) submits stages as subtasks, or (b) submits stages as additional jobs. The problem with (a) is that job reuse features are not available, and all subtasks must be able to run from the same Docker image. The problem with (b) is that the controller job currently ties up a whole node even though it is mostly idle, and we do not currently track the process tree (which job submissions were made by which other jobs).

Proposed solution

Remove arv-run-pipeline-instance from its privileged position and run it as a job in a container, just like everything else. Fix crunch-dispatch so the pipeline runner job takes up only a single slot and other jobs or tasks can be scheduled on the same node. Use the API token associated with the job to track which job submissions were made by the controlling job. Unify the display of jobs and pipelines so that a pipeline is just a job that creates other jobs.
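A minimal sketch of the token-based tracking idea, using an in-memory stand-in for the API server (all class and field names here are illustrative, not the real Arvados API): each job gets its own API token, and recording which token made each submission lets us reconstruct the parent/child job tree.

```python
import uuid

class MockApiServer:
    """Stand-in for the API server: records which token created each job."""
    def __init__(self):
        self.jobs = {}          # job_uuid -> job record
        self.token_owner = {}   # api_token -> uuid of the job that owns it

    def create_job(self, script, submitted_with_token=None):
        job_uuid = str(uuid.uuid4())
        token = str(uuid.uuid4())  # each job gets its own API token
        self.jobs[job_uuid] = {
            "script": script,
            "api_token": token,
            # process tree: which job's token made this submission (or None)
            "submitted_by": self.token_owner.get(submitted_with_token),
        }
        self.token_owner[token] = job_uuid
        return job_uuid

    def children_of(self, job_uuid):
        return [u for u, j in self.jobs.items()
                if j["submitted_by"] == job_uuid]

# The pipeline runner itself is just a job; stages it submits are
# attributed to it via the token it uses when talking to the API.
api = MockApiServer()
runner = api.create_job("arv-run-pipeline-instance")
runner_token = api.jobs[runner]["api_token"]
stage1 = api.create_job("stage1.py", submitted_with_token=runner_token)
stage2 = api.create_job("stage2.py", submitted_with_token=runner_token)
print(sorted(api.children_of(runner)) == sorted([stage1, stage2]))  # → True
```

With this bookkeeping in place, "a pipeline is just a job that creates other jobs" falls out naturally: the UI can display any job together with `children_of` that job, whether it was submitted by arv-run-pipeline-instance, bcbio-nextgen, or a user's own script.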

Another benefit: this supports the proposed v2 Python SDK by letting users orchestrate pipelines with "python" code that is the same whether it runs locally, runs locally while submitting jobs, or runs as a crunch job itself and submits jobs.

Related ideas

Currently, porting tools like bcbio or rmake still requires modifying the tool so that it schedules jobs on the cluster instead of running them locally. We could use LD_PRELOAD to intercept a whitelist of exec() calls and redirect them to a script that causes those commands to run on the cluster.
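The shim itself would be a small C library loaded via LD_PRELOAD, but the redirect decision it makes on each intercepted exec() can be sketched in Python. Both the whitelist contents and the `crunch-submit` wrapper name below are hypothetical placeholders:

```python
# Commands that should be redirected to run on the cluster rather than
# locally. These whitelist entries are hypothetical examples.
EXEC_WHITELIST = {"bwa", "samtools", "gatk"}

def rewrite_exec(argv):
    """Decide what an intercepted exec() call should actually run.

    If the command is whitelisted, rewrite the argv so a (hypothetical)
    'crunch-submit' wrapper schedules it on the cluster; otherwise pass
    the call through unchanged, as the LD_PRELOAD shim would.
    """
    if argv and argv[0] in EXEC_WHITELIST:
        return ["crunch-submit"] + argv
    return argv

print(rewrite_exec(["samtools", "view", "in.bam"]))
# → ['crunch-submit', 'samtools', 'view', 'in.bam']
print(rewrite_exec(["ls", "-l"]))
# → ['ls', '-l']
```

Because only a whitelist is rewritten, the tool's internal housekeeping commands (cp, mkdir, and the like) still run locally; only the heavyweight pipeline stages get pushed onto the cluster.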
