
h1. Pipeline runner as job 

h2. Problem

arv-run-pipeline-instance is currently a special, privileged pipeline runner. However, there are potentially many other pipeline runners we would like to support, such as bcbio-nextgen, rmake, snakemake, Nextflow, etc.

The current solution is to run these as a normal job. That job either (a) submits stages as subtasks or (b) submits stages as additional jobs. The problem with (a) is that job reuse features are not available, and all subtasks must be able to run out of the same Docker image. The problem with (b) is that the controller job currently ties up a whole node even though it is generally idle, and we do not currently track the process tree (which job submissions were made by which other jobs).
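
For illustration, here is a rough, untested sketch of what (a) and (b) look like from inside a controller job today, using the Python SDK; the job_tasks/jobs resource and field names reflect my understanding of the current API:

<pre><code class="python">
import arvados

api = arvados.api('v1')
this_job = arvados.current_job()  # the controller job we are running inside

# (a) schedule a stage as a subtask of this job: no job reuse, and the
#     subtask must run in the same docker image as the controller.
api.job_tasks().create(body={'job_task': {
    'job_uuid': this_job['uuid'],
    'sequence': 1,
    'parameters': {'stage': 'align', 'input': 'stage-input-placeholder'},
}}).execute()

# (b) schedule a stage as a separate job: reusable and independently
#     constrained, but nothing records that this controller submitted it.
api.jobs().create(body={'job': {
    'script': 'align',                     # hypothetical crunch script
    'repository': 'example/pipeline',      # hypothetical repository
    'script_version': 'master',
    'script_parameters': {'input': 'stage-input-placeholder'},
    'runtime_constraints': {'docker_image': 'example/aligner'},
}}).execute()
</code></pre>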

h2. Proposed solution

Remove arv-run-pipeline-instance from its privileged position and run it as a job in a container just like everything else. Fix crunch-dispatch so the pipeline runner job only takes up a single slot and other jobs or tasks can be scheduled on the same node. Use the API token associated with the job to track which job submissions were made by the controlling job. Unify the display of jobs and pipelines so that a pipeline is just a job that creates other jobs.
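
A rough, untested sketch of a child-job submission from inside the pipeline runner's container. The important part is that the SDK client is built from the ARVADOS_API_TOKEN issued to the runner job itself, which is what the API server would use to attribute the submission; find_or_create is assumed to give job reuse as it does for jobs today:

<pre><code class="python">
import arvados

# Inside the runner's container the SDK picks up ARVADOS_API_HOST and
# ARVADOS_API_TOKEN from the environment, so every request below carries
# the token issued to this job.
api = arvados.api('v1')

stage = api.jobs().create(
    body={'job': {
        'script': 'align',                    # hypothetical crunch script
        'repository': 'example/pipeline',     # hypothetical repository
        'script_version': 'master',
        'script_parameters': {'input': 'stage-input-placeholder'},
        'runtime_constraints': {'docker_image': 'example/aligner'},
    }},
    find_or_create=True,  # reuse an identical finished job if one exists
).execute()

print('submitted (or reused) job %s' % stage['uuid'])
</code></pre>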

Another benefit: this supports the proposed v2 Python SDK by letting users orchestrate pipelines where "python program.py" behaves the same whether it runs locally, runs locally and submits jobs, or runs as a crunch job itself and submits jobs.

h2. Related ideas

Currently, porting tools like bcbio or rmake still requires modifying the tool so that it schedules jobs on the cluster instead of running them locally. We could use LD_PRELOAD to intercept a whitelist of exec() calls and redirect them to a script that causes the tool to run on the cluster.
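
The LD_PRELOAD shim itself would have to be a small C library, but the script it redirects to could look roughly like the untested sketch below: it takes the command line the tool tried to exec, submits it as a job (assuming a generic "run-command"-style crunch script is available, which is an assumption), and blocks until the job finishes so the calling tool still sees a synchronous exec:

<pre><code class="python">
#!/usr/bin/env python
# Hypothetical redirect target for an LD_PRELOAD'd exec().
import sys
import time
import arvados

def main():
    cmd = sys.argv[1:]  # the command line the tool originally tried to exec
    api = arvados.api('v1')
    job = api.jobs().create(
        body={'job': {
            'script': 'run-command',      # assumed generic command runner
            'repository': 'arvados',      # assumption
            'script_version': 'master',
            'script_parameters': {'command': cmd},
        }},
        find_or_create=True,
    ).execute()
    # Poll until the job leaves the Queued/Running states ('state' values
    # here are my reading of the current job lifecycle).
    while True:
        job = api.jobs().get(uuid=job['uuid']).execute()
        if job['state'] not in ('Queued', 'Running'):
            break
        time.sleep(15)
    sys.exit(0 if job['state'] == 'Complete' else 1)

if __name__ == '__main__':
    main()
</code></pre>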