
h1. Everything is a job 

 

h2. Problem

Currently we have tasks, jobs, and pipelines. While this corresponds to a common pattern for building bioinformatics analyses, in practice we are finding that this design is overly rigid, with several unintended consequences:

# arv-run-pipeline-instance is currently a special, privileged pipeline runner. However, there are potentially many other pipeline frameworks we would like to support, such as bcbio-nextgen, rmake, snakemake, Nextflow, etc., that should be usable by regular users and so can't be privileged processes.
# Need to work with batches and pipelines of pipelines. If we have a pipeline that processes a single sample and we want to run it 100 times, currently we need to create 100 pipeline instances by hand, with a script that runs outside of the system (see the sketch after this list), or using a separate job.
# Currently, we can create jobs which either 1. submit stages as subtasks or 2. submit stages as additional jobs.
## In the first approach, job reuse features are not available for tasks, and all subtasks must be able to run out of the same docker image. There is also (by design) reduced visibility into the inner workings of tasks as compared to jobs.
## In the second approach, the controller job currently ties up a whole node, even though it is mostly idle. Additionally (and unlike tasks and pipelines), we currently do not track the process tree (which job submissions were made by which other jobs), so there's a loss of provenance information.
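
For concreteness, the "script that runs outside of the system" from item 2 looks roughly like the sketch below. This is a hedged illustration, not an exact recipe: the template UUID, the "stage1" component name, the "input" parameter, and the request body layout are all placeholders.

<pre><code class="python">
#!/usr/bin/env python
# One-off batch script, run outside Arvados: stamp out one pipeline
# instance per sample from the same template. The template UUID,
# component name ("stage1"), and "input" parameter are placeholders,
# and the exact request body format may differ.
import arvados

api = arvados.api('v1')
TEMPLATE_UUID = 'qr1hi-p5p6p-0123456789abcde'  # hypothetical template

for line in open('samples.txt'):
    sample_collection = line.strip()
    api.pipeline_instances().create(body={'pipeline_instance': {
        'pipeline_template_uuid': TEMPLATE_UUID,
        'components': {'stage1': {
            'script_parameters': {'input': {'value': sample_collection}},
        }},
    }}).execute()
</code></pre>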

h2. Proposed solution

# Improve job scheduling so that we can have more than one job on a node; jobs can be allocated a single core (possibly even fractions of a core).
# Remove arv-run-pipeline-instance from its privileged position and run it as a job in a container just like everything else.
# Deprecate tasks; prefer to submit jobs instead (this enables work reuse).
# Use the API token associated with the job to track which job submissions were made by the controlling job (add a spawned_by_job_uuid field to the jobs object). The top-level UI just displays jobs that were submitted directly by a user (spawned_by_job_uuid is null). Unify the display of pipelines and jobs, so that a pipeline is just a job that creates other jobs (see the sketch after this list).
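
From a client's perspective, item 4 might look like the following. This is only a sketch of the proposal: spawned_by_job_uuid does not exist yet, the idea is that the API server fills it in from the job-scoped API token (so submitting code does not change at all), and the script/repository values are examples.

<pre><code class="python">
# Sketch of item 4 (proposed, not implemented). A controller job
# submits children exactly like any other client; the server would
# record the proposed spawned_by_job_uuid from the API token used for
# this request. Script/repository values are examples only.
import arvados

api = arvados.api('v1')

child = api.jobs().create(body={'job': {
    'script': 'hash',
    'repository': 'arvados',
    'script_version': 'master',
    'script_parameters': {'input': 'qr1hi-4zz18-xxxxxxxxxxxxxxx'},
}}).execute()

# The top-level UI would then list only jobs submitted directly by a
# user, i.e. jobs where the proposed field is null:
top_level = api.jobs().list(
    filters=[['spawned_by_job_uuid', '=', None]]).execute()
</code></pre>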

Another benefit: this supports the proposed v2 Python SDK by enabling users to orchestrate pipelines where "python program.py" behaves the same whether it runs locally, runs locally and submits jobs, or runs as a crunch job itself and submits jobs.
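
To illustrate that property, the kind of program the v2 SDK aims to enable might look like the sketch below. The run_job() helper and the context check are invented for illustration; only the local branch is filled in.

<pre><code class="python">
# Hypothetical sketch of the v2 SDK idea: the same program runs its
# steps locally or as crunch jobs depending on context. run_job() is
# invented, and using TASK_UUID as the "am I inside a crunch job?"
# signal is an assumption.
import os
import subprocess

def run_job(command):
    # Invented dispatch: inside a crunch job we would submit a child
    # job via the Arvados API; otherwise just run the command locally.
    if os.environ.get('TASK_UUID'):
        raise NotImplementedError('submit a child job via the API here')
    return subprocess.check_output(command)

def main():
    print(run_job(['echo', 'stage 1']))
    print(run_job(['echo', 'stage 2']))

if __name__ == '__main__':
    main()
</code></pre>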

h2. Related ideas

Currently, porting tools like bcbio or rmake still requires that the tool be modified so that it schedules jobs on the cluster instead of running locally. We could use LD_PRELOAD to intercept a whitelist of exec() calls and redirect them to a script that causes the tool's command to run on the cluster instead.
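
The shim itself would be a small preloaded C library, but the redirect target it points at could be as simple as the Python sketch below, which resubmits the intercepted command line as a crunch job and waits for it. Using the generic run-command crunch script, and the exact job fields, are assumptions.

<pre><code class="python">
#!/usr/bin/env python
# Hypothetical redirect target for the LD_PRELOAD shim: the shim
# rewrites a whitelisted exec() into "cluster-exec <original argv>",
# and this script resubmits that command line as a crunch job. Using
# the run-command script here is an assumption.
import sys
import time
import arvados

api = arvados.api('v1')
job = api.jobs().create(body={'job': {
    'script': 'run-command',
    'repository': 'arvados',
    'script_version': 'master',
    'script_parameters': {'command': sys.argv[1:]},
}}).execute()

# Block until the job finishes, so the calling tool sees the same
# synchronous behavior as a local exec().
while job['state'] not in ('Complete', 'Failed', 'Cancelled'):
    time.sleep(10)
    job = api.jobs().get(uuid=job['uuid']).execute()

sys.exit(0 if job['state'] == 'Complete' else 1)
</code></pre>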