Make GATK3 pipeline template
- Create a project
- Download and add the appropriate reference & example datasets to the project
- Make a docker image with all of the relevant redistributable software pre-installed
- Make a "pirs" crunch script and use it to generate the simulation dataset based on hg19 chr1
- Make a "Single sample SNV with bwa and gatk" pipeline with no parallel/asynchronous tasks
- Time permitting, make another pipeline that splits the inputs as described below, for faster turnaround when multiple nodes are available.
- Use FUSE mount for inputs
- GATK3 (like attached) not GATK2 (like existing pipeline)
- Use a docker image with redistributable tools pre-installed, assuming this makes things easier (but not GATK itself - continue to pass this tarball as a job input)
- Use the file-select script to get the appropriate bits from the GATK bundle (of which we should have an entire copy in our project), rather than downloading the individual files needed.
- The existing pipeline provides clues (not necessarily correct for the latest tool versions) about which tools are capable of reading/writing pipes rather than regular files.
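The mechanics of chaining two pipe-capable tools without intermediate files can be sketched as follows. Standard utilities (`sort | uniq -c`) stand in for the real pipeline commands (e.g. `bwa mem ... | samtools view -Sb -`); whether each GATK-era tool actually accepts piped input must be verified per tool version, as the note above says.

```python
# Sketch: run cmd1 | cmd2 as a streaming pipeline, no temp files.
import subprocess

def run_piped(cmd1, cmd2, input_bytes):
    """Run cmd1 | cmd2, feeding input_bytes to cmd1's stdin; return cmd2's stdout."""
    p1 = subprocess.Popen(cmd1, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(cmd2, stdin=p1.stdout, stdout=subprocess.PIPE)
    p1.stdin.write(input_bytes)
    p1.stdin.close()
    p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
    out, _ = p2.communicate()
    return out

# Example with stand-in tools: count unique lines in a stream.
result = run_piped(["sort"], ["uniq", "-c"], b"b\na\nb\n")
```

The same `Popen`-chaining pattern applies regardless of which mapper/converter pair ends up in the pipeline.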
Notes about parallelizing:
We can split the FASTQ into as many chunks as we want; after mapping, however, the alignments from one sample should be merged into a single SAM/BAM file so that the reads stack up on each genome position. We then split the SAM/BAM file again by chromosome, giving roughly 24 or 25 BAM fragments, and all downstream steps can be applied to these per-chromosome fragments. At the end, probably after annotation, we merge the fragment files into one final file. To increase parallelism we could even split the BAM at positions with very low or no coverage.
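The scatter step above can be sketched as a minimal FASTQ splitter: distribute whole 4-line records round-robin into N chunks so each chunk can be mapped on a separate node. The chunk count and round-robin strategy are assumptions for illustration; the real crunch script would write each chunk into its own collection.

```python
# Sketch: split a FASTQ line stream into N chunks of whole records.
from itertools import islice

def fastq_records(lines):
    """Yield 4-line FASTQ records (header, seq, plus, quals) from an iterable."""
    it = iter(lines)
    while True:
        rec = list(islice(it, 4))
        if not rec:
            return
        yield rec

def split_fastq(lines, n_chunks):
    """Distribute FASTQ records round-robin into n_chunks lists of lines."""
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(fastq_records(lines)):
        chunks[i % n_chunks].extend(rec)
    return chunks
```

Splitting on record boundaries (not raw byte offsets) keeps every chunk a valid FASTQ file, which is what lets the mapped chunks be merged back into one BAM per sample afterwards.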
#6 Updated by Peter Amstutz about 5 years ago
- No modal chooser for individual files. Workaround: select the files in a different window using the paperclip, then they show up in the selection dropdown.
- Pipelines created from pipeline templates don't get added to the same folder.
- Job parameters passed to docker need to be JSON-encoded
- An "arv migrate" command for copying objects and collections between Arvados instances seems like it would be a good idea.
- If you try to queue a job while another one is running locally (not using SLURM), it tries to run the new job immediately and then fails because the job directory is locked.
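On the JSON-encoding note above: a minimal sketch of the workaround is to serialize each job parameter with `json.dumps` before handing it to the docker runner, and decode on the other side. The parameter names here are made up for illustration.

```python
# Sketch: JSON-encode job parameters before passing them to docker,
# then decode inside the container. Parameter names are hypothetical.
import json

params = {"reference": "hg19", "threads": 8, "regions": ["chr1", "chr2"]}

# Encode each value to a JSON string (what the docker side receives).
encoded = {k: json.dumps(v) for k, v in params.items()}

# Decode on the other side; round-trip should be lossless.
decoded = {k: json.loads(v) for k, v in encoded.items()}
assert decoded == params
```

Encoding per value (rather than one blob) keeps each parameter independently usable as an environment variable or command-line argument.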