Pathomap tutorial » History » Version 4

Bryan Cosca, 02/13/2015 06:51 PM

h1. Running Pathomap using Arvados
This tutorial demonstrates how to run the Pathomap pipeline using the example that the Mason Lab provides at their "page":http://www.pathomap.org/. PathoMap is a research project by Weill Cornell Medical College to study the microbiome and metagenome of the built environment of NYC. The Pathomap publication is available here: "Afshinnekoo et al., Geospatial Resolution of Human and Bacterial Diversity with City-Scale Metagenomics, Cell Systems (2015)":http://dx.doi.org/10.1016/j.cels.2015.01.001. This tutorial introduces the following Arvados features:
* How to run Pathomap using Arvados.
* How to access your pipeline results.
* How to browse and select your input data for lobSTR and re-run the pipeline.
# Start at the "Curoverse":https://curoverse.com/ website and click *Log In* at the top. We currently support all Google / Google Apps accounts for authentication. When you sign in with a Google-based account, your Arvados account is created automatically and you are redirected to the "Arvados Workbench":https://workbench.qr1hi.arvadosapi.com/.
# In the *Active pipelines* panel, click on the *Run a pipeline...* button. Doing so opens a dialog box titled *Choose a pipeline to run*.
# Select *lobstr v.3* and click the *Next: choose inputs* button. Doing so loads a new page to supply the inputs for the pipeline.
# The default inputs from the lobSTR source code repository are already pre-loaded. Click on the *Run* button. The page updates to show you that the pipeline has been submitted to run on the Arvados cluster.
# After the pipeline starts running, you can track its progress by watching log messages from its jobs. This page refreshes automatically. A *Complete* label appears in the job column when the pipeline finishes successfully. The current run time of the job in CPU and clock hours is also displayed. You can view individual job details by clicking on the job name.
# Once the job is finished, the output can be viewed to the right of the run time.
# Click on the download button to the right of the file to download your results, or the magnifying glass to quickly view your results.
h2. Uploading data through the web and using it on Arvados
# In your home project, click on the blue *+ Add data* button in the top right.
# Click *Upload files from my computer*.
# Click *Choose Files* and choose the two paired-end FASTQ files you would like to run lobSTR on.
# Once you're ready, click *> Start*.
# Feel free to rename your collection so you can remember it later: click on the pencil icon in the top left corner next to *New collection*.
# Once the upload finishes, navigate back to the dashboard, click on *Run a pipeline...*, and choose *lobstr v.3*.
# You can change the input by clicking on the *[Choose]* button next to the *Input fastq collection ID*.
# Click on the dropdown menu, click on your newly-created project, and choose your desired input collection. Click *OK* and *Run* to run lobSTR v.3 on your data!
h2. Uploading data through your shell and using it on Arvados
Full documentation can be found "here":http://doc.arvados.org/user/tutorials/tutorial-keep.html.
# Install the "Arvados Python SDK":http://doc.arvados.org/sdk/python/sdk-python.html on the system from which you will upload the data (such as your workstation, or a server containing data from your sequencer). Doing so also installs the Arvados file upload tool, @arv-put@.
# To configure your environment with the Arvados instance host name and authentication token, follow the instructions "here":http://doc.arvados.org/user/reference/api-tokens.html.
# To create a new project, navigate back to your Workbench dashboard, click on the *Projects* dropdown menu, and click *Home*.
# Click on [+ Add a subproject]. Feel free to edit the Project name or description by clicking the pencil to the right of the text.
# To add data, return to your shell, create a folder, and put the two paired-end FASTQ files you want to upload inside it. From that folder, run @arv-put * --project-uuid qr1hi-xxxxx-yyyyyyyyyyyyyyy@. The project UUID (beginning with @qr1hi@) can be found in the URL of your new project. Uploading from a single folder ensures that all the files you would like to upload end up in one collection.
# The output value @xxxxxxxxxxxxxxxxxxxx+yyyy@ is the Arvados collection locator that uniquely identifies the uploaded collection.
# Once the upload finishes, navigate back to the dashboard, click on *Run a pipeline...*, and choose *lobstr v.3*.
# You can change the input by clicking on [Choose] next to the *Input fastq collection ID*.
# Click on the dropdown menu, click on your newly-created project, and choose your desired input collection. Click *OK* and *Run* to run lobSTR v.3 on your data!
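Put together, the shell steps above look roughly like this. This is a sketch, not a definitive transcript: the host name, token, project UUID, and file names below are placeholders, and the real values come from your own Workbench session and data.

```shell
# Sketch of the shell-upload steps above. The token, project UUID, and
# fastq file names are placeholders -- substitute your own values.
export ARVADOS_API_HOST=qr1hi.arvadosapi.com       # your Arvados instance
export ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxx      # from the api-tokens page

mkdir lobstr-input                                 # stage everything in one folder
cp /path/to/sample_1.fastq /path/to/sample_2.fastq lobstr-input/
cd lobstr-input

# Upload the folder contents as a single collection in your project;
# the UUID after --project-uuid comes from your new project's URL.
arv-put * --project-uuid qr1hi-xxxxx-yyyyyyyyyyyyyyy
```

On success, @arv-put@ prints the collection locator described in the step above, which you can then select as the pipeline input in Workbench.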
h3. FAQ
* Does this support both paired-end and single-end reads?
** Currently, the pipeline template only supports paired-end reads. If you would like to run a single-end read experiment, please email support@curoverse.com and tell us about your project. You can also copy the template yourself and edit the commands; documentation is provided "here":http://doc.arvados.org/user/index.html.
* What type of files does this support?
** It supports FASTQ files with any names, as long as the read-1 file name contains the string "1.f" and the read-2 file name contains "2.f". The extensions .fq, .fas, and .fastq are all supported.
* Can this run multiple samples at once?
** We are currently working on supporting batch processing of multiple samples, and it will be ready soon.
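As a quick illustration of the paired-end naming rule above, here is a small shell check (the file names are hypothetical) showing which names contain the required "1.f" and "2.f" substrings:

```shell
# Hypothetical file names -- the pipeline only requires that a name
# contain "1.f" (read 1) or "2.f" (read 2) somewhere in it.
for f in sample_1.fastq sample_2.fq reads.R1.txt; do
  case "$f" in
    *1.f*) echo "$f: read-1 match" ;;
    *2.f*) echo "$f: read-2 match" ;;
    *)     echo "$f: no match" ;;
  esac
done
# sample_1.fastq: read-1 match
# sample_2.fq: read-2 match
# reads.R1.txt: no match
```

So @sample_1.fastq@ and @sample_2.fq@ would be recognized as a pair, while a name like @reads.R1.txt@ would not.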