Peter Amstutz, 03/08/2018 03:07 PM
# Distributed workflows

## Problem description
A user wants to run a meta-analysis on data located on several different clusters. For either efficiency or legal reasons, the data should be analyzed in place and the results aggregated and returned to a central location. The user should be able to express the multi-cluster computation as a single CWL workflow, and no manual intervention should be required while the workflow is running.
## Simplifying assumptions

- The user explicitly indicates in the workflow on which cluster a given computation (data + code) happens.
- Data transfer occurs only between the primary cluster and the secondary clusters, never between secondary clusters.
## Proposed solution

### Run subworkflow on cluster
A workflow step can be given a CWL hint "RunOnCluster". This indicates that the tool or subworkflow run by the workflow step should run on a specific Arvados cluster, rather than being submitted to the cluster where the workflow runner is currently running. The implementation would be similar to the "RunInSingleContainer" feature: construct a container request that runs the workflow runner on the remote cluster and waits for results.
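A sketch of how the proposed hint might look in a workflow step. The hint name "RunOnCluster" comes from this proposal; the `arv:` namespace prefix follows existing Arvados CWL extensions, and the `clusterID` field name and value are illustrative assumptions, since the hint's schema is not yet defined.

```yaml
# Hypothetical workflow step using the proposed RunOnCluster hint.
# "clusterID" and its value are assumptions for illustration only.
steps:
  analyze_remote_data:
    run: analysis-subworkflow.cwl
    hints:
      arv:RunOnCluster:
        clusterID: xyzzy    # Arvados cluster where this subworkflow should run
    in:
      input_data: remote_collection
    out: [results]
```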
### Data transfer

In order for the workflow to run successfully on the remote cluster, it needs its data dependencies (Docker images, scripts, reference data, etc.). There are several options:
1. Don't do any data transfer of dependencies. Workflows fail if dependencies are not available; the user must manually transfer collections using arv-copy.
   - pros: least work
   - cons: terrible user experience; workflow patterns that transfer data out to remote clusters don't work
2. Distribute dependencies as part of workflow registration (requires proactively distributing dependencies to every cluster that might ever need them).
   - pros: less burden on the user compared to option 1
   - cons: doesn't guarantee the dependencies are available where needed; the --create/update-workflow option of arvados-cwl-runner has to orchestrate upload of data to every cluster in the federation; workflow patterns that transfer data out to remote clusters don't work
3. The workflow runner determines which dependencies are missing from the remote cluster and pushes them before scheduling the subworkflow.
   - pros: no user intervention required; data is copied only to clusters that we think will need it
   - cons: copies all dependencies regardless of whether they are actually used; requires that the primary runner have all the dependencies, or be able to facilitate transfer from some other cluster
4. The workflow runner on the remote cluster determines which dependencies are missing and pulls them from federated peers on demand.
   - pros: no user intervention required; only data we actually need is copied
   - cons: requires that the primary runner have all the dependencies, or be able to facilitate transfer from some other cluster
5. Federated access to collections: fetch data blocks on demand from another cluster.
   - pros: only the data blocks that are actually needed are fetched; no collection record copy in the remote database
   - cons: requires SDK improvements to handle multiple clusters; requires a caching proxy to avoid re-fetching the same block (for example, when 100 nodes all try to run a Docker image from a federated collection)
6. Hybrid federation: copy a collection to the remote cluster but retain the UUID/permissions from the source.
   - pros: no user intervention; only the blocks we need are fetched; data blocks come from local Keep if available and from remote Keep if necessary
   - cons: the semantics/permission model for "cached" collection records is not yet defined
## Notes

Options 1 and 2 cannot support workflows that perform some local computation and then pass intermediate results to a remote cluster for further computation.
Options 2, 3, and 4 involve a similar level of effort, mainly in arvados-cwl-runner. Of these, option 4 seems to cover the most use cases: a general "transfer required collections" method would handle data transfer for dependencies, intermediate collections, and outputs.
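A minimal sketch of what the "transfer required collections" check might look like, assuming the Arvados Python SDK's `collections().list()` call and identifying collections by portable data hash. The `copy_collection` helper is hypothetical; in practice it would do roughly what arv-copy does (copy the data blocks, then the collection record).

```python
def missing_dependencies(remote_api, dependency_pdhs):
    """Return the portable data hashes of dependency collections (Docker
    images, scripts, reference data) that are not yet present on the
    remote cluster, by querying its collections index."""
    found = remote_api.collections().list(
        filters=[["portable_data_hash", "in", list(dependency_pdhs)]],
        select=["portable_data_hash"]).execute()
    present = {c["portable_data_hash"] for c in found["items"]}
    return set(dependency_pdhs) - present

def copy_collection(local_api, remote_api, pdh):
    """Hypothetical helper: copy a collection's blocks and record to the
    remote cluster, roughly what arv-copy does. Left unimplemented here."""
    raise NotImplementedError

def push_dependencies(local_api, remote_api, dependency_pdhs):
    """Option 3's push step: copy any missing dependency collections
    before scheduling the subworkflow. Option 4's pull would run the
    same missing_dependencies check from the remote side."""
    for pdh in missing_dependencies(remote_api, dependency_pdhs):
        copy_collection(local_api, remote_api, pdh)
```

Keying on portable data hashes rather than collection UUIDs means an identical collection already present on the remote cluster, under any name, satisfies the dependency.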
Option 5 involves adding federation features to the Python/Go SDKs, arv-mount, and crunch-run. It may also require new Keep infrastructure, such as a caching proxy service, to avoid redundantly transferring the same blocks over the internet.
Option 6's level of effort probably falls somewhere between options 4 and 5.
## Outputs

Finally, after a subworkflow runs on a remote cluster, the primary cluster needs to access its output and possibly run additional steps. This requires access to a single output collection, either by pulling it to the primary cluster (using the same features that support option 4) or via federation (options 5 and 6).
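The retrieval step could be sketched as follows, assuming the Arvados Python SDK and the container request API's `state` and `output_uuid` fields; whether the returned collection is then pulled or read via federation depends on which option above is implemented.

```python
def fetch_subworkflow_output(remote_api, container_request_uuid):
    """Look up the output of the container request that ran the
    subworkflow on the remote cluster.  Returns the UUID of its output
    collection, which the primary cluster can then pull (option 4) or
    read via federation (options 5 and 6)."""
    cr = remote_api.container_requests().get(
        uuid=container_request_uuid).execute()
    if cr["state"] != "Final":
        raise RuntimeError("container request %s has not finished"
                           % container_request_uuid)
    return cr["output_uuid"]
```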