Introduction to Arvados
Arvados is a platform for storing, organizing, processing, and sharing genomic and other big data. It is designed to make it easier for data scientists to develop analyses, for developers to create genomic web applications, and for IT administrators to manage large-scale genomic compute and storage resources. The platform can run in the cloud or on your own hardware.
The core technology has been under development at Harvard Medical School for many years (see history). We are now refactoring the original code, reworking the APIs, and developing significant new capabilities.
A set of relatively low-level compute and data management functions is consistent across the wide range of analysis pipelines and applications being built for genomic data. Unfortunately, every organization working with these data has been forced to build its own custom systems for these low-level functions. At the same time, proprietary platforms are emerging that seek to solve the same problems. Arvados was created to provide a common solution across a wide range of applications that is free and open source.
The Arvados platform seeks to solve a set of common problems faced by informaticians and IT organizations.
Benefits to informaticians:
- Make authoring analyses and constructing pipelines in any language as efficient as possible
- Provide an environment that can run open source and commercial tools (e.g. Galaxy, GATK, etc.)
- Enable deep provenance and reproducibility across all pipelines
- Provide a way to flexibly organize data and ensure data integrity
- Make queries of variant and other compact genome data very high-performance
- Create a simple way to run distributed batch processing jobs
- Enable the secure sharing of data sets from small to very large
- Provide a set of common APIs that enable application and pipeline portability across systems
- Offer a reference environment for implementation of standards
- Standardize file format translation
Benefits to IT organizations:
- Low total cost of ownership
- Eliminate unnecessary data duplication
- Ability to create private, on-premise clouds
- Self-service provisioning of resources
- Ability to utilize low-cost, off-the-shelf hardware
- Easy-to-manage, horizontally scaling architecture
- Straightforward browser-based administration
- Provide facilities for hybrid (public and private) clouds
- Ensure full compliance with security and regulatory standards
- Support data sets from tens of terabytes to exabytes
Functionally, Arvados has two major sets of capabilities: (a) data management and (b) compute management.
The data management services are designed to handle all of the challenges associated with storing and organizing large omic data sets. The heart of these services is the Data Manager, which brokers data storage. The data management system is designed to handle the following needs:
- Store files (e.g. BAM, FASTQ, VCF, etc.) reliably
- Store metadata about files for a wide variety of organizational schema
- Create collections (sets of files) that can be used in analyses
- Ensure files are not unnecessarily duplicated
- Track provenance (sources and methods used to produce data)
- Control who can access which files
- Offer reliable distributed storage using inexpensive commodity disks
- Control storage redundancy based on importance of datasets
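The deduplication and integrity goals above follow naturally from content addressing: if blocks of data are named by a hash of their contents, identical data resolves to the same name and is stored only once, and a corrupted block is detectable because its hash no longer matches. The following is a minimal toy sketch of that idea, not the actual storage implementation (Arvados's Keep store works with hash-based block locators, but its real on-disk format and API differ):

```python
import hashlib

class ContentAddressedStore:
    """Toy store: blocks are keyed by a hash of their contents,
    so identical data is only ever stored once (deduplication),
    and a block's name doubles as an integrity check."""

    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        locator = hashlib.md5(data).hexdigest()
        self._blocks.setdefault(locator, data)  # no-op if already stored
        return locator

    def get(self, locator: str) -> bytes:
        data = self._blocks[locator]
        # Verify integrity: the content must still hash to its name.
        assert hashlib.md5(data).hexdigest() == locator
        return data

store = ContentAddressedStore()
a = store.put(b"ACGT" * 1024)
b = store.put(b"ACGT" * 1024)  # same content -> same locator, nothing duplicated
```

Because two writes of the same bytes yield the same locator, collections that share files cost no extra storage, which is how "eliminate unnecessary data duplication" falls out of the design rather than requiring explicit bookkeeping.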
The compute management services are designed to handle the challenges associated with creating and running pipelines as large-scale distributed processing jobs.
- Enable a common way to represent pipelines (JSON)
- Support the use of any pipeline creation tool
- Keep all pipeline code in a revision control system (git repository)
- Run pipelines as distributed computations using MapReduce
- Easily and reliably retrieve pipeline outputs
- Store a record of every pipeline that is run
- Eliminate the need to re-run pipeline components that have already been run
- Easily and reliably re-run and verify any past pipeline
- Create a straightforward way to author web applications that use underlying data and pipelines
- Easily share results, pipelines, and applications between systems
- Run distributed computations across clusters in different data centers to make use of very large data sets
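The "no re-runs" property above is essentially memoization: if a pipeline component is identified by its code version (e.g. a git commit) plus its inputs, a prior output for the same key can be reused instead of recomputed. Here is a minimal sketch of that idea under assumed names (`run_job`, `_job_cache`); it is an illustration of the caching principle, not the actual Arvados job scheduler:

```python
import hashlib
import json

_job_cache = {}  # job key -> stored output

def run_job(script, script_version, inputs, compute):
    """Reuse a prior result when the same script version has already
    run on the same inputs; otherwise compute and record the output."""
    key = hashlib.sha256(
        json.dumps([script, script_version, inputs], sort_keys=True).encode()
    ).hexdigest()
    if key not in _job_cache:
        _job_cache[key] = compute(inputs)
    return _job_cache[key]

calls = []
def align(inputs):
    calls.append(inputs)  # track how many times real work happens
    return f"aligned:{inputs['reads']}"

out1 = run_job("align.py", "abc123", {"reads": "sample1.fastq"}, align)
out2 = run_job("align.py", "abc123", {"reads": "sample1.fastq"}, align)
# out2 came from the cache; align() ran only once.
```

Pinning the key to a revision-controlled code version is what makes this safe: rerunning a past pipeline with the same commit and inputs is guaranteed to hit the same key, which also supports the verification goal above.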
The compute management system also includes a sub-component for doing tertiary analysis. This component provides an in-memory database for very high-performance queries of a compact representation of a genome that includes variants and other relevant data needed for tertiary analysis. (This component is in the design stage.)
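Since that component is still in the design stage, the following is only a speculative sketch of what position-indexed in-memory variant queries could look like; the record layout and function names here are hypothetical:

```python
import bisect

# Hypothetical compact variant records keyed by (chromosome, position).
variants = {
    ("chr1", 12345): {"ref": "A", "alt": "G"},
    ("chr1", 67890): {"ref": "C", "alt": "T"},
}
positions = sorted(variants)  # sorted keys enable binary search

def query_range(chrom, start, end):
    """Return all variants on chrom in [start, end], O(log n + k)."""
    lo = bisect.bisect_left(positions, (chrom, start))
    hi = bisect.bisect_right(positions, (chrom, end))
    return [variants[p] for p in positions[lo:hi]]

hits = query_range("chr1", 0, 100_000)
```

Keeping the index sorted and entirely in memory is what makes such queries "very high-performance" relative to scanning VCF files on disk.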
Arvados works best in an environment where informaticians receive access to virtual machines (VMs) on a private or public cloud. This approach eliminates the need to manage separate physical servers for different projects, significantly increasing the utilization of underlying hardware resources. It also gives informaticians a great deal of freedom to choose the best operating systems and tools for their work. With virtual machines, each informatician or project team has full isolation, security, autonomy, and privacy for their work.
The Arvados platform provides shared common services that can be used from within a virtual machine. All of the Arvados services are accessible through APIs.
APIs and SDKs
Arvados is designed so that all of the data management and compute management services can be accessed through a consistent set of APIs and interfaces. Most of the functionality is exposed through a set of REST APIs. Some components use native interfaces (notably Keep and git). Arvados provides SDKs for popular languages (Python, Perl, Ruby, R, and Java) as well as a standalone tool for command-line use.
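Because the services are plain REST APIs, any HTTP client can talk to them without an SDK. The sketch below builds (but does not send) an authenticated request using only the Python standard library; the host name and token are placeholders, and you should take the exact endpoint paths and authentication scheme from your installation's API documentation:

```python
import urllib.request

# Placeholder values -- substitute your cluster's API host and a real token.
API_HOST = "arvados.example.com"
TOKEN = "xxxxx-example-token-xxxxx"

def list_collections_request(limit=10):
    """Build an authenticated GET request for a collection listing.
    The request is constructed but not sent, so this runs offline."""
    url = f"https://{API_HOST}/arvados/v1/collections?limit={limit}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {TOKEN}"})

req = list_collections_request()
```

The same pattern (base URL, resource path, bearer-style token header) applies to the other REST resources, which is what makes pipelines and applications portable across Arvados systems.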
Arvados includes Workbench, a browser-based UI that provides a convenient way to perform common browsing and searching tasks. Workbench also serves as an application portal, providing a point of access to applications running on Arvados.
[Figure: Technical architecture showing key components]