
h1. Keep service hints 

 h3. Objective 

 Users should be able to create, manage, share, and run jobs on collections whose underlying data is stored in remote services like Amazon S3 and Google Cloud Storage. Users (and their compute jobs) should use the same tools and interfaces, regardless of whether the data is stored in such a remote service or natively in Keep. 

 Examples: 
 * A compute job that processes locally stored data should not have to be modified at all in order to process remote data. 
 * A user should be able to use Workbench to share a collection with another user, without knowing whether the underlying data is stored locally or in a remote service. 
 * Arvados should be able to move data from one storage system to another without disrupting users. For example, the portable_data_hash of a collection must not change when the underlying data moves. 
 * It should be possible for a collection to reference some data stored in remote service A, some data stored in remote service B, and some data stored on local Keep disks. 

 Service hints provide the mechanism used by Keep client programs to access data through Keep gateway services. How the gateway services work is not covered here: this document only addresses how clients know _when_ to use a gateway service, and _which one_ to use. 

Service hints do not address the matter of _writing_ data to remote services, or de-duplicating writes across various services: if a client reads some data from a remote service using arv-get, and writes it back using arv-put, this will result in an additional local copy.

 h3. Background 

 *Using remote data* 

 Currently, in order to use Arvados to work with data stored in a remote service (e.g., use it as an input to a Crunch job), a user must download it from the remote service and store it in Keep, typically using a shell VM as an intermediary. 

 <pre> 
 curl https://... | arv-put - 
 </pre> 

 This has serious drawbacks: 
 * It is slow. A user cannot start working with the data until the entire dataset has been downloaded into Keep. 
 * It is inconvenient. A user must figure out which data _might_ be used in a given process, and download all of it to Keep before starting. 
 * It uses a lot of storage space (in a typical use case, this approach stores two additional copies of the data on Keep disks, even though the user does not desire additional replication beyond what is provided by the remote service). 

 *Client behavior* 

 When a client reads a data block referenced by a manifest, it requests a list of "available keep services" from the API server and (if there is more than one "disk" service on the list) uses the rendezvous hashing algorithm to select one. 
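
To make the selection step concrete, here is a minimal Python sketch of rendezvous-style ordering. The scoring function shown (MD5 of the block hash concatenated with the service UUID) illustrates the technique; the exact inputs the Arvados SDKs feed into the hash may differ, and the service UUIDs below are hypothetical.

<pre>
import hashlib

def rendezvous_order(block_hash, service_uuids):
    # Hash the block hash together with each service UUID and sort the
    # services by the resulting score.  Every client computes the same
    # ordering for a given block, so requests for that block converge
    # on the same servers without any central coordination.
    def score(uuid):
        return hashlib.md5((block_hash + uuid).encode()).hexdigest()
    return sorted(service_uuids, key=score)

# Hypothetical service UUIDs; the client tries the first service first.
services = ['zzzzz-bi6l4-000000000000001', 'zzzzz-bi6l4-000000000000002']
print(rendezvous_order('acbd18db4cc2f85cedef654fccc4a4d8', services)[0])
</pre>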

 An existing feature uses a hint of the form @+Kzzzzz@ (where zzzzz is an Arvados site prefix) to provide transparent read-only access to data stored at a remote Arvados instance. 

 "Service hints" extend this approach by allowing the manifest to specify a Keep service endpoint for a data block. 

 h3. Alternatives 

 Client libraries could communicate directly with non-Keep services. 
 * It would be impossible to use Arvados permission controls. 
 * An N&times;M array of code would have to be maintained in order to support N backing services from M SDK languages. 
 * The API server would have to maintain the mapping of hashes to remote data objects (and permissions for this map). 
 * It would be much more difficult (or impossible) to monitor usage. 

 Each keepstore server could know how to communicate with each non-Keep service in use. 
 * Simpler client code. 
 * Artificial link between keep disk services and gateway services (they couldn't be independently scaled or shut down for maintenance). 
 * External clients couldn't be given direct access to the third-party gateway services without also giving them direct access to the disk services. 
* Either the keepstore servers would have to keep their hash-to-remote-object mappings synchronized, or the map of hash to remote service would have to be partitioned across servers. Either way introduces unwelcome complexity: unlike blocks in a native keepstore system, the underlying remote data is expected to change over time.
 * When encountering an error (notably 404), client code would make many redundant attempts to read from various gateway services, based on the mistaken assumption that the various services have different sets of available data blocks. 

 h3. High level design 

Clients interact with remote services through Keep gateway services. A gateway server responds to GET requests using the same protocol as a keepstore server. Instead of reading data from a local disk, though, it reads data from a remote service. Generally this means it maintains a local database mapping Keep locators (hashes) to remote data locators (and possibly credentials). From the client's perspective, it behaves exactly the same as any other keep service: @"GET /locator"@ returns either an error, or a data block whose MD5 hex digest is equal to the first 32 characters of the locator.
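
A client can verify this contract after each read. For illustration, a minimal Python sketch (@base_url@ is a placeholder for the service endpoint; permission signatures, timeouts, and retries are omitted):

<pre>
import hashlib
import urllib.request

def get_block(base_url, locator):
    # GET the block from a Keep service, then check that the MD5 digest
    # of the returned bytes equals the first 32 characters of the locator.
    with urllib.request.urlopen('%s/%s' % (base_url, locator)) as resp:
        data = resp.read()
    if hashlib.md5(data).hexdigest() != locator[:32]:
        raise IOError('checksum mismatch for %s' % locator)
    return data
</pre>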

 This means tools (see [[Keep S3 gateway]]) can create manifests with @+Kuuid@ hints, referencing data in remote storage services by indicating the UUID of a storage gateway capable of accessing it. 

 Each client library, when encountering a locator with a @+Kuuid@ hint, skips the usual rendezvous hashing algorithm. Instead of requesting a list of available services from the API server and sorting them in rendezvous order, it requests the particulars for the one specified service, and connects to it in order to retrieve the data. 

 Aside from the choice of which Keep service to contact, the form and semantics of the "retrieve data" transaction are unchanged. 

 h2. Specifics 

 h3. Detailed design 

 A block locator provided by the API server in a manifest might have a hint of the form @+Kuuid@ where @uuid@ is the UUID of a keep service. In order to retrieve the block data, the client should look up the keep service with the given UUID, and perform an HTTP @GET@ request at the appropriate host and port. 

 * Given @acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe@, 
 ** Retrieve @https://1h9kt.arvadosapi.com/arvados/v1/keep_services/1h9kt-bi6l4-20fty0xbp8l9wwe@ to determine scheme, host, port 
** Retrieve data from @{scheme}://{host}:{port}/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe@
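
Putting the lookup and retrieval steps together, a hedged Python sketch: it assumes the Arvados Python SDK's generated @keep_services().get()@ call and the @service_host@ / @service_port@ / @service_ssl_flag@ fields of the keep service record, and reuses the @get_block@ verification sketch above.

<pre>
import re

import arvados  # assumes the Arvados Python SDK is installed and configured

def service_url_for(locator):
    # A UUID-style +K hint names a single keep service; look it up via
    # the API server instead of running rendezvous hashing.
    m = re.search(r'\+K([0-9a-z]{5}-[0-9a-z]{5}-[0-9a-z]{15})', locator)
    if m is None:
        return None  # no service hint: fall back to the usual service list
    svc = arvados.api('v1').keep_services().get(uuid=m.group(1)).execute()
    scheme = 'https' if svc['service_ssl_flag'] else 'http'
    return '%s://%s:%s' % (scheme, svc['service_host'], svc['service_port'])

# data = get_block(service_url_for(locator), locator)  # see earlier sketch
</pre>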

As before, if a hint of the form @+K{prefix}@ is given (where @{prefix}@ is a string of five characters in @[0-9a-z]@), the client should perform a @GET@ request at @https://keep.{prefix}.arvadosapi.com/{locator}@.

 * Given @acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt@, 
 ** Retrieve data from @https://keep.1h9kt.arvadosapi.com/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt@ 
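
The two hint forms can be told apart by shape alone: a full service UUID versus a bare five-character site prefix. A small Python sketch:

<pre>
import re

def classify_hint(hint):
    # "hint" is the text following "+K" in a block locator.
    if re.fullmatch(r'[0-9a-z]{5}-[0-9a-z]{5}-[0-9a-z]{15}', hint):
        return 'service-uuid'  # look up the keep service record by UUID
    if re.fullmatch(r'[0-9a-z]{5}', hint):
        return 'site-prefix'   # GET https://keep.{prefix}.arvadosapi.com/...
    raise ValueError('unrecognized +K hint: %r' % hint)

assert classify_hint('1h9kt-bi6l4-20fty0xbp8l9wwe') == 'service-uuid'
assert classify_hint('1h9kt') == 'site-prefix'
</pre>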

 h2. Future work 

 Arvados could manage service hints actively: for example, data manager could tag blocks with S3 bucket names, and API server could load-balance S3 gateways by selecting one of several available gateway UUIDs for a given block. 

 Data manager could update manifests to reflect additional locations where data blocks can be retrieved: for example, @+Kuuid1+Kuuid2@ to signify that multiple remote gateways can retrieve the data, or @+K+Kuuid1@ to signify that the data is available locally _and_ via a remote gateway. 
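
If multi-hint locators like these were adopted, a client could try each hinted source in manifest order. A speculative Python sketch of parsing such a locator (this format is a proposal, not current behavior):

<pre>
import re

def hinted_sources(locator):
    # Yield one source per +K hint, in the order they appear.  Under the
    # proposed format, an empty hint ("+K" with no value) would mean the
    # local keep services, chosen by rendezvous hashing as usual.
    for hint in re.findall(r'\+K([0-9a-z-]*)', locator):
        yield 'local-rendezvous' if hint == '' else hint

print(list(hinted_sources(
    'acbd18db4cc2f85cedef654fccc4a4d8+3+K+K1h9kt-bi6l4-20fty0xbp8l9wwe')))
# ['local-rendezvous', '1h9kt-bi6l4-20fty0xbp8l9wwe']
</pre>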

 A gateway could permit Keep clients to write to a remote service. Service hints don't exist when data is being written, so clients would need some other way to decide when to write to a gateway server instead of a regular Keep disk/proxy service.