h1. Keep service hints

h3. Objective

Users should be able to create, manage, share, and run jobs on collections whose underlying data is stored in remote services like Amazon S3 and Google Cloud Storage. Users (and their compute jobs) should use the same tools and interfaces, regardless of whether the data is stored in such a remote service or natively in Keep.

Examples:
* A compute job that processes locally stored data should not have to be modified at all in order to process remote data.
* A user should be able to use Workbench to share a collection with another user, without knowing whether the underlying data is stored locally or in a remote service.
* Arvados should be able to move data from one storage system to another without disrupting users. For example, the portable_data_hash of a collection must not change when the underlying data moves.
* It should be possible for a collection to reference some data stored in remote service A, some data stored in remote service B, and some data stored on local Keep disks.

Service hints provide the mechanism Keep client programs use to access data through Keep gateway services. How the gateway services themselves work is not covered here: this document addresses only how clients know _when_ to use a gateway service, and _which one_ to use.

Service hints do not address _writing_ data to remote services, or de-duplicating writes across services: if a client reads some data from a remote service using arv-get and writes it back using arv-put, the result is an additional local copy.

h3. Background

*Using remote data*

Currently, in order to use Arvados to work with data stored in a remote service (e.g., use it as an input to a Crunch job), a user must download it from the remote service and store it in Keep, typically using a shell VM as an intermediary:

<pre>
curl https://... | arv-put -
</pre>

This has serious drawbacks:
* It is slow. A user cannot start working with the data until the entire dataset has been downloaded into Keep.
* It is inconvenient. A user must figure out which data _might_ be used in a given process, and download all of it to Keep before starting.
* It uses a lot of storage space (in a typical use case, this approach stores two additional copies of the data on Keep disks, even though the user does not want any replication beyond what the remote service already provides).

*Client behavior*

When a client reads a data block referenced by a manifest, it requests a list of available Keep services from the API server and (if there is more than one "disk" service on the list) uses the rendezvous hashing algorithm to select one.
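
For illustration only, here is a minimal Python sketch of that selection step. The weighting function shown (MD5 of the block hash concatenated with each service UUID) and the service list are simplified assumptions for the example, not necessarily the exact code used by the SDKs:

<pre>
import hashlib

def rendezvous_order(block_hash, services):
    # Weight each service by hashing the block hash together with the
    # service UUID, then sort: every client computes the same order
    # without any coordination.
    def weight(svc):
        return hashlib.md5((block_hash + svc['uuid']).encode()).hexdigest()
    return sorted(services, key=weight, reverse=True)

# Example: pick the first "disk" service in rendezvous order.
services = [
    {'uuid': 'zzzzz-bi6l4-0000000000000aa', 'service_type': 'disk'},
    {'uuid': 'zzzzz-bi6l4-0000000000000bb', 'service_type': 'disk'},
]
best = rendezvous_order('acbd18db4cc2f85cedef654fccc4a4d8', services)[0]
</pre>
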
An existing feature uses a hint of the form @+Kzzzzz@ (where @zzzzz@ is an Arvados site prefix) to provide transparent read-only access to data stored at a remote Arvados instance.

"Service hints" extend this approach by allowing the manifest to specify a Keep service endpoint for a data block.

h3. Alternatives

Client libraries could communicate directly with non-Keep services.
* It would be impossible to use Arvados permission controls.
* An N&times;M matrix of code would have to be maintained in order to support N backing services from M SDK languages.
* The API server would have to maintain the mapping of hashes to remote data objects (and the permissions for this map).
* It would be much more difficult (or impossible) to monitor usage.

Each keepstore server could know how to communicate with each non-Keep service in use.
* Simpler client code.
* Artificial coupling between Keep disk services and gateway services (they could not be scaled independently or shut down for maintenance).
* External clients could not be given direct access to the third-party gateway services without also giving them direct access to the disk services.
* Either the keepstore servers would have to keep their hash-to-remote-object mappings synchronized, or the map of hash to remote service would have to be distributed across the various servers. Either way introduces an unsuitable level of complexity: unlike in a native keepstore system, the underlying data is expected to change over time.
* When encountering an error (notably 404), client code would make many redundant attempts to read from various gateway services, based on the mistaken assumption that the various services have different sets of available data blocks.

h3. High level design

Clients interact with remote services through Keep gateway services. A gateway server responds to GET requests using the same protocol as a keepstore server. Instead of reading data from a local disk, though, it reads data from a remote service. Generally this means it maintains a local database mapping Keep locators (hashes) to remote data locators (and possibly credentials). From the client's perspective, it behaves exactly like any other Keep service: @GET /locator@ returns either an error, or a data block whose MD5 hex digest equals the first 32 characters of the locator.
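
As an illustration only, here is a Python sketch of the GET path such a gateway might implement. @remote_objects@ and @remote_client@ are hypothetical stand-ins for the gateway's local database and its remote-service client:

<pre>
import hashlib

# Hypothetical table mapping Keep block hashes to remote object locations.
remote_objects = {
    'acbd18db4cc2f85cedef654fccc4a4d8': ('example-bucket', 'path/to/object'),
}

def handle_get(locator, remote_client):
    # A locator looks like "hash+size+hints"; only the hash selects the data.
    block_hash = locator.split('+')[0]
    if block_hash not in remote_objects:
        return 404, b''
    bucket, key = remote_objects[block_hash]
    data = remote_client.get(bucket, key)   # hypothetical remote-service call
    # Same contract as keepstore: the response body must hash to the locator.
    if hashlib.md5(data).hexdigest() != block_hash:
        return 502, b''
    return 200, data
</pre>
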
This means tools (see [[Keep S3 gateway]]) can create manifests with @+Kuuid@ hints, referencing data in remote storage services by indicating the UUID of a storage gateway capable of accessing it.
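
For example (using the same hypothetical gateway UUID as in the detailed design below), a manifest line referencing a 3-byte file stored behind that gateway might look like:

<pre>
. acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe 0:3:foo.txt
</pre>
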
Each client library, when encountering a locator with a @+Kuuid@ hint, skips the usual rendezvous hashing algorithm. Instead of requesting a list of available services from the API server and sorting them in rendezvous order, it requests the particulars of the one specified service, and connects to it in order to retrieve the data.

Aside from the choice of which Keep service to contact, the form and semantics of the "retrieve data" transaction are unchanged.

h2. Specifics

h3. Detailed design

A block locator provided by the API server in a manifest may have a hint of the form @+Kuuid@, where @uuid@ is the UUID of a Keep service. In order to retrieve the block data, the client should look up the Keep service with the given UUID, and perform an HTTP @GET@ request at the corresponding host and port.

* Given @acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe@,
** Retrieve @https://1h9kt.arvadosapi.com/arvados/v1/keep_services/1h9kt-bi6l4-20fty0xbp8l9wwe@ to determine scheme, host, and port
** Retrieve data from @{scheme}://{host}{port}/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe@

As before, if a hint of the form @+K{prefix}@ is given (where @{prefix}@ is a string of five characters in @[0-9a-z]@), the client should perform a @GET@ request at @https://keep.{prefix}.arvadosapi.com/locator@.

* Given @acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt@,
** Retrieve data from @https://keep.1h9kt.arvadosapi.com/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt@
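
Putting the two cases together, a client library's lookup might resemble the following Python sketch. This is an illustration under the assumptions above, not the SDK implementation: @api_lookup@ is a placeholder for the @keep_services@ API call, the service record fields are assumed names, and token handling is omitted.

<pre>
import re
import requests

def block_url(locator, api_lookup):
    # Full Keep service UUID hint, e.g. +K1h9kt-bi6l4-20fty0xbp8l9wwe:
    # ask the API server for that one service and go straight to it.
    m = re.search(r'\+K([0-9a-z]{5}-bi6l4-[0-9a-z]{15})', locator)
    if m:
        svc = api_lookup(m.group(1))   # GET .../arvados/v1/keep_services/{uuid}
        return '{}://{}:{}/{}'.format(
            'https' if svc['service_ssl_flag'] else 'http',
            svc['service_host'], svc['service_port'], locator)
    # Bare site-prefix hint, e.g. +K1h9kt: use the remote site's Keep proxy.
    m = re.search(r'\+K([0-9a-z]{5})(?![-0-9a-z])', locator)
    if m:
        return 'https://keep.{}.arvadosapi.com/{}'.format(m.group(1), locator)
    return None   # no service hint: fall back to rendezvous hashing

def fetch(locator, api_lookup):
    url = block_url(locator, api_lookup)
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.content
</pre>

With no @+K@ hint present, @block_url@ returns @None@ and the client falls back to the normal rendezvous selection described above.
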
h2. Future work

Arvados could manage service hints actively: for example, data manager could tag blocks with S3 bucket names, and the API server could load-balance S3 gateways by selecting one of several available gateway UUIDs for a given block. (This would not require any further changes in client libraries.)

Data manager could update manifests to reflect additional locations where data blocks can be retrieved: for example, @+Kuuid1+Kuuid2@ to signify that multiple remote gateways can retrieve the data, or @+K+Kuuid1@ to signify that the data is available locally _and_ via a remote gateway. (This would require some backward-compatible changes in client libraries.)
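
If that multiple-hint form were adopted, a backward-compatible client could simply collect the hints and try the corresponding services in order. A tiny illustrative sketch of the parsing, assuming the syntax described above:

<pre>
import re

def hinted_services(locator):
    # Collect every +K hint. An empty hint (the bare "+K" in "+K+Kuuid1")
    # would mean "also try the local, rendezvous-selected Keep services".
    return re.findall(r'\+K([0-9a-z-]*)', locator)

# "+Kuuid1+Kuuid2" -> ['uuid1', 'uuid2']: try uuid1's gateway, then uuid2's.
# "+K+Kuuid1"      -> ['', 'uuid1']: try local services, then uuid1's gateway.
</pre>
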
A gateway could permit Keep clients to write to a remote service. Service hints don't exist when data is being written, so clients would need some other way to decide when to write to a gateway server instead of a regular Keep disk/proxy service.