Keep service hints



Service hints are the mechanism used by Keep client libraries to read data that is not stored in Keep, by making use of Keep gateway services.

How the gateway services work is not covered here: this document only addresses how clients should decide when to use a gateway service, and which one to use.

Service hints do not address the matter of writing data to remote services, or de-duplicating writes across various services: if a client reads some data from a remote service using arv-get, and writes it back using arv-put, this will result in an additional local copy.

The intended audience for this document is software engineers.



Goals

Users should be able to create, manage, share, and run jobs on collections whose underlying data is stored in remote services like Amazon S3 and Google Cloud Storage. Users (and their compute jobs) should use the same tools and interfaces, regardless of whether the data is stored in such a remote service or natively in Keep.

  • A compute job that processes locally stored data should not have to be modified at all in order to process remote data.
  • A user should be able to use Workbench to share a collection with another user, without knowing whether the underlying data is stored locally or in a remote service.
  • Arvados should be able to move data from one storage system to another without disrupting users. For example, the portable_data_hash[1] of a collection must not change when the underlying data moves.
  • It should be possible for a collection to reference some data stored in remote service A, some data stored in remote service B, and some data stored on local Keep disks.

1 The portable_data_hash attribute of a Collection record is a cryptographic hash of the data. See the Collection API documentation (but note that page is currently out of date).

Current behavior

Currently, in order to use Arvados to work with data stored in a remote service (e.g., use it as an input to a Crunch job), a user must download it from the remote service and store it in Keep, typically using a shell VM as an intermediary.
  • curl https://... | arv-put -
  • This is inefficient. The entire dataset must be transferred from the source to the shell VM, and from there to Keep.
  • It is inconvenient. A user must figure out which data might be used in a given process, and download all of it to Keep before starting.
  • It uses a lot of storage space (in a typical use case, this approach stores two additional copies of the data on Keep disks, even though the user does not desire additional replication beyond what is provided by the remote service).
  • It is error-prone: it is easy for a user's "download and store in Keep" script to miss checking an exit code and store an incomplete dataset, and this might only be discovered much later (or not at all).


Alternatives considered

Client libraries could communicate directly with non-Keep services. Drawbacks:
  • It would be impossible to use Arvados permission controls.
  • An N×M array of code would have to be maintained in order to support N backing services from M SDK languages.
  • The API server would have to maintain the mapping of hashes to remote data objects (and permissions for this map).
  • It would be much more difficult (or impossible) to monitor usage.
Alternatively, each keepstore server could know how to communicate with each non-Keep service in use. This would simplify client code, but:
  • Artificial link between keep disk services and gateway services (they couldn't be independently scaled or shut down for maintenance).
  • External clients couldn't be given direct access to the third-party gateway services without also giving them direct access to the disk services.
  • Either the keepstore servers would have to keep their hash-to-remote-object mappings synchronized, or the map of hash to remote service would be distributed across servers. Either way introduces an unsuitable level of complexity: unlike in a native keepstore system, the underlying data is expected to change over time.
  • When encountering an error (notably 404), client code would make many redundant attempts to read from various gateway services, based on the mistaken assumption that the various services have different sets of available data blocks.

High level design

Clients interact with remote services through Keep gateway services. A gateway server responds to GET requests using the same protocol as a keepstore server. Instead of reading data from a local disk, though, it reads data from a remote service. Generally this means it maintains a local database mapping Keep locators (hashes) to remote data locators (and possibly credentials). From the client's perspective, it behaves exactly like any other Keep service: "GET /locator" returns either an error, or a data block whose MD5 hex digest is equal to the first 32 characters of the locator.
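The client-side contract in the last sentence can be checked in a few lines of Python (the helper name verify_block is illustrative, not part of any Arvados SDK):

```python
import hashlib

def verify_block(locator: str, data: bytes) -> bool:
    # The first 32 characters of a Keep locator are the MD5 hex digest
    # of the block; everything after the first "+" (size, +K hints,
    # signatures) is ignored for verification purposes.
    expected = locator.split("+")[0]
    return hashlib.md5(data).hexdigest() == expected

# MD5("foo") is acbd18db4cc2f85cedef654fccc4a4d8
print(verify_block(
    "acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe",
    b"foo"))  # True
```

A gateway that serves bytes failing this check is indistinguishable from a corrupt keepstore, so clients can reuse their existing integrity handling unchanged.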

This means tools (see Keep S3 gateway) can create manifests with +Kuuid hints, referencing data in remote storage services by indicating the UUID of a storage gateway capable of accessing it.
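For illustration, a manifest line referencing a single 3-byte block through such a gateway (the UUID and file name are made up) might look like:

```
. acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe 0:3:foo.txt
```

Everything here is an ordinary manifest entry except the +K hint, consistent with the goal above that moving data must not change a collection's portable_data_hash.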

Each client library, when encountering a locator with a +Kuuid hint, skips the usual rendezvous hashing algorithm. Instead of requesting a list of available services from the API server and sorting them in rendezvous order, it requests the particulars for the one specified service, and connects to it in order to retrieve the data.

Aside from the choice of which Keep service to contact, the form and semantics of the "retrieve data" transaction are unchanged.


Detailed design

With the existing client libraries, when a client reads a data block referenced by a manifest, it requests a list of "available keep services" from the API server, uses the rendezvous hashing algorithm to sort them, and contacts them in sorted order until the data is found.
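The rendezvous step can be sketched as follows. This is a simplification: the weight function in the real SDKs differs in detail, but the principle — a deterministic, per-block pseudo-random ordering of services that every client computes identically without coordination — is the same.

```python
import hashlib

def rendezvous_order(block_hash, service_uuids):
    # Rendezvous (highest-random-weight) hashing: each service gets a
    # weight derived from the block hash combined with the service's
    # identity, and services are probed in descending weight order.
    def weight(uuid):
        return hashlib.md5((block_hash + uuid).encode()).hexdigest()
    return sorted(service_uuids, key=weight, reverse=True)

services = ["zzzzz-bi6l4-aaaaaaaaaaaaaaa",
            "zzzzz-bi6l4-bbbbbbbbbbbbbbb",
            "zzzzz-bi6l4-ccccccccccccccc"]
print(rendezvous_order("acbd18db4cc2f85cedef654fccc4a4d8", services))
```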

Current client libraries recognize hints of the form +Kzzzzz (where zzzzz is an Arvados site prefix), indicating that the data should be retrieved from the Keep proxy service at a remote Arvados instance.

Service hints extend this approach by allowing the manifest to specify a Keep service endpoint for a data block.

A block locator provided by the API server in a manifest might have a hint of the form +Kuuid where uuid is the UUID of a keep service. In order to retrieve the block data, the client should look up the keep service with the given UUID, and perform an HTTP GET request at the appropriate host and port.

  • Given acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe,
    • Retrieve the keep_services record for 1h9kt-bi6l4-20fty0xbp8l9wwe from the API server to determine scheme, host, and port
    • Retrieve data from {scheme}://{host}:{port}/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe
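The two steps above amount to one keep_services lookup plus one GET. URL construction from the service record might look like this (field names follow the keep_services schema; the helper itself and the example values are illustrative):

```python
def service_url(keep_service, locator):
    # keep_service is a keep_services record as returned by the API
    # server, e.g. via GET /arvados/v1/keep_services/{uuid}.
    scheme = "https" if keep_service["service_ssl_flag"] else "http"
    return "{}://{}:{}/{}".format(scheme,
                                  keep_service["service_host"],
                                  keep_service["service_port"],
                                  locator)

svc = {"service_host": "keep.example.org",
       "service_port": 25107,
       "service_ssl_flag": True}
print(service_url(svc,
    "acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt-bi6l4-20fty0xbp8l9wwe"))
```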

As before, if a hint of the form +K{prefix} is given (where {prefix} is a string of five characters in [0-9a-z]), the client should perform a GET request at https://keep.{prefix}.arvadosapi.com/{locator}

  • Given acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt,
    • Retrieve data from https://keep.1h9kt.arvadosapi.com/acbd18db4cc2f85cedef654fccc4a4d8+3+K1h9kt

As before, if neither of the above hints is present, the client should use the rendezvous hashing algorithm on the list of available Keep services.
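Taken together, the three cases form a small dispatch rule. A sketch (keep_hint is an illustrative name, not an SDK function; the UUID pattern is the standard 5-5-15 Arvados form):

```python
import re

def keep_hint(locator):
    # Returns ("uuid", uuid) for a +K{service uuid} hint,
    # ("prefix", zzzzz) for a +K{site prefix} hint, or
    # ("rendezvous", None) when neither hint is present.
    for hint in locator.split("+")[1:]:
        if not hint.startswith("K"):
            continue
        body = hint[1:]
        if re.fullmatch(r"[0-9a-z]{5}", body):
            return ("prefix", body)
        if re.fullmatch(r"[0-9a-z]{5}-[0-9a-z]{5}-[0-9a-z]{15}", body):
            return ("uuid", body)
    return ("rendezvous", None)
```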

Future work

Arvados could manage service hints actively: for example, data manager could tag blocks with S3 bucket names, and API server could load-balance S3 gateways by selecting one of several available gateway UUIDs for a given block. (This would not require any further changes in client libraries.)

Data manager could update manifests to reflect additional locations where data blocks can be retrieved: for example, +Kuuid1+Kuuid2 to signify that multiple remote gateways can retrieve the data, or +K+Kuuid1 to signify that the data is available locally and via a remote gateway. (This would require some backward-compatible changes in client libraries.)
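Under that extension, clients would collect every +K hint on a locator instead of stopping at the first. An illustrative parser (the bare "+K" form above comes back as an empty string, meaning the data is also available locally):

```python
def all_k_hints(locator):
    # Return the body of every +K hint on a locator, in the
    # order written. A bare "+K" yields the empty string.
    return [h[1:] for h in locator.split("+")[1:] if h.startswith("K")]

print(all_k_hints(
    "acbd18db4cc2f85cedef654fccc4a4d8+3"
    "+K+K1h9kt-bi6l4-20fty0xbp8l9wwe"))  # ['', '1h9kt-bi6l4-20fty0xbp8l9wwe']
```

A client could then probe hinted locations in order, falling back to rendezvous hashing for the empty (local) hint — which is why the change is backward-compatible: single-hint clients simply use the first hint they understand.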

A gateway could permit Keep clients to write to a remote service. Service hints don't exist when data is being written, so clients would need some other way to decide when to write to a gateway server instead of a regular Keep disk/proxy service.

Updated by Tom Clegg over 9 years ago · 13 revisions