Federated collections » History » Version 7

Version 6 (Peter Amstutz, 08/01/2018 08:28 PM) → Version 7/9 (Tom Clegg, 08/09/2018 07:03 PM)

h1. Federated collections

In a federation, a client on cluster A can read a collection that is hosted on cluster B. Cluster A pulls the metadata and file content from cluster B as needed. The client's behavior is exactly the same as it is for collections hosted on cluster A.

Cases:
* Fetch collection record by UUID
** use federated record retrieval strategy, already developed.
* Fetch collection record by PDH
** No location hint. Distribute request to all federated clusters and pick one to return.
** Read-only, only need to support GET operation
** Can cache result by PDH.
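The PDH case above amounts to a simple fan-out: ask every federated cluster, take the first hit. A rough sketch, where @fetch_collection_by_pdh@ is a hypothetical stand-in for the per-cluster API request (backed here by a fake in-memory table):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the API call to one federated cluster:
# returns the collection record, or None if that cluster doesn't have it.
def fetch_collection_by_pdh(cluster_id, pdh):
    fake_records = {
        ("ccccc", "acbd18db4cc2f85cedef654fccc4a4d8+3"):
            {"uuid": "ccccc-4zz18-0123456789abcde"},
    }
    return fake_records.get((cluster_id, pdh))

def federated_get_by_pdh(clusters, pdh):
    # Distribute the request to all federated clusters in parallel and
    # return the first record found (PDH is content-addressed, so any
    # cluster's copy is equivalent).
    with ThreadPoolExecutor(max_workers=max(len(clusters), 1)) as pool:
        for record in pool.map(lambda c: fetch_collection_by_pdh(c, pdh), clusters):
            if record is not None:
                return record
    return None  # no federated cluster has this collection
```

Because the result is content-addressed, it is also safe to cache by PDH, as noted above.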

The record will have a manifest with signed blocks. However, these blocks will be signed for the origin cluster. The client needs to be able to fetch blocks from the remote cluster.
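For reference, a signed block locator looks like @acbd18db4cc2f85cedef654fccc4a4d8+3+Aabcdef@12345678@: md5 hash, size, then hint fields (an "A..." hint is a permission signature). A rough parsing sketch — the regex and field names are illustrative, not the SDK's actual parser:

```python
import re

# Block locator: 32-hex md5 hash, "+", size in bytes, then zero or more
# "+<hint>" fields ("A..." is a permission signature, "K@..." a cluster hint).
LOCATOR_RE = re.compile(
    r"^(?P<hash>[0-9a-f]{32})\+(?P<size>\d+)(?P<hints>(\+[A-Z][^+\s]*)*)$")

def parse_locator(locator):
    m = LOCATOR_RE.match(locator)
    if m is None:
        raise ValueError("not a valid block locator: %r" % locator)
    return {
        "hash": m.group("hash"),
        "size": int(m.group("size")),
        "hints": [h for h in m.group("hints").split("+") if h],
    }
```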

arvados-controller could add block hints, using an existing feature in the Python and Go SDKs:

* Blocks in a manifest can include a hint in the form "+K@zzzzz". The Python SDK will attempt to fetch the block from "https://keep.zzzzz.arvadosapi.com/"
** Must conform to a particular DNS naming scheme.
** Could be generalized by looking up in "remote_hosts" and using the "keep_services.accessible" API.
** Every block will be requested from the remote cluster every time; because the client contacts the remote server directly, there is limited opportunity for edge caching.

* Hint can also be the uuid of a "local gateway service". This instructs the client to use a specific service from the keep_services table (indicated by a "service_type" of "gateway:")
** Directs requests through a specific service
** Does not encode which remote cluster to pull a block from.
** Gateway service could search for blocks by sending a request to every federated cluster.
** Gateway service can cache blocks so they don't need to be re-fetched from the remote cluster.
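For illustration, this is roughly how a client could turn a "+K@zzzzz" hint into a request URL under the DNS naming scheme above (@keep_url_for_hint@ is a hypothetical helper, not the SDK's real code):

```python
def keep_url_for_hint(locator):
    # A "+K@zzzzz" hint tells the client to try the remote cluster's keep
    # endpoint, derived from the cluster id by the DNS naming scheme.
    for hint in locator.split("+")[2:]:
        if hint.startswith("K@"):
            cluster_id = hint[2:]
            return "https://keep.%s.arvadosapi.com/%s" % (cluster_id, locator)
    return None  # no cluster hint: read from local keep services
```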

Both "hint" schemes are slightly inelegant because they require repeating the "+K@" hint for every block in the manifest.

We probably want an architecture that makes block caching possible, even if the first-pass implementation doesn't support it. That implies a gateway / proxy service rather than having the client contact the remote cluster directly. (Architecturally, this is also more in line with the arvados-controller design of acting as an intermediary, as opposed to adding federation features to the client.)

h2. Proposal

Arvados-controller decorates blocks with "+K@zzzzz" hints, but the client implementation changes so that instead of contacting the remote host directly, the client contacts the local gateway service and requests the block with the cluster hint and block signature (which is returned by the remote cluster).

The local gateway service requests the block from the appropriate cluster, and returns the result.

A simple caching strategy would be to copy the block to local keep storage, and maintain a mapping from the remote signature(s) to a local signature. If a request comes in for a block which has recently been fetched, the gateway can issue a HEAD request to verify the remote signature and then remember it.
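The signature-mapping cache described above could look like the following sketch, where @verify_head@ is a hypothetical stand-in for the HEAD request to the remote cluster:

```python
class SignatureCache:
    # Sketch: maps a remote block+signature to a locally re-signed
    # locator.  verify_head(remote_locator) stands in for a HEAD request
    # to the remote cluster that confirms the signature is still valid.
    def __init__(self, verify_head):
        self.verify_head = verify_head
        self.remote_to_local = {}  # could also live in a local database

    def lookup(self, remote_locator):
        local = self.remote_to_local.get(remote_locator)
        if local is None:
            return None                      # never fetched: must proxy
        if not self.verify_head(remote_locator):
            del self.remote_to_local[remote_locator]  # signature no longer valid
            return None
        return local                         # serve from local keep

    def store(self, remote_locator, local_locator):
        self.remote_to_local[remote_locator] = local_locator
```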
h3. Fetching collection flow:

# Running on cluster aaaaa
# Client sends request to arvados-controller by PDH
# arvados-controller searches its local database and comes up empty
# arvados-controller sends a request for the collection by PDH (with a salted token) out to federated clusters bbbbb and ccccc
# ccccc returns a result
# arvados-controller decorates the returned record with "+K@ccccc" block hints
# arvados-controller returns the record to the client
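The decoration step — appending "+K@ccccc" to every block locator in the manifest — might be sketched like this (illustrative only, not the production implementation):

```python
import re

# Matches a block locator inside a manifest: 32-hex hash, size, hints.
BLOCK_RE = re.compile(r"\b[0-9a-f]{32}\+\d+(\+\S+)*")

def add_cluster_hints(manifest_text, cluster_id):
    # Append a "+K@<cluster>" hint to every block locator, so the client
    # (or gateway) knows which federated cluster to fetch each block from.
    return BLOCK_RE.sub(lambda m: m.group(0) + "+K@" + cluster_id,
                        manifest_text)
```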


h3. Fetching block flow:

# client wishes to read a file
# client has a signed block locator with "+K@ccccc" hint
# client sends request to the "gateway" Keep service
# gateway keep service contacts keepproxy on ccccc and requests the block
# keepproxy on ccccc returns block content to the gateway
# gateway returns block content to the client
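The uncached flow reduces to extracting the cluster hint and proxying. In this sketch, @fetch_from_cluster@ is a hypothetical stand-in for the outbound keepproxy request:

```python
def gateway_get_block(locator, fetch_from_cluster):
    # Find the "+K@zzzzz" cluster hint in the signed locator, then proxy
    # the request to that cluster's keepproxy.
    hint = next((h[2:] for h in locator.split("+") if h.startswith("K@")), None)
    if hint is None:
        raise ValueError("no +K@ cluster hint in locator: %r" % locator)
    return fetch_from_cluster(hint, locator)
```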

h3. Fetching block, with caching:

# client wishes to read a file
# client has a signed block locator with "+K@ccccc" hint
# client sends request to the "gateway" Keep service
# before proxying, the gateway service looks up the block in memory / local database
## if found, check whether the signature is cached
## if the block signature isn't cached, send a HEAD request to ccccc
## if the signature checks out, fetch the block from aaaaa keepstore and return that
## else fail (because the HEAD request must have failed)
# otherwise, gateway keep service contacts keepproxy on ccccc and requests the block
# keepproxy on ccccc returns block content to the gateway
# gateway saves the block to aaaaa local keep and records a mapping of remote block+signature to local block+signature (could be in memory, or a local database such as sqlite)
# gateway returns block content to the client
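A simplified sketch of that caching flow (it omits the HEAD re-check of a cached signature; @fetch_remote@, @fetch_local@, and @save_local@ are hypothetical stand-ins for keepproxy and local keepstore requests):

```python
def gateway_get_block_cached(locator, cache, fetch_remote, fetch_local, save_local):
    # cache maps remote signed locator -> locally signed locator; it could
    # be in memory or a local database such as sqlite.
    local_locator = cache.get(locator)
    if local_locator is not None:
        return fetch_local(local_locator)      # serve from local keep
    cluster_id = next(h[2:] for h in locator.split("+") if h.startswith("K@"))
    data = fetch_remote(cluster_id, locator)   # keepproxy on remote cluster
    cache[locator] = save_local(data)          # copy block into local keep
    return data
```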

h2. Optimization: Identical content exists on cluster A

When proxying a "get collection by UUID" request
to cluster B, cluster A might notice that the PDH returned by cluster B matches a collection stored on cluster A. In this case, all data blocks are already stored locally: it can replace cluster B's signatures with its own, and the client will end up reading the blocks from local volumes.

To avoid an information leak, a configuration setting can restrict this optimization to cases where the caller's token has permission to read the existing local collection.
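This optimization, including the permission check, might be sketched as follows (@local_pdh_index@ and @allowed_to_read@ are hypothetical stand-ins for a database lookup and the token permission check):

```python
def maybe_localize(record, local_pdh_index, allowed_to_read):
    # If the remote collection's PDH matches a local collection, and the
    # caller's token may read that local collection (avoiding the
    # information leak noted above), swap in the locally signed manifest
    # so the client reads blocks from local volumes.
    local = local_pdh_index.get(record["portable_data_hash"])
    if local is not None and allowed_to_read(local["uuid"]):
        record = dict(record, manifest_text=local["manifest_text"])
    return record
```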

h2. Development tasks

* #13993 [API] arvados-controller support for fetching collection records by UUID and PDH
* #13994 [Keepstore] arvados-gateway to fetch keep blocks from remote clusters
* Update keep client in Python and Go SDK to use arvados-gateway