h1. Reverse Keep Proxy 

h2. Problem

We need to be able to automatically upload huge (>1 TiB) datasets into Arvados. The currently proposed solution is to upload the data to a staging area and then copy it into Keep. On further consideration, this solution is inadequate for a number of reasons:
* Must set aside a staging area big enough to accommodate large uploads.
* When uploads are not occurring, this empty space just sits around, costing money.
* Amazon has a 1 TiB limit on EBS volumes, which means we can't accept >1 TiB datasets unless we create volume-spanning partitions.
* Multiple users uploading to the same staging partition can end up in a starvation deadlock if the volume fills up.
* Some of these problems could be addressed by allocating/deallocating volumes on the fly, but this adds significant complexity.
* Once the data is uploaded, it still needs to be copied into Keep, which adds wait time before the data is actually ready to use.

h2. Solution

Provide a Keep client that sends blocks to a reverse Keep proxy, which forwards the blocks to the appropriate internal Keep servers.
* Doesn't require staging, except in the RAM of the Keep proxy.
* No dataset size limit except Keep's overall capacity.
* Fewer contention problems (although many simultaneous uploaders could overwhelm the proxy node...)
* Data is available immediately once the upload is completed.
* This is the right thing to do in the long term anyway. We shouldn't waste our time with messy hacks.

h2. Approach

# Develop a subset of the Arvados SDK in Go that supports accessing the API server and can write to Keep servers (reading from Keep is out of scope). Sketches of the block upload, small-file packing, and manifest construction appear after this list.
** Read files in 64 MiB blocks and calculate hashes
** Pack small files into a single block
** Put 64 MiB blocks to the Keep server over HTTPS
** Create a manifest (should be in normalized form)
** Write the manifest to Keep
** Use the Google API client to talk to the API server to create the collection and metadata links
# Develop an uploader program in Go to recursively upload a directory structure.
** Take the API server, API token, and directory path on the command line (plus additional metadata links to set on the collection after it is completed)
** Should be a self-contained static x64 ELF binary with minimal dependencies that will run on any modern x64 Linux.
** Use the Go Keep client library to upload blocks, create the manifest, upload the manifest to the API server, and add metadata links.
** Should checkpoint during upload so that an upload can be canceled and resumed (see the checkpoint sketch after this list).
# Reverse Keep proxy
** Publicly accessible head node providing write access into Keep (read access is out of scope for this task)
** List the proxy's contact info in the discovery document
** Check the API token to ensure the client has permission to write
** Accept blocks from the client and forward them to the internal Keep cluster. Extend the existing Keep Go server by writing a new volume backend that writes to the appropriate internal Keep servers instead of to disk (see the proxy volume sketch after this list).
** The hash and user account associated with each uploaded block are logged to the API server
** Writing to the internal Keep servers and the API server will use the Arvados Go SDK
# API server
** API call allowing normal users to create special user accounts that use a combination of limited permissions and scopes to restrict them to uploading tasks. Scopes alone are not powerful enough, because a scope cannot restrict the uploader to only creating links about collections known to the uploader.
** Restricted to a few tasks, such as creating collections and creating metadata links about those collections.
** The restricted account is owned by the Arvados user, so the user can see and change everything the uploader account owns.
** The uploader account can be deactivated when the user is done with it.
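
h2. Implementation sketches

These sketches are illustrative only, not part of the spec. The first covers step 1's block upload path: read a file in 64 MiB blocks, hash each block, and PUT it to a Keep server over HTTPS. It assumes the Keep server accepts @PUT /<md5 hash>@ with the raw block as the request body and an @Authorization: OAuth2 <token>@ header, and that a locator has the form @<md5 hash>+<size>@; @keepURL@ and @token@ are hypothetical caller-supplied parameters.

<pre><code class="go">
package keepclient

import (
	"bytes"
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

const blockSize = 64 << 20 // 64 MiB Keep block size

// putBlock uploads one block to a Keep server over HTTPS and returns
// its locator ("<md5 hash>+<size>").
func putBlock(keepURL, token string, block []byte) (string, error) {
	hash := fmt.Sprintf("%x", md5.Sum(block))
	req, err := http.NewRequest("PUT", keepURL+"/"+hash, bytes.NewReader(block))
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("keep server returned %s", resp.Status)
	}
	return fmt.Sprintf("%s+%d", hash, len(block)), nil
}

// uploadFile reads a file in 64 MiB blocks, hashes and puts each one,
// and returns the resulting locators in order.
func uploadFile(keepURL, token, path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var locators []string
	buf := make([]byte, blockSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			loc, perr := putBlock(keepURL, token, buf[:n])
			if perr != nil {
				return nil, perr
			}
			locators = append(locators, loc)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return locators, nil // final (possibly short) block done
		}
		if err != nil {
			return nil, err
		}
	}
}
</code></pre>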
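
Packing small files into a shared block (also step 1) could look like the following. This continues the @keepclient@ sketch above (it reuses @putBlock@ and @blockSize@) and assumes each small file fits within a single block; larger files take the @uploadFile@ path instead.

<pre><code class="go">
package keepclient

import "bytes"

// segment records one file's position within the packed stream.
type segment struct {
	Name           string
	Offset, Length int64
}

// packer accumulates small files into shared 64 MiB blocks. Offsets
// are tracked relative to the whole packed stream so the segments can
// be turned into manifest file tokens directly.
type packer struct {
	keepURL, token string
	buf            bytes.Buffer // current, partially filled block
	flushed        int64        // bytes already uploaded in full blocks
	locators       []string     // locators of uploaded blocks
	segments       []segment    // where each file lives in the stream
}

// Add appends one small file to the current block, flushing first if
// the file would overflow the 64 MiB block limit.
func (p *packer) Add(name string, data []byte) error {
	if p.buf.Len()+len(data) > blockSize {
		if err := p.Flush(); err != nil {
			return err
		}
	}
	p.segments = append(p.segments, segment{
		Name:   name,
		Offset: p.flushed + int64(p.buf.Len()),
		Length: int64(len(data)),
	})
	p.buf.Write(data)
	return nil
}

// Flush uploads whatever is buffered as a single block.
func (p *packer) Flush() error {
	if p.buf.Len() == 0 {
		return nil
	}
	loc, err := putBlock(p.keepURL, p.token, p.buf.Bytes())
	if err != nil {
		return err
	}
	p.locators = append(p.locators, loc)
	p.flushed += int64(p.buf.Len())
	p.buf.Reset()
	return nil
}
</code></pre>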
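
For manifest creation, a manifest line lists a stream name, the block locators whose concatenation forms the stream data, and @position:length:filename@ tokens into that concatenation, e.g. @. acbd18db4cc2f85cedef654fccc4a4d8+3 0:3:foo.txt@. A minimal single-stream construction, assuming each file's blocks contain only that file's data (the packer case needs its recorded segment offsets instead):

<pre><code class="go">
package keepclient

import (
	"fmt"
	"strings"
)

// uploadedFile describes one file after its blocks have been put to
// Keep (via uploadFile above).
type uploadedFile struct {
	Name     string
	Size     int64
	Locators []string
}

// buildManifest emits one "." stream line: locators first, then
// position:length:name tokens into the concatenated block data. A
// real normalizer would also sort files and deduplicate shared
// blocks; here each file's blocks hold only that file's bytes, so
// its position is just the running total of preceding file sizes.
func buildManifest(files []uploadedFile) string {
	var locs, tokens []string
	offset := int64(0)
	for _, f := range files {
		locs = append(locs, f.Locators...)
		tokens = append(tokens, fmt.Sprintf("%d:%d:%s", offset, f.Size, f.Name))
		offset += f.Size
	}
	return ". " + strings.Join(locs, " ") + " " + strings.Join(tokens, " ") + "\n"
}
</code></pre>

The manifest text itself can then be written to Keep as an ordinary block (with @putBlock@) and the collection registered with the API server, per the last two bullets of step 1.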
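
One possible shape for step 2's checkpoint/resume requirement: record each successfully uploaded locator in a state file, and on restart skip blocks that are already recorded. The state-file path and one-locator-per-line format are illustrative, not part of the spec.

<pre><code class="go">
package uploader

import (
	"bufio"
	"os"
)

// loadCheckpoint reads locators recorded by a previous run, one per
// line; a missing file just means nothing has been uploaded yet.
func loadCheckpoint(path string) (map[string]bool, error) {
	done := map[string]bool{}
	f, err := os.Open(path)
	if os.IsNotExist(err) {
		return done, nil
	}
	if err != nil {
		return nil, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		done[sc.Text()] = true
	}
	return done, sc.Err()
}

// recordCheckpoint appends one locator after a successful PUT, so a
// canceled upload can resume where it left off.
func recordCheckpoint(path, locator string) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(locator + "\n")
	return err
}
</code></pre>

The uploader would consult @loadCheckpoint@ before each block PUT and call @recordCheckpoint@ after; since Keep blocks are content-addressed, re-putting a block that was uploaded but not recorded is harmless.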
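
For step 3's volume backend, the sketch below shows only the forwarding idea; the actual volume interface of the existing Keep Go server is not reproduced here, and the rendezvous-style per-hash ordering is an assumption (chosen so every proxy instance probes the internal servers in the same order for a given block), not something this spec prescribes.

<pre><code class="go">
package keepproxy

import (
	"bytes"
	"crypto/md5"
	"fmt"
	"net/http"
	"sort"
)

// RemoteVolume stands in for a Keep server volume backend that
// forwards writes to internal Keep servers instead of local disk.
type RemoteVolume struct {
	Servers []string // base URLs of the internal Keep servers
	Token   string   // internal service token (hypothetical)
}

// Put forwards a block to the internal servers, trying them in a
// deterministic per-hash order and stopping at the first success.
func (v *RemoteVolume) Put(hash string, block []byte) error {
	order := append([]string(nil), v.Servers...)
	sort.Slice(order, func(i, j int) bool {
		return weight(hash, order[i]) > weight(hash, order[j])
	})
	var lastErr error
	for _, srv := range order {
		req, err := http.NewRequest("PUT", srv+"/"+hash, bytes.NewReader(block))
		if err != nil {
			return err
		}
		req.Header.Set("Authorization", "OAuth2 "+v.Token)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			lastErr = err
			continue // server unreachable: try the next one
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		lastErr = fmt.Errorf("%s: %s", srv, resp.Status)
	}
	return lastErr
}

// weight gives each (hash, server) pair a sortable score, so block
// placement is stable across proxy instances.
func weight(hash, server string) string {
	return fmt.Sprintf("%x", md5.Sum([]byte(hash+server)))
}
</code></pre>

Logging the hash and user account for each uploaded block (the audit bullet in step 3) would hook in around this same @Put@ call.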