
h1. Keep storage tiers 

 Typically, an Arvados cluster has access to multiple storage devices with different cost/performance trade-offs. 

 Examples: 
 * Local SSD 
 * Local HDD 
 * Object storage service provided by cloud vendor 
 * Slower or less reliable object storage service provided by same cloud vendor 

 Users should be able to specify a minimum storage tier for each collection. Arvados should ensure that every data block referenced by a collection is stored at the specified tier _or better_. 

 The cluster administrator should be able to specify a default tier, and assign a tier number to each storage device. 

 It should be possible to configure multiple storage devices at the same tier: for example, this allows blocks to be distributed more or less uniformly across several (equivalent) cloud storage buckets for performance reasons. 

 h1. Implementation (proposal) 

The storage tier features (and their implementation) are closely analogous to the existing replication-level features.

 h2. Configuration 

 Each Keep volume has an integer parameter, "tier". Interpretation is site-specific, except that when M≤N, tier M can satisfy a requirement for tier N, i.e., smaller tier numbers are better. 

 There is a site-wide default tier number which is used for collections that do not specify a desired tier. 
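
For illustration, here is a minimal Go sketch of the tier-satisfaction rule described above; the Volume type, CanSatisfy helper, and DefaultTier constant are made up for the example, not existing keepstore configuration names:

<pre><code class="go">
package main

import "fmt"

// DefaultTier is a hypothetical site-wide default tier, used for
// collections that do not specify a desired tier.
const DefaultTier = 2

// Volume is a hypothetical stand-in for one configured Keep volume.
type Volume struct {
	Name string
	Tier int // site-specific meaning; smaller numbers are better
}

// CanSatisfy reports whether a volume at tier M can satisfy a
// requirement for tier N, i.e., whether M <= N.
func (v Volume) CanSatisfy(requiredTier int) bool {
	return v.Tier <= requiredTier
}

func main() {
	ssd := Volume{Name: "local-ssd", Tier: 1}
	cold := Volume{Name: "cloud-cold-storage", Tier: 3}
	fmt.Println(ssd.CanSatisfy(DefaultTier))  // true: tier 1 can satisfy tier 2
	fmt.Println(cold.CanSatisfy(DefaultTier)) // false: tier 3 cannot satisfy tier 2
}
</code></pre>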

 h2. Storing data at a non-default tier 

 Tools that write data to Keep should allow the caller to specify a storage tier. The desired tier is sent to Keep services as a header (X-Keep-Desired-Tier) with each write request. Keep services return an error when the data cannot be written to the requested tier (or better). 
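
As a sketch, a client-side write under this proposal might look like the following; the putBlock helper and its parameters are hypothetical, and only the X-Keep-Desired-Tier header comes from the proposal itself:

<pre><code class="go">
package keepexample

import (
	"bytes"
	"fmt"
	"net/http"
)

// putBlock writes one block to a Keep service, requesting a specific
// storage tier via the proposed X-Keep-Desired-Tier header. keepURL,
// token, and hash are placeholders for the usual Keep write parameters.
func putBlock(keepURL, token, hash string, data []byte, tier int) error {
	req, err := http.NewRequest("PUT", keepURL+"/"+hash, bytes.NewReader(data))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	req.Header.Set("X-Keep-Desired-Tier", fmt.Sprintf("%d", tier))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Per the proposal, the service returns an error when the data
		// cannot be written to the requested tier (or better).
		return fmt.Errorf("keepstore returned %s", resp.Status)
	}
	return nil
}
</code></pre>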

 h2. Moving data between tiers 

 Each collection has an integer field, "tier_desired". If tier_desired is not null, all blocks referenced by the collection should be stored at the given tier (or better). 
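
For example, a client could set the proposed field through the ordinary collections update API. A hypothetical sketch follows: the endpoint shape mirrors existing Arvados REST conventions, but tier_desired is the new field proposed here, not an existing attribute:

<pre><code class="go">
package apiexample

import (
	"bytes"
	"fmt"
	"net/http"
)

// setDesiredTier updates the proposed tier_desired field on a
// collection via PUT /arvados/v1/collections/{uuid}.
func setDesiredTier(apiHost, token, uuid string, tier int) error {
	body := bytes.NewBufferString(fmt.Sprintf(`{"collection":{"tier_desired":%d}}`, tier))
	req, err := http.NewRequest("PUT", "https://"+apiHost+"/arvados/v1/collections/"+uuid, body)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "OAuth2 "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("API server returned %s", resp.Status)
	}
	return nil
}
</code></pre>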

 Keep-balance tracks the maximum allowed tier for each block, and moves blocks between tiers as needed. The strategy is similar to fixing rendezvous probe order: if a block is stored at the wrong tier, a new copy is made at the correct tier; then, in a subsequent balancing operation, the redundant copy is detected and deleted. _This increases the danger of data loss due to races between concurrent keep-balance processes. Keep-balance should have a reliable way to detect/avoid concurrent balancing operations._ 
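
A minimal sketch of the per-block bookkeeping this implies, with hypothetical Collection and tier-map types (this is not actual keep-balance code):

<pre><code class="go">
package balanceexample

// Collection pairs a desired tier with the block hashes its manifest
// references (both fields are illustrative simplifications).
type Collection struct {
	TierDesired int      // 0 stands in for "null": use the site default
	Blocks      []string
}

const defaultTier = 2

// maxAllowedTiers returns, for each block, the strictest requirement:
// the minimum (best) tier desired by any collection referencing it.
func maxAllowedTiers(colls []Collection) map[string]int {
	allowed := map[string]int{}
	for _, c := range colls {
		tier := c.TierDesired
		if tier == 0 {
			tier = defaultTier
		}
		for _, hash := range c.Blocks {
			if cur, ok := allowed[hash]; !ok || tier < cur {
				allowed[hash] = tier
			}
		}
	}
	return allowed
}

// needsNewCopy reports whether a block whose best existing copy is at
// storedTier violates its requirement; if so, keep-balance makes a new
// copy at the correct tier, and the redundant copy is deleted in a
// later balancing operation.
func needsNewCopy(storedTier, allowedTier int) bool {
	return storedTier > allowedTier
}
</code></pre>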

 h2. Reporting 

Each collection has an integer field, "tier_confirmed", and a timestamp field, "tier_confirmed_at". These indicate the most recently confirmed state of the stored blocks: if tier_confirmed=2 then (as of tier_confirmed_at) every block in the collection was stored at tier 2 or better.
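
A sketch of how keep-balance might derive these values after a balancing pass; worstStoredTier and its inputs are illustrative assumptions, not existing code:

<pre><code class="go">
package main

import (
	"fmt"
	"time"
)

// worstStoredTier returns the largest (worst) tier at which any of a
// collection's blocks is stored; the collection can only be confirmed
// at the tier of its worst-placed block.
func worstStoredTier(blockTiers map[string]int) int {
	worst := 0
	for _, tier := range blockTiers {
		if tier > worst {
			worst = tier
		}
	}
	return worst
}

func main() {
	// Hypothetical per-block stored tiers for one collection.
	blockTiers := map[string]int{"blockA": 1, "blockB": 2}
	tierConfirmed := worstStoredTier(blockTiers)
	tierConfirmedAt := time.Now().UTC()
	// keep-balance would write these back to the collection record.
	fmt.Printf("tier_confirmed=%d tier_confirmed_at=%s\n",
		tierConfirmed, tierConfirmedAt.Format(time.RFC3339))
}
</code></pre>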