Keep storage tiers » History » Version 13
Tom Morris, 03/08/2019 08:49 PM
Mark more strongly as obsolete and update pointer to current version
h1. Keep storage tiers - OBSOLETE

*WARNING* For historical reasons only, superseded by [[Keep Storage Classes]] and never implemented *WARNING*

-Typically, an Arvados cluster has access to multiple storage devices with different cost/performance trade-offs.-
Examples:

* Local SSD
* Local HDD
* Object storage service provided by cloud vendor
* Slower or less reliable object storage service provided by same cloud vendor
-Users should be able to specify a minimum storage tier for each collection. Arvados should ensure that every data block referenced by a collection is stored at the specified tier _or better_.-

-The cluster administrator should be able to specify a default tier, and assign a tier number to each storage device.-

-It should be possible to configure multiple storage devices at the same tier: for example, this allows blocks to be distributed more or less uniformly across several (equivalent) cloud storage buckets for performance reasons.-
h1. Implementation (proposal)

Storage tier features (and implementation) are similar to replication-level features.

h2. Configuration
Each Keep volume has an integer parameter, "tier". Interpretation is site-specific, except that when M≤N, tier M can satisfy a requirement for tier N, i.e., smaller tier numbers are better. Some volume drivers can discover the tier number for a volume by inspecting the underlying storage device (e.g., a cloud storage bucket), but in all cases a sysadmin can specify a value.

There is a site-wide default tier number which is used for collections that do not specify a desired tier. Typically this is tier 1.
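Since this proposal was never implemented, there is no real configuration syntax for tiers; the following Go sketch only illustrates the intended semantics. The names used (ClusterConfig, VolumeConfig, DefaultTier, tierSatisfies) and the example driver labels are assumptions for illustration, not actual keepstore configuration keys.

<pre><code class="go">
// Hypothetical sketch only: this proposal was never implemented, so none of
// these names correspond to real keepstore configuration keys.
package main

import "fmt"

// VolumeConfig models one Keep volume with an admin-assigned tier number.
type VolumeConfig struct {
	Driver string
	Tier   int // smaller is better; interpretation is site-specific
}

// ClusterConfig holds the site-wide default tier, used for collections
// that do not specify a desired tier.
type ClusterConfig struct {
	DefaultTier int
	Volumes     []VolumeConfig
}

// tierSatisfies reports whether data stored at tier "have" meets a
// requirement for tier "want": tier M satisfies tier N when M <= N.
func tierSatisfies(have, want int) bool {
	return have <= want
}

func main() {
	cfg := ClusterConfig{
		DefaultTier: 1,
		Volumes: []VolumeConfig{
			{Driver: "Directory", Tier: 1}, // e.g., local SSD
			{Driver: "S3", Tier: 2},        // e.g., slower/cheaper bucket
		},
	}
	for _, v := range cfg.Volumes {
		fmt.Printf("%s volume (tier %d) can hold tier-2 data: %v\n",
			v.Driver, v.Tier, tierSatisfies(v.Tier, 2))
	}
}
</code></pre>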
h2. Storing data at a non-default tier

Tools that write data to Keep should allow the caller to specify a storage tier. The desired tier is sent to Keep services as a header (X-Keep-Desired-Tier) with each write request. Keep services return an error when the data cannot be written to the requested tier (or better).
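A minimal sketch of how a keepstore write handler might honor this header. Since the header was never implemented, the handler shape, status codes, and example tier values below are assumptions.

<pre><code class="go">
// Hypothetical sketch of a keepstore PUT handler honoring the proposed
// X-Keep-Desired-Tier header; not actual keepstore code.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
)

// Tiers of the volumes mounted on this keepstore (assumed example values:
// this server only has slower volumes, so tier-1 requests are rejected).
var volumeTiers = []int{2, 3}

const defaultTier = 1 // assumed site-wide default

func putBlock(w http.ResponseWriter, req *http.Request) {
	want := defaultTier
	if h := req.Header.Get("X-Keep-Desired-Tier"); h != "" {
		t, err := strconv.Atoi(h)
		if err != nil {
			http.Error(w, "invalid X-Keep-Desired-Tier", http.StatusBadRequest)
			return
		}
		want = t
	}
	// Accept the write only if some local volume satisfies the requested
	// tier (or better, i.e. a smaller tier number).
	for _, have := range volumeTiers {
		if have <= want {
			// ...write the block data to this volume here...
			fmt.Fprintf(w, "stored at tier %d\n", have)
			return
		}
	}
	http.Error(w, "cannot store data at requested tier or better", http.StatusServiceUnavailable)
}

func main() {
	http.HandleFunc("/", putBlock)
	log.Fatal(http.ListenAndServe(":25107", nil))
}
</code></pre>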
h2. Moving data between tiers

Each collection has an integer field, "tier_desired". If tier_desired is not null, all blocks referenced by the collection should be stored at the given tier (or better).
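A minimal sketch of the proposed attribute as it might appear in an API response, assuming a nullable integer column named tier_desired; the Go field name and JSON key are assumptions based on that column name.

<pre><code class="go">
// Sketch of the proposed (never implemented) collections.tier_desired
// attribute; names are assumptions for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

// Collection carries an optional desired storage tier; a null value
// means "use the site-wide default tier".
type Collection struct {
	UUID        string `json:"uuid"`
	TierDesired *int   `json:"tier_desired"` // null => site default
}

func main() {
	tier := 2
	c := Collection{UUID: "zzzzz-4zz18-abcdefghijklmno", TierDesired: &tier}
	body, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"uuid":"zzzzz-4zz18-abcdefghijklmno","tier_desired":2}
}
</code></pre>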
Keep-balance tracks the maximum allowed tier for each block, and moves blocks between tiers as needed. The strategy is similar to fixing rendezvous probe order: if a block is stored at the wrong tier, a new copy is made at the correct tier; then, in a subsequent balancing operation, the redundant copy is detected and deleted. _This increases the danger of data loss due to races between concurrent keep-balance processes. Keep-balance should have a reliable way to detect/avoid concurrent balancing operations._
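A minimal sketch of the pull-first strategy described above, assuming keep-balance knows, for each block, the tiers of its stored replicas and the most demanding tier_desired among referencing collections; all names and example values are illustrative.

<pre><code class="go">
// Illustrative sketch of the proposed balancing rule, not actual
// keep-balance code: take the most demanding (smallest) tier_desired among
// the collections referencing a block, pull a new copy if no existing
// replica satisfies it, and leave deletion of the now-redundant copy to a
// later balancing pass.
package main

import "fmt"

type blockState struct {
	replicaTiers []int // tier of each mount currently holding a replica
	wantTier     int   // maximum allowed tier, i.e. min tier_desired of referencing collections
}

// plan prints a pull decision for every block that has no replica at the
// required tier or better. Trash decisions for redundant replicas are
// deliberately deferred to a subsequent run.
func plan(blocks map[string]blockState) {
	for hash, st := range blocks {
		satisfied := false
		for _, t := range st.replicaTiers {
			if t <= st.wantTier {
				satisfied = true
				break
			}
		}
		if !satisfied {
			fmt.Printf("pull %s to a mount with tier <= %d\n", hash, st.wantTier)
		}
	}
}

func main() {
	plan(map[string]blockState{
		"acbd18db4cc2f85cedef654fccc4a4d8": {replicaTiers: []int{3}, wantTier: 2}, // needs a better copy
		"37b51d194a7513e45b56f6524f2d51f2": {replicaTiers: []int{1}, wantTier: 2}, // already satisfied
	})
}
</code></pre>

The two-pass approach (pull first, trash later) matches how keep-balance already fixes rendezvous probe order, at the cost of the concurrency hazard noted above.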
(Note: the following section uses the term "mount" to mean what the keepstore code base calls a "volume": i.e., an attachment of a storage device to a keepstore process.)

Keepstore provides APIs that let keep-balance see which blocks are stored on which mount points, and copy/delete blocks to/from specific mount points (see the sketch after this list):
* Get current mounts
* List blocks on a specified mount
* Delete a block from a specified mount
* Pull a block to a specified mount
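A sketch of what these requests might look like; the paths, the mount identifier format, and the use of pull/trash lists addressed to specific mounts are assumptions for illustration, not a confirmed keepstore API.

<pre><code class="go">
// Sketch of mount-oriented requests keep-balance might issue; endpoint
// paths and the mount identifier are hypothetical.
package main

import "fmt"

type endpoint struct {
	method, path, purpose string
}

func main() {
	mountID := "example-mount-0" // hypothetical mount identifier
	endpoints := []endpoint{
		{"GET", "/mounts", "get current mounts"},
		{"GET", "/mounts/" + mountID + "/blocks", "list blocks stored on a specified mount"},
		{"PUT", "/trash", "delete listed blocks from specified mounts"},
		{"PUT", "/pull", "pull listed blocks onto specified mounts"},
	}
	for _, e := range endpoints {
		fmt.Printf("%-4s %-32s # %s\n", e.method, e.path, e.purpose)
	}
}
</code></pre>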
h2. Reporting

After each rebalance operation, keep-balance logs a summary of discrepancies between the actual and desired allocation of blocks to storage tiers. Examples (see the sketch after this list):
* N blocks (M bytes) are stored at tier 3 but are referenced by collections at tier 2.
* N blocks (M bytes) are stored at tier 1 but are not referenced by any collections at tier T<2.
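A sketch of how such a summary might be produced; the struct, the example numbers, and the wording are illustrative only, not actual keep-balance output.

<pre><code class="go">
// Sketch of an end-of-run tier summary; all values are illustrative.
package main

import "fmt"

type tierMismatch struct {
	storedTier, wantTier int
	blocks, bytes        int64
}

func main() {
	report := []tierMismatch{
		{storedTier: 3, wantTier: 2, blocks: 1200, bytes: 64 << 30}, // stored worse than desired
		{storedTier: 1, wantTier: 2, blocks: 300, bytes: 9 << 30},   // stored better than any referencing collection needs
	}
	for _, m := range report {
		if m.storedTier > m.wantTier {
			fmt.Printf("%d blocks (%d bytes) are stored at tier %d but are referenced by collections at tier %d\n",
				m.blocks, m.bytes, m.storedTier, m.wantTier)
		} else {
			fmt.Printf("%d blocks (%d bytes) are stored at tier %d but are not referenced by any collections at tier T<%d\n",
				m.blocks, m.bytes, m.storedTier, m.wantTier)
		}
	}
}
</code></pre>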
h1. Development tasks

* #11644 keepstore: mount-oriented APIs
* #11645 keepstore: configurable tier per volume/mount
* #11646 keepstore: support x-keep-desired-tier header
* apiserver: collections.tier_desired column, site default tier
* keep-balance: report storage tier equilibrium