Expiring collections¶
Overview¶
In addition to the two obvious states ("preserved indefinitely" and "irreversibly deleted"), Arvados can offer some more subtle persistence states for collections:
- An expiring collection (aka temporary, transient, scratch) has an expiry ("trash_at") time in the future, at which time it automatically moves to the trash.
- A trashed collection is not visible or readable through normal data access APIs, but (until its "delete_at" time is reached) can be un-trashed by users.
Significance of trash_at and delete_at¶
Each collection has a trash_at field and a delete_at field.
trash_at | delete_at | get (pdh or uuid) | get?include_trash=true | list | list?include_trash=true | can be modified |
--- | --- | --- | --- | --- | --- | --- |
null | null | yes | yes | yes | yes | yes |
future | future | yes† | yes† | yes† | yes† | yes |
past | future | no | yes‡ | no | yes | only trash_at and delete_at |
past | past | no | no | no | no | no |
† If trash_at is not null, any signatures given in a get/list response must expire before trash_at.
† Clients (notably arv-mount and Workbench) will need updates to behave appropriately when collections have a "trash" timer set -- e.g., use trash_at filters when requesting collection lists, or show visual cues for transient collections. Tools like "arv-get" and "arv keep ls" should work as usual on transient collections, although in interactive settings a warning message might be appropriate.
‡ No signatures should be given in get/list responses.
"Trashed, unrecoverable" collections are effectively deleted. Whether/when the system deletes the rows from the underlying database table is an implementation detail invisible to clients.
Updating trash_at and delete_at¶
Values of trash_at and delete_at are constrained (see the sketch after this list):
- If one is null, the other must be null too.
- 0 <= (delete_at - trash_at) <= api_config.max_trash_time
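A minimal sketch of these checks (illustrative only, not the API server's actual Rails validation; max_trash_time stands in for api_config.max_trash_time):

```python
# Illustrative check of the constraints above; trash_at/delete_at are
# datetime objects or None, max_trash_time is the site maximum in seconds.
def validate_trash_times(trash_at, delete_at, max_trash_time):
    if (trash_at is None) != (delete_at is None):
        raise ValueError("trash_at and delete_at must both be null or both be set")
    if trash_at is not None:
        interval = (delete_at - trash_at).total_seconds()
        if not 0 <= interval <= max_trash_time:
            raise ValueError("delete_at must fall between trash_at and trash_at + max_trash_time")
```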
The arvados.v1.collections.delete API should set trash_at to now instead of deleting the collection outright.
A client can also explicitly set/clear trash_at in arvados.v1.collections.create or arvados.v1.collections.update. The given trash_at, if not null, can be any valid timestamp. If the client provides a timestamp in the past, the server should transparently change it to the current time: this makes more sense in the logs, and it ensures that un-trash remains possible for the duration indicated by the site-wide trash time (api_config.max_trash_time).
On a trashed (expired) collection, setting trash_at and delete_at to null (or to a future time) accomplishes "un-trash".
It is not possible to un-trash (or modify in any other way) a collection whose delete_at time has passed: an update request returns 404.
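A sketch of the resulting client-side trash/un-trash cycle with the Python SDK (the UUID is a placeholder; exact request-body nesting may differ):

```python
import arvados

api = arvados.api('v1')
uuid = 'zzzzz-4zz18-xxxxxxxxxxxxxxx'  # placeholder collection UUID

# "delete" now schedules trashing: the server sets trash_at to the current
# time (and, per the constraints above, a matching delete_at).
api.collections().delete(uuid=uuid).execute()

# Un-trash by clearing both timestamps, which works until delete_at passes;
# after that, this update returns 404.
api.collections().update(
    uuid=uuid,
    body={'collection': {'trash_at': None, 'delete_at': None}}).execute()
```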
Unique name index¶
After trashing a collection named "foo", it must be possible to create a new collection named "foo" in the same project without a name collision.
Two possible approaches:
- When expiring a collection, stash the original name somewhere and change its name to something unique (e.g., incorporating uuid and timestamp); see the sketch after this list.
- Convert the database index to a partial index, so names only have to be unique among non-deleted items. (Disadvantage: arv-mount will not (always) be able to use the "name" field of an expiring collection as its filename in a trash directory.)
- It may help here to add the "ensure_unique_name" feature to the "update" method (currently it is only available in "create").
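The first approach might look like the following sketch (the renaming scheme and helper name are hypothetical, for illustration only):

```python
import time

def trashed_name(original_name, collection_uuid):
    # Hypothetical rename applied when a collection is trashed: stash the
    # original name elsewhere (e.g. in properties) and substitute a unique
    # placeholder so the (owner_uuid, name) uniqueness index stays satisfied.
    # e.g. "foo" -> "foo (trashed zzzzz-4zz18-xxxxxxxxxxxxxxx @ 1482528000)"
    return '%s (trashed %s @ %d)' % (original_name, collection_uuid, int(time.time()))
```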
User interface considerations¶
Workbench should indicate the difference between transient and permanent collections (e.g., make a visual distinction between null and non-null trash_at).
Workbench and arv-mount should provide a way to find and recover trashed collections.
Garbage collection (keep-balance) considerations¶
It should not be possible to do a series of collection operations that results in "lost" blocks. Example (sketched in code after this list):
- Get old collection A (with signed manifest)
- Delete old collection A
- (garbage collector runs now)
- Create new collection B (using the signed manifest from collection A)
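Expressed with the Python SDK, the problematic sequence looks roughly like this (UUIDs are placeholders; this is the pattern the signature-expiry rules above are meant to make safe, not a recommended workflow):

```python
import arvados

api = arvados.api('v1')

# 1. Read old collection A; its manifest_text contains signed block locators.
a = api.collections().get(uuid='zzzzz-4zz18-aaaaaaaaaaaaaaa').execute()
signed_manifest = a['manifest_text']

# 2. Trash collection A.
api.collections().delete(uuid=a['uuid']).execute()

# 3. (If garbage collection ran here and removed A's now-unreferenced blocks...)

# 4. ...collection B would reference data that no longer exists. The rule
#    that signatures must expire before trash_at prevents this: the blocks
#    stay undeletable until the signatures obtained in step 1 have expired.
b = api.collections().create(
    body={'collection': {'name': 'collection B',
                         'manifest_text': signed_manifest}}).execute()
```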
Background: race window¶
Keep's garbage collection strategy relies on a "race window": new unreferenced data cannot be deleted, because there is necessarily a time interval between getting a signature from a Keep server (by writing the data) and using that signature to add the block to a collection.
A timestamp signature from a keepstore server means "this data will not be deleted until the given timestamp": before giving out a signature, keepstore updates the mtime of the block on disk, and (even if asked by datamanager/keep-balance) refuses to delete blocks that are too new. This means the API server can safely store a collection without checking whether the referenced data blocks actually exist: if the timestamps are current, the blocks can't have been garbage-collected.
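Restated as a sketch (illustrative Python, not keepstore's actual code; the TTL value is only an example):

```python
import time

BLOB_SIGNATURE_TTL = 14 * 24 * 3600  # example blobSignatureTTL, in seconds

def block_may_be_deleted(block_mtime, now=None):
    # A block whose mtime is within blobSignatureTTL of now must not be
    # deleted: a still-valid signature may have been handed out for it.
    now = time.time() if now is None else now
    return block_mtime < now - BLOB_SIGNATURE_TTL
```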
The trash_at/delete_at behavior described here should help the API server offer a similar guarantee ("a signature expiring at time T means the data will not be deleted until T").
Collection modifications vs. consistency¶
(TODO: update to reflect above definitions of trash_at and delete_at)
In order to guarantee "permission signature timestamp T == no garbage collection until T", garbage collection must take into account blocks that were recently referenced by collections.
Aside: This guarantee is fundamentally at odds with an important admin feature, Expedited delete: an admin should have a mechanism to accelerate garbage collection. Ideally, this action can be restricted to the blocks from a specific deleted collection.
Datamanager/keep-balance can use arvados.v1.logs.index to get older versions of each manifest that has been changed or deleted recently (<= blobSignatureTTL seconds ago).
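A sketch of such a query with the Python SDK (the filter values, the UUID pattern, and the assumption that old manifests can be recovered from each log entry's properties are unverified here):

```python
import arvados
import datetime

api = arvados.api('v1')
blob_signature_ttl = 14 * 24 * 3600  # example value, in seconds

# Collections changed or deleted within the last blobSignatureTTL seconds;
# their previous manifests (from the log entries) still pin their blocks.
cutoff = (datetime.datetime.utcnow() -
          datetime.timedelta(seconds=blob_signature_ttl)).strftime('%Y-%m-%dT%H:%M:%SZ')
recent = api.logs().list(filters=[
    ['object_uuid', 'like', '%-4zz18-%'],
    ['event_type', 'in', ['update', 'delete']],
    ['created_at', '>=', cutoff],
]).execute()
```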
In order to accomplish "expedited delete" (without backdating or deleting log table entries, which would confuse other uses of event logs) the admin tool will need to do a focused garbage collection operation itself: it won't be enough to expire/delete the collection record right away. The most powerful/immediate variations of "expedited delete" will need to work this way anyway, though, in order to bypass the usual "do not delete blocks newer than permission TTL" restriction for a specific set of affected blocks.
Related: replication_desired=0¶
A collection with replication_desired=0 does not protect its data from garbage collection. In this sense, replication_desired=0 is similar to now>delete_at.
However, replication_desired=0 does not mean the collection record itself should be hidden. It means the collection metadata (filenames, sizes, data hashes, collection PDH) are valuable enough to keep on hand, but the data itself isn't. For example, if we delete intermediate data generated by a workflow, and find later that the same workflow now produces a different result, it would be helpful to see which of the intermediate outputs differed.
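For example, a client that wants to keep the record but release the data might do something like this (placeholder UUID):

```python
import arvados

api = arvados.api('v1')

# Keep the collection metadata (filenames, sizes, hashes, PDH) but allow the
# underlying blocks to be garbage-collected.
api.collections().update(
    uuid='zzzzz-4zz18-xxxxxxxxxxxxxxx',  # placeholder UUID
    body={'collection': {'replication_desired': 0}}).execute()
```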
TBD¶
When deleting a project that contains expiring or persistent collections, presumably the persistent collections should be trashed, but what should their new owner_uuid be?
- Proposed solution: projects themselves also need a trash_at field that works the same way.