Tom Clegg, 11/13/2024 08:03 PM
h1. Efficient block packing for small WebDAV uploads

Background: Currently, when uploading a large number of small files to a collection via WebDAV, each file is stored as a separate block in Keep, which is inefficient (in terms of storage backend performance/cost, manifest size, access latency, and garbage collection performance).

Proposal: In this scenario, keep-web should occasionally repack previously uploaded files, such that after a large number of small uploads, a collection asymptotically approaches an average block size of at least 32 MiB.

Implementation outline:

##22319
* caller provides a map of {old-smallblock-segment → new-bigblock-segment} (see the sketch after this list)
* can be combined with replace_files and/or a caller-provided manifest_text
* changes are applied after replace_files (i.e., the mapping is applied to segments that appear in the caller-provided manifest_text as well as in the existing manifest)
* if any of the provided old-smallblock-segments are not referenced in the current version, don't apply any of the changes that remap to the same new-bigblock-segment (this avoids a situation where two callers concurrently compute similar-but-different repackings, the first one applies cleanly, and the second one adds a large block that is mostly unreferenced)

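For illustration only, a minimal sketch of how a caller might send the segment remapping in a collection update via the Go SDK. The @replace_segments@ parameter name comes from the Sync note below; the "locator offset length" key format and the exact request shape are assumptions, not a settled API:

<pre><code class="go">
package example

import (
	"fmt"

	"git.arvados.org/arvados.git/sdk/go/arvados"
)

// repackCollection asks the API server to remap previously written
// small-block segments onto a single new big block. The "replace_segments"
// parameter and the "locator offset length" key format are assumptions
// sketched from the notes above, not a finalized API.
func repackCollection(client *arvados.Client, collUUID string) error {
	var updated arvados.Collection
	err := client.RequestAndDecode(&updated, "PATCH", "arvados/v1/collections/"+collUUID, nil, map[string]interface{}{
		// A replace_files and/or manifest_text update could be combined
		// with the remapping in the same request.
		"collection": map[string]interface{}{},
		// old-smallblock-segment -> new-bigblock-segment
		"replace_segments": map[string]string{
			"aaaa...+500 0 500":   "cccc...+1500 0 500",
			"bbbb...+1000 0 1000": "cccc...+1500 500 1000",
		},
	})
	if err != nil {
		return err
	}
	fmt.Println("updated collection PDH:", updated.PortableDataHash)
	return nil
}
</code></pre>
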
##22320

* filehandle method only needs to be supported when the target is a dirnode (repacking a single file could be useful, e.g., for a fuse driver, but is not needed for webdav); a possible option struct and method signature are sketched after this list
* traverse the dir/filesystem, finding opportunities to merge small (<32 MiB) blocks into larger (>=32 MiB) blocks
* optionally (opts.Underutilized) merge segments from underutilized blocks into [larger] fully-utilized blocks -- note this shouldn't be used for single-directory repacking, because the unreferenced portions of blocks might be referenced by files elsewhere in the collection
* optionally (opts.CachedOnly) skip blocks that aren't in the local cache; see diskCacheProber below
* optionally (opts.Full) generate an optimal repacking based on the assumption that no further files will be written (we might postpone implementing this at first, since it's not needed for webdav)
* optionally (opts.DryRun) don't apply changes, just report what would happen (for tests, and possibly a future Workbench feature that hints when an explicit repack is advisable)
* remember which segments got remapped, so the changes can be pushed later; see Sync below
* repacking algorithm performance goal: reasonable amortized cost & a reasonably well-packed collection when called after each file in a set of sequential/concurrent small file writes
** e.g., after writing 64 100-byte files, there should be fewer than 64 blocks, but the first file's data should have been rewritten far fewer than 64 times
** the test suite should confirm decent performance in some pathological cases

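A rough sketch of what the option set and method signature could look like in the Go SDK. The names (@Repack@, @RepackOptions@, the field names, and the stand-in @filehandle@ type) are assumptions for illustration; only the behaviors listed above are the actual requirements:

<pre><code class="go">
package sketch

import "context"

// RepackOptions collects the optional behaviors described above.
// Field names mirror the opts.* references in the list and are illustrative.
type RepackOptions struct {
	Underutilized bool // also move segments out of partially referenced blocks
	CachedOnly    bool // only touch blocks already present in the local disk cache
	Full          bool // assume no further writes; compute an optimal packing
	DryRun        bool // report what would change without writing anything
}

// filehandle stands in for the collection filesystem's file handle type.
type filehandle struct{}

// Repack merges small (<32 MiB) blocks referenced under this directory into
// larger (>=32 MiB) blocks, records the old→new segment mapping so a later
// Sync can push it, and returns how many segments were (or, with DryRun,
// would be) remapped. Only meaningful when the handle refers to a dirnode.
func (f *filehandle) Repack(ctx context.Context, opts RepackOptions) (remapped int, err error) {
	// ...traverse the tree, choose segments to move, write new blocks...
	return 0, nil
}
</code></pre>
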
Add a @diskCacheProber@ type that allows the caller to efficiently check whether a block is in the local cache:

* copy an existing DiskCache and change its KeepGateway to a gateway that fails all reads/writes
* to check whether a block is in the cache, ask the DiskCache to read 0 bytes
* this avoids the cost of transferring any data or connecting to a backend
* edge case: this will also return true for a block that is currently being read from a backend into the cache -- this is arguably not really "in cache", and reading the data could still be slow or return a backend error; however, it should be OK to treat it as available for repacking purposes

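A minimal sketch of the idea, assuming the cache exposes a ranged-read call along the lines of @ReadAt(locator, buf, offset)@; the interface and method names here are placeholders rather than the SDK's exact API:

<pre><code class="go">
package sketch

// blockCache is the one capability the prober needs: a ranged read.
// In practice this would be a copy of an existing DiskCache whose backend
// KeepGateway has been swapped for one that always fails, so nothing is
// ever fetched from (or written to) a real backend.
type blockCache interface {
	ReadAt(locator string, dst []byte, offset int) (int, error)
}

type diskCacheProber struct {
	cache blockCache
}

// HasBlock reports whether the block is already in the local cache.
// Reading zero bytes transfers no data; because the backend gateway is
// configured to fail, a nil error can only mean the cache already holds
// the block. (It may also succeed for a block that is currently being
// fetched into the cache, which is acceptable for repacking purposes.)
func (p *diskCacheProber) HasBlock(locator string) bool {
	_, err := p.cache.ReadAt(locator, nil, 0)
	return err == nil
}
</code></pre>
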
Update @(collectionFileSystem)Sync()@ to invoke @replace_segments@ if the collection has been repacked.

##22321

* when handling a PUT request, first write the file (using replace_files); then call Repack (with CachedOnly: true) on the updated collection; then call Sync if anything was repacked (see the sketch after this list)
* this ensures the upload is preserved even if Repack/Sync goes badly, e.g., in a race with another update
* if another request is already running Sync on the same collection UUID, just skip it this time
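
A sketch of the intended order of operations in keep-web's PUT handler, reusing the hypothetical @RepackOptions@ from the earlier sketch; @writeAndCommit@ and @syncInProgress@ are placeholder helpers, not existing keep-web functions:

<pre><code class="go">
package sketch

import (
	"context"
	"io"
)

// collectionFS captures just the operations this sketch needs; in keep-web
// these would come from the Go SDK collection filesystem (opened at the
// collection root, a dirnode) plus the proposed Repack method.
type collectionFS interface {
	Repack(ctx context.Context, opts RepackOptions) (int, error)
	Sync() error
}

// Placeholder helpers: committing the upload via replace_files, and
// tracking whether another request is already syncing this collection.
func writeAndCommit(ctx context.Context, fs collectionFS, path string, data io.Reader) error {
	return nil
}
func syncInProgress(collUUID string) bool { return false }

// handlePut shows the proposed order of operations for a small-file PUT.
func handlePut(ctx context.Context, collUUID string, fs collectionFS, path string, data io.Reader) error {
	// 1. Write and commit the file first (replace_files), so the upload
	//    survives even if repacking or syncing goes badly.
	if err := writeAndCommit(ctx, fs, path, data); err != nil {
		return err
	}
	// 2. Opportunistically repack, using only locally cached blocks.
	remapped, err := fs.Repack(ctx, RepackOptions{CachedOnly: true})
	if err != nil || remapped == 0 {
		return nil // upload already succeeded; repack issues are non-fatal here
	}
	// 3. Push the remapping (replace_segments) unless another request is
	//    already running Sync on the same collection UUID.
	if syncInProgress(collUUID) {
		return nil
	}
	return fs.Sync()
}
</code></pre>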