Efficient block packing for small WebDAV uploads
Background: Currently, when uploading a large number of small files to a collection via WebDAV, each file is stored as a separate block in Keep, which is inefficient (in terms of storage backend performance/cost, manifest size, access latency, and garbage collection performance).
Proposal: In this scenario, keep-web should occasionally repack previously uploaded files, such that after a large number of small uploads, a collection will asymptotically approach an average block size of at least 32 MiB.
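For a concrete (purely illustrative) picture, here is the kind of manifest that many small uploads produce, followed by a repacked equivalent; the locators are abbreviated placeholders.

Before repacking (one block per uploaded file):

```
. aaaa...+100 bbbb...+150 cccc...+200 0:100:f1 100:150:f2 250:200:f3
```

After repacking (one merged block, same file contents):

```
. dddd...+450 0:100:f1 100:150:f2 250:200:f3
```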
Implementation outline:
Feature #22319: Add replace_segments feature to controller's CollectionUpdate API
- caller provides a map of {old-smallblock-segment → new-bigblock-segment}; see the request sketch after this list
- can be combined with replace_files and/or caller-provided manifest_text
- changes are applied after replace_files (i.e., the mapping is applied to segments that appear in the caller-provided manifest_text as well as the existing manifest)
- if any of the provided old-smallblock-segments are not referenced in the current version, don't apply any of the changes that remap to the same new-bigblock-segment (this avoids a situation where two callers concurrently compute similar-but-different repackings: the first applies cleanly, and the second would otherwise add a large block that is mostly unreferenced)
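A minimal sketch of such an update from the Go SDK, assuming the new parameter sits alongside replace_files in the update parameters; the segment-string notation ("locator:offset:length") and everything except the replace_segments parameter name are illustrative placeholders, not a confirmed wire format:

```go
package sketch

import "git.arvados.org/arvados.git/sdk/go/arvados"

// updateWithReplaceSegments sketches a CollectionUpdate call that
// remaps previously written small-block segments onto a new big
// block. Segment notation here is hypothetical.
func updateWithReplaceSegments(client *arvados.Client, uuid string) (arvados.Collection, error) {
	var updated arvados.Collection
	err := client.RequestAndDecode(&updated, "PATCH",
		"arvados/v1/collections/"+uuid, nil,
		map[string]interface{}{
			"replace_segments": map[string]string{
				// old small-block segment -> same bytes within the new big block
				"aaaa...+100:0:100": "dddd...+33554432:0:100",
				"bbbb...+200:0:200": "dddd...+33554432:100:200",
			},
		})
	return updated, err
}
```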
Add a Repack method to collectionFileSystem / filehandle (a sketch of the options follows this list):
- filehandle method only needs to be supported when the target is a dirnode (repacking a single file could be useful, e.g., for the FUSE driver, but is not needed for WebDAV)
- traverse dir/filesystem, finding opportunities to merge small (<32 MiB) blocks into larger (>=32 MiB) blocks
- optionally (opts.Underutilized) merge segments from underutilized blocks into [larger] fully-utilized blocks -- note this shouldn't be used for single-directory repacking, because the unreferenced portions of blocks might be referenced by files elsewhere in the collection
- optionally (opts.CachedOnly) skip blocks that aren't in the local cache; see diskCacheProber below
- optionally (opts.Full) generate optimal repacking based on assumption that no further files will be written (we might postpone implementing this at first, since it's not needed for webdav)
- optionally (opts.DryRun) don't apply changes, just report what would happen (for tests and possibly a future Workbench feature that hints when explicit repack is advisable)
- remember which segments got remapped, so the changes can be pushed later; see Sync below
- repacking algorithm performance goal: reasonable amortized cost & reasonably well-packed collection when called after each file in a set of sequential/concurrent small file writes
- e.g., after writing 64 100-byte files, there should be fewer than 64 blocks, but the first file's data should have been rewritten far fewer than 64 times
- test suite should confirm decent performance in some pathological cases
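One standard way to meet the amortized-cost goal is power-of-two style merging (as in LSM trees), which rewrites each byte O(log n) times across n small writes; for 64 files that is roughly 6 rewrites of the first file's data rather than 64. The option names above suggest an API shaped roughly like the following sketch; everything not named in this outline is an assumption:

```go
package sketch

import "context"

// RepackOptions collects the option flags named above; the struct and
// the interface below sketch the shape implied by this outline, not
// the actual implementation.
type RepackOptions struct {
	Underutilized bool // also merge segments out of underutilized blocks
	CachedOnly    bool // skip blocks that aren't in the local disk cache
	Full          bool // assume no further writes; produce optimal packing
	DryRun        bool // report what would change without applying it
}

// repacker stands in for collectionFileSystem / a dirnode filehandle.
type repacker interface {
	// Repack merges small (<32 MiB) blocks into larger (>=32 MiB)
	// ones and returns how many segments were remapped.
	Repack(ctx context.Context, opts RepackOptions) (remapped int, err error)
}
```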
diskCacheProber: a type that allows the caller to efficiently check whether a block is in the local cache (a sketch follows this list)
- copy an existing DiskCache, with its KeepGateway changed to a gateway that fails all reads/writes
- to check whether a block is in cache, ask the DiskCache to read 0 bytes
- avoids the cost of transferring any data or connecting to a backend
- edge case: this will also return true for a block that is currently being read from a backend into the cache. That is arguably not really "in cache", and reading the data could still be slow or return a backend error; however, it should be OK to treat such a block as available for repacking purposes.
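A self-contained sketch of the idea, using stand-in types rather than the real SDK's DiskCache/KeepGateway (whose exact interfaces are not reproduced here); with the real DiskCache, the prober would hold a struct copy whose KeepGateway field has been replaced:

```go
package sketch

import "errors"

// blockGateway is a minimal stand-in for the SDK's KeepGateway; the
// only behavior needed here is ReadAt.
type blockGateway interface {
	ReadAt(locator string, dst []byte, offset int) (int, error)
}

// failingGateway always fails, so a probe can never fall through to a
// real backend.
type failingGateway struct{}

func (failingGateway) ReadAt(string, []byte, int) (int, error) {
	return 0, errors.New("not in local cache")
}

// diskCacheProber wraps a copy of an existing disk cache whose
// backend gateway has been swapped for failingGateway.
type diskCacheProber struct {
	cache blockGateway // cache copy backed by failingGateway
}

// HasBlock reports whether the block can be served from the local
// cache: a zero-byte ReadAt succeeds only on a cache hit, and
// transfers no data either way.
func (p *diskCacheProber) HasBlock(locator string) bool {
	_, err := p.cache.ReadAt(locator, nil, 0)
	return err == nil
}
```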
Update (collectionFileSystem)Sync() to invoke replace_segments if the collection has been repacked (a sketch of the PUT-handling flow follows this list):
- when handling a PUT request, first write the file (using replace_files); then call Repack (with CachedOnly: true) on the updated collection; then call Sync if anything was repacked
- this ensures the upload is preserved even if Repack/Sync goes badly, e.g., in a race with another update
- if another request is already running Sync on the same collection UUID, just skip it this time
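A sketch of that flow, reusing the RepackOptions sketch above; all names here are assumptions, not the actual keep-web implementation:

```go
package sketch

import (
	"context"
	"sync"
)

// syncer stands in for the collection filesystem: Repack remaps
// segments locally, and Sync pushes any remapped segments to the API
// server via replace_segments.
type syncer interface {
	Repack(ctx context.Context, opts RepackOptions) (int, error)
	Sync() error
}

// repacking tracks collection UUIDs with a repack/Sync already in
// flight, so concurrent PUTs skip rather than duplicate the work.
var repacking sync.Map

// afterPut sketches the post-upload steps: by the time it runs, the
// uploaded file has already been committed via replace_files, so a
// failed or skipped repack never loses data.
func afterPut(ctx context.Context, uuid string, fs syncer) {
	if _, busy := repacking.LoadOrStore(uuid, struct{}{}); busy {
		return // another request is already repacking this collection
	}
	defer repacking.Delete(uuid)
	n, err := fs.Repack(ctx, RepackOptions{CachedOnly: true})
	if err != nil || n == 0 {
		return // nothing repacked (or repack failed, which is non-fatal)
	}
	fs.Sync() // applies the remapping via replace_segments
}
```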