Feature #18961 (Closed)
Go FileSystem / FUSE mount supports block prefetch
Added by Peter Amstutz over 2 years ago. Updated 8 months ago.
Description
Use the following strategy for prefetch:
When a read happens on a file, look at the next N blocks that make up the manifest stream and issue prefetch requests for those blocks. These blocks get loaded into the cache so they are ready when they are needed.
By looking ahead in the stream rather than just the file, this also works for manifests containing small files which are stored as 1 block per file.
There should be a config knob to control how much data or blocks are prefetched so that sites can experiment with optimal throughput.
This implies the cache behavior needs to support prefetch: pre-fetched blocks should not push out actively used blocks, but should be able to push out less recently used blocks. Plain LRU behavior, where a block is promoted each time it is accessed, may be sufficient, but metrics will be helpful.
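A minimal Go sketch of the lookahead (with hypothetical Block and Cache types, not the real Arvados SDK API) might look like this:

```go
package main

import "fmt"

// Block is one segment of a manifest stream (hypothetical type).
type Block struct {
	Locator string
	Size    int
}

// Cache stands in for the mount's block cache (hypothetical interface).
// Prefetch is assumed to load a block asynchronously without evicting
// actively used blocks.
type Cache interface {
	Has(locator string) bool
	Prefetch(locator string)
}

// prefetchAhead issues prefetch requests for the blocks that follow
// index i in the stream, up to maxBytes (the proposed config knob).
// Because it walks the stream rather than a single file, it also covers
// manifests that store small files as one block per file.
func prefetchAhead(stream []Block, i, maxBytes int, cache Cache) {
	ahead := 0
	for j := i + 1; j < len(stream) && ahead < maxBytes; j++ {
		if !cache.Has(stream[j].Locator) {
			cache.Prefetch(stream[j].Locator)
		}
		ahead += stream[j].Size
	}
}

// fakeCache just records prefetch requests, for demonstration.
type fakeCache map[string]bool

func (c fakeCache) Has(loc string) bool { return c[loc] }
func (c fakeCache) Prefetch(loc string) { c[loc] = true; fmt.Println("prefetch", loc) }

func main() {
	stream := []Block{{"aaa", 1 << 26}, {"bbb", 1 << 26}, {"ccc", 1 << 26}}
	prefetchAhead(stream, 0, 1<<26, fakeCache{}) // a read of block 0 prefetches "bbb"
}
```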
Files
download-speed-1.8-GB-file.png (11.6 KB) | Tom Clegg, 02/20/2024 02:48 PM
download-speed-1.8-GB-file-2.png (29.7 KB) | Tom Clegg, 02/26/2024 02:54 PM
download-speed-1800-1MB-files.png (22.8 KB) | Tom Clegg, 02/26/2024 08:03 PM
download-speed-1.8-GB-file-4.png (24.8 KB) | Tom Clegg, 02/26/2024 09:26 PM
smol.png (20.1 KB) | Tom Clegg, 02/27/2024 02:55 AM
smol2.png (21.2 KB) | Tom Clegg, 02/27/2024 03:40 PM
smol3.png (22.1 KB) | Tom Clegg, 02/27/2024 04:21 PM
smol4.png (30.5 KB) | Tom Clegg, 02/27/2024 05:45 PM
Updated by Peter Amstutz over 2 years ago
- Related to Idea #17849: FUSE driver v2 added
Updated by Peter Amstutz over 2 years ago
- Target version changed from 2022-05-11 sprint to 2022-05-25 sprint
Updated by Peter Amstutz over 2 years ago
- Target version deleted (2022-05-25 sprint)
Updated by Peter Amstutz over 1 year ago
- Release deleted (60)
- Subject changed from Go FileSystem / FUSE mount supports block caching & prefetch to Go FileSystem / FUSE mount supports block prefetch
Updated by Peter Amstutz over 1 year ago
- Story points set to 2.0
- Target version changed from Future to To be scheduled
- Description updated (diff)
Updated by Peter Amstutz over 1 year ago
- Target version changed from To be scheduled to Development 2023-05-10 sprint
Updated by Peter Amstutz over 1 year ago
- Target version changed from Development 2023-05-10 sprint to Development 2023-05-24 sprint
Updated by Peter Amstutz over 1 year ago
- Target version changed from Development 2023-05-24 sprint to Development 2023-06-07
Updated by Peter Amstutz over 1 year ago
- Target version changed from Development 2023-06-07 to Development 2023-06-21 sprint
Updated by Peter Amstutz over 1 year ago
- Target version changed from Development 2023-06-21 sprint to To be scheduled
Updated by Peter Amstutz over 1 year ago
- Related to Idea #18342: Keep performance optimization added
Updated by Peter Amstutz about 1 year ago
- Target version changed from To be scheduled to Development 2023-10-25 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2023-10-25 sprint to Development 2023-11-08 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2023-11-08 sprint to Development 2023-11-29 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2023-11-29 sprint to Development 2024-01-03 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2024-01-03 sprint to Development 2024-01-17 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2024-01-17 sprint to Development 2024-01-31 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2024-01-31 sprint to Development 2024-02-14 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2024-02-14 sprint to Development 2024-01-31 sprint
Updated by Peter Amstutz about 1 year ago
- Target version changed from Development 2024-01-31 sprint to Development 2024-02-14 sprint
Updated by Peter Amstutz 10 months ago
- Target version changed from Development 2024-02-14 sprint to Development 2024-02-28 sprint
Updated by Peter Amstutz 10 months ago
- Related to Feature #20995: Prefetch small files when scanning a collection directory added
Updated by Tom Clegg 10 months ago
Results of a few trials using a simplistic implementation with the easiest sample data (one big file, optimal block packing) and a small cache (big enough to accommodate pre-fetched blocks, but not big enough to retain data from one trial to the next):
(misleading chart removed)
"prefetch 0.5" starts pre-fetching the next block when the client has read 50% of the current block.
"no stream" trials use the current main version of keepstore.
"stream" trials use the unmerged version of keepstore from #2960, 62168c2db5.
Updated by Peter Amstutz 10 months ago
Also, I'm a little confused about how to read this: if the lines in a box-and-whisker plot are supposed to be the minimum and maximum, how can there be lines that don't touch the box?
Updated by Tom Clegg 10 months ago
- File download-speed-1800-1MB-files.png added
Small file download performance (using sequential curl invocations), 18961-block-prefetch @ 1dcde0921d vs. main
"prefetch 1 easy" is b1bd2898c1, with the easy prefetch implementation that only prefetches the next block in the current file / doesn't try to predict which file will be read next.
Updated by Peter Amstutz 10 months ago
These charts make more sense than the old ones, but can you verify what the boxes/whiskers represent here? Are the whiskers the full range, and the box the 25th and 75th percentiles? Where are the mean and median?
Is the 1800x1MB test fetching 1800 blocks, or fewer blocks thanks to sequential packing?
What is the disk cache setting?
Updated by Peter Amstutz 10 months ago
Also, is this the average download speed per file, or the time to download all the files overall?
Tentatively, it looks like 1-block prefetch may be slightly slower than no prefetch, but also has less variance. However, I think it also depends on how many trials you did and exactly what this is measuring. I think we need to dig into the numbers here and make sure we understand exactly what is happening.
Updated by Tom Clegg 10 months ago
The boxes show Q1 and Q3, whiskers show min and max, mean is not shown.
Y axis is overall speed for a sequence of 1800 x 1 MB downloads (i.e., 1800 MB ÷ clock seconds).
The disk cache is about 1.2 GB, just small enough that the cache gets turned over from one trial to the next.
The manifest is well-packed (blocks are 64 MiB).
My conclusions:
- other variables (cloud weather / AWS S3's own caching?) are significant
- the "easy" version of prefetch might make small-file performance slightly worse than no prefetch
- the latest / full version of prefetch looks best (it's possible, but unlikely, that the other unrelated variables just happened to work in its favor)
I did another set of large file trials with the latest version. I expected it would be slightly worse if anything (the new code does more work per read), but instead it looked better and one trial did exceptionally well. More than anything else it hints that we'll need more samples / better strategy to get convincing numbers. If we're going to do that, it might make more sense to do it on a more powerful VM.
Updated by Peter Amstutz 10 months ago
I just wrote a bunch of comments and it ate them when I hit save...
What are the instance types of keep-web and the shell node where you are doing the downloading? We should probably have them both be something like m5n.large.
How many trials are you running? When you showed the data the other day, you had 5 data points. To increase confidence we should run like 20+ trials.
For the small files, a couple of thoughts:
- If I'm reading this right, it is running at about 1/2 to 1/3 of the single-file transfer rate, which makes me suspect it is being dominated by connection setup and/or TCP slow start. I'd be curious what the difference is if the same sequential download were done in a single process using a single HTTP session (see the sketch at the end of this comment).
- I would like to see a test where the manifest has 1 block per file. To me, the goal of small file prefetch is to improve performance for manifests that are not packed -- so we should have numbers about how it performs in that case.
I'm also curious if less cache (600 MB instead of 1200 MB) makes any difference. Presumably it shouldn't but I think it would be a useful number.
We should also run trials where there is enough cache (2+ GB) so that we have an idea of how it performs in the best case.
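For reference, the single-session comparison could be as simple as one shared Go http.Client, whose default keep-alive behavior reuses the connection across sequential downloads (the URL pattern below is a placeholder, not a real collection path):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// One shared client: with keep-alive (the default), sequential GETs
	// reuse the same TCP+TLS connection, so connection setup and TCP
	// slow start are paid only once instead of per file.
	client := &http.Client{Timeout: time.Minute}
	for i := 0; i < 1800; i++ {
		// Placeholder URL pattern; substitute the real keep-web paths.
		url := fmt.Sprintf("https://keep-web.example/c=xxxxx/file%04d.dat", i)
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		// Fully draining and closing the body lets the transport
		// return the connection to the pool for reuse.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
}
```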
Updated by Peter Amstutz 10 months ago
Tom Clegg wrote in #note-44:
using 16x concurrent curl processes (xargs -P 16)
16 concurrent curl processes could be stepping on each other. What if the client is reading each file in sequence?
What order is it reading the files in? Is that order favorable for prefetch, or counterproductive?
Can we do a trial where we shuffle the access order randomly?
Also, I'd still like to see a version of this that uses a single TCP session to see if that meaningfully minimizes overhead from connection setup and TCP slow start.
Updated by Tom Clegg 10 months ago
- latest version with "small file prefetch" disabled ("easyprefetch")
- latest version with "small file prefetch" limited to the 1st segment of the next file in the stream ("prefetch-1seg")
Evidently, the initial "optimize for small files" prefetch implementation (prefetch up to 64 MiB past the current read point) performs worse in this particular "small files" scenario.
Even prefetching 1 block seems to be slightly detrimental (prefetching 2 blocks was slightly worse than 1). But perhaps it's helpful with different network/backend performance characteristics?
Updated by Brett Smith 10 months ago
I'm all for measuring things, but aren't all these performance numbers necessarily affected by disk and network performance, which will vary across installs and applications? I'm a little wary of overoptimizing our general strategy based on the performance numbers from one specific setup.
Updated by Tom Clegg 10 months ago
For these trials I used xargs -n 16 to reduce client-side overhead. This improves the overall transfer time, but it still shows the "prefetch for small files" feature (even if limited to 1, 2, or 4 blocks after the current file) giving slightly worse performance than the simpler "prefetch for large files" feature.
My suspicion is that if the download requests don't arrive in exactly the same order they were stored in the manifest, the demand on keepweb<->keepstore<->s3 gets lumpier, and is therefore more likely to be affected by network/service limits.
In that case prefetch for small files might be productive only when there is more keepstore and s3/backend capacity.
Updated by Peter Amstutz 10 months ago
What is the algorithm difference between "streaming+prefetch" and "streaming+easyprefetch" ?
If prefetch seems to be a loser, maybe we shouldn't do it at all? Have we found any cases where it clearly beats simple streaming?
Updated by Tom Clegg 10 months ago
Peter Amstutz wrote in #note-51:
What is the algorithm difference between "streaming+prefetch" and "streaming+easyprefetch" ?
- streaming+easyprefetch prefetches the next blocks in the current file until it is 64 MiB ahead.
- streaming+prefetch prefetches the next blocks in the current file, then the next blocks in the lexically-next file(s) in the directory, until it is 64 MiB ahead.
- streaming+prefetch-Nseg prefetches the next blocks in the current file, then up to N blocks in the lexically-next file(s) in the directory, until it is 64 MiB ahead.
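A rough sketch of how the three variants differ, assuming a hypothetical flat list of manifest segments in stream order (none of these names come from the real code):

```go
package main

import "fmt"

// segment is one (file, block) entry of a manifest stream, in stream
// order (hypothetical type).
type segment struct {
	file    string
	locator string
	size    int
}

// prefetchPlan lists the locators each variant would prefetch after a
// read at segment i:
//
//	easy=true           -> easyprefetch: stay within the current file
//	easy=false, nseg<0  -> prefetch: continue into the following files
//	easy=false, nseg>=0 -> prefetch-Nseg: at most nseg segments beyond
//	                       the current file
//
// All variants stop once maxBytes (64 MiB in these trials) is queued.
func prefetchPlan(segs []segment, i, maxBytes int, easy bool, nseg int) []string {
	var plan []string
	queued, extra := 0, 0
	for j := i + 1; j < len(segs) && queued < maxBytes; j++ {
		if segs[j].file != segs[i].file {
			if easy {
				break // easyprefetch: never cross into the next file
			}
			extra++
			if nseg >= 0 && extra > nseg {
				break // prefetch-Nseg: segment budget exhausted
			}
		}
		plan = append(plan, segs[j].locator)
		queued += segs[j].size
	}
	return plan
}

func main() {
	segs := []segment{
		{"a.dat", "b1", 1 << 20}, {"b.dat", "b2", 1 << 20}, {"c.dat", "b3", 1 << 20},
	}
	fmt.Println(prefetchPlan(segs, 0, 64<<20, true, -1))  // easyprefetch: []
	fmt.Println(prefetchPlan(segs, 0, 64<<20, false, 1))  // prefetch-1seg: [b2]
	fmt.Println(prefetchPlan(segs, 0, 64<<20, false, -1)) // prefetch: [b2 b3]
}
```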
If prefetch seems to be a loser, maybe we shouldn't do it at all? Have we found any cases where it clearly beats simple streaming?
#note-42 suggests prefetch might increase the maximum download speed for large files.
Generally, prefetch helps when backend latency is high, but it seems like backend latency is not in fact high now that keepstore itself doesn't add a store-and-forward delay.
Perhaps a better feature is a configurable-size output buffer in keep-web, so (provided the backend throughput is faster than the client throughput, which is also the only situation in which prefetch can help) the client-side buffer drains while the backend is waiting for the next block. Besides using up some more memory, I don't think this would be worse than the current behavior in any situation.
This doesn't help small files at all, but neither does prefetch, so....
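A minimal sketch of that output-buffer idea (just the shape of it, not the eventual #21606 implementation): a goroutine keeps reading from the backend into a bounded queue while the client connection drains it.

```go
package main

import (
	"io"
	"os"
	"strings"
)

// bufferedCopy copies src to dst through a bounded queue of chunks, so
// the client-side write path can keep draining buffered data while the
// backend read path is waiting for the next block. bufChunks*chunkSize
// is the configurable buffer size.
func bufferedCopy(dst io.Writer, src io.Reader, bufChunks, chunkSize int) error {
	ch := make(chan []byte, bufChunks)
	errc := make(chan error, 1)
	go func() {
		defer close(ch)
		for {
			buf := make([]byte, chunkSize)
			n, err := src.Read(buf)
			if n > 0 {
				ch <- buf[:n]
			}
			if err != nil {
				if err != io.EOF {
					errc <- err
				}
				return
			}
		}
	}()
	// (A real implementation would also cancel the backend reader if
	// the client write fails, rather than abandoning the goroutine.)
	for chunk := range ch {
		if _, err := dst.Write(chunk); err != nil {
			return err
		}
	}
	select {
	case err := <-errc:
		return err
	default:
		return nil
	}
}

func main() {
	src := strings.NewReader("simulated backend data")
	if err := bufferedCopy(os.Stdout, src, 4, 8); err != nil {
		panic(err)
	}
}
```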
Updated by Peter Amstutz 10 months ago
- Target version changed from Development 2024-02-28 sprint to Development 2024-03-13 sprint
Updated by Tom Clegg 9 months ago
- Related to Feature #21606: configurable keep-web output buffer to reduce delay between blocks added