Feature #17749
[Keep] avoid AWS S3 request limits -- add option to use more prefixes on S3
Status: Closed
Description
AWS has a hard request limit of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in an Amazon S3 bucket, cf. https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/.
Prefixes are defined as follows (cf. https://aws.amazon.com/premiumsupport/knowledge-center/s3-prefix-nested-folders-difference/):
A prefix is the complete path in front of the object name, which includes the bucket name. For example, if an object (123.txt) is stored as BucketName/Project/WordFiles/123.txt, the prefix is "BucketName/Project/WordFiles/". If the 123.txt file is saved in a bucket without a specified path, the prefix value is "BucketName/".
Keep currently does not store its blocks in subdirectories in the S3 buckets it uses. That means the prefix value for all blocks in a particular bucket is "BucketName/", so every request against the bucket counts toward the same per-prefix limits.
At some point, we may run into the request limits, particularly in a situation where one S3 bucket is shared among many keepstores, e.g. after #16516 is implemented.
The fix would be to use more prefixes in each S3 bucket, perhaps adopting the same pattern keepstore uses when backed by POSIX filesystems.
There is another reason to do this: buckets with a very large number of blocks become slow to work with in certain external tools such as aws sync, because listing all of those objects on S3-compatible storage can be slow. We should have an option to make Keep on S3 use a directory structure like the one we use on POSIX disks. Add a config option, default off.
- The config option could specify where you want the slashes: a prefix length, defaulting to zero (feature disabled); three is recommended if you want to enable this feature on S3 (see the sketch after this list).
- The migration path for an existing S3 bucket with data is out of scope (migration could be handled with a script). We could do that in a future story.
- Will need to update both S3 drivers
- Same logic would apply to the trash folder in this scenario
- Will need some new tests
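To make the proposal concrete, here is a minimal sketch of the key layout in Go, mirroring the hash-prefix directories keepstore uses on POSIX disks. The names (s3Key, trashKey, prefixLength) are illustrative assumptions for this ticket, not the actual driver code:

```go
// Sketch of the proposed S3 key layout: group blocks under the first
// few hex digits of their MD5 hash, like keepstore's POSIX layout.
// Names are illustrative, not the final driver API.
package main

import "fmt"

// s3Key returns the object key for a block hash. With prefixLength == 0
// (the proposed default) the bucket stays flat, preserving current behavior.
func s3Key(hash string, prefixLength int) string {
	if prefixLength == 0 {
		return hash
	}
	return hash[:prefixLength] + "/" + hash
}

// trashKey applies the same logic under the trash prefix, per the note
// above that trash handling must follow the same layout.
func trashKey(hash string, prefixLength int) string {
	return "trash/" + s3Key(hash, prefixLength)
}

func main() {
	hash := "acbd18db4cc2f85cedef654fccc4a4d8" // md5("foo")
	fmt.Println(s3Key(hash, 3))    // acb/acbd18db4cc2f85cedef654fccc4a4d8
	fmt.Println(trashKey(hash, 3)) // trash/acb/acbd18db4cc2f85cedef654fccc4a4d8
}
```

With a prefix length of 3, blocks spread across 16^3 = 4096 hex prefixes, so the per-prefix request limits apply to a small slice of the traffic instead of all of it.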
Updated by Ward Vandewege over 3 years ago
- Description updated (diff)
- Subject changed from [Keep] investigate AWS S3 request limits to [Keep] avoid AWS S3 request limits
Updated by Ward Vandewege over 3 years ago
- Related to Idea #16516: Run Keepstore on local compute nodes added
Updated by Peter Amstutz over 3 years ago
- Target version deleted (To Be Groomed)
Updated by Ward Vandewege over 3 years ago
- Subject changed from [Keep] avoid AWS S3 request limits to [Keep] avoid AWS S3 request limits -- add option to use more prefixes on S3
Updated by Ward Vandewege over 3 years ago
- Story points set to 2.0
- Description updated (diff)
Updated by Ward Vandewege over 3 years ago
- Target version set to 2021-09-01 sprint
Updated by Peter Amstutz over 3 years ago
- Target version changed from 2021-09-01 sprint to 2021-09-15 sprint
Updated by Peter Amstutz over 3 years ago
- Target version changed from 2021-09-15 sprint to 2021-09-29 sprint
Updated by Tom Clegg over 3 years ago
- Status changed from New to In Progress
- Category set to Keep
I'm not sure I've made the docs clear enough, particularly the part about when/why not to change PrefixLength.
17749-s3-prefixes @ adccfe35ccc68a865a2fd2356ca2b81e0366a4b4 -- developer-run-tests: #2699
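As a hedged illustration of the "when/why not to change PrefixLength" caveat, reusing the illustrative s3Key sketch from the description (not the actual driver code): changing the prefix length on a bucket that already holds data changes the key at which each block is expected, so existing blocks would no longer be found without the migration step that the description leaves out of scope.

```go
// Hypothetical demonstration of why PrefixLength should not be changed
// on a bucket that already contains blocks.
package main

import "fmt"

func s3Key(hash string, prefixLength int) string {
	if prefixLength == 0 {
		return hash
	}
	return hash[:prefixLength] + "/" + hash
}

func main() {
	hash := "acbd18db4cc2f85cedef654fccc4a4d8"
	// Key the block was written under before enabling prefixes:
	fmt.Println(s3Key(hash, 0)) // acbd18db4cc2f85cedef654fccc4a4d8
	// Key the driver would look up after switching to PrefixLength: 3:
	fmt.Println(s3Key(hash, 3)) // acb/acbd18db4cc2f85cedef654fccc4a4d8
	// The keys differ, so previously written blocks would no longer be
	// found at the expected key until migrated (e.g. by a renaming script).
}
```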
Updated by Ward Vandewege about 3 years ago
Tom Clegg wrote:
I'm not sure I've made the docs clear enough, particularly the part about when/why not to change PrefixLength.
17749-s3-prefixes @ adccfe35ccc68a865a2fd2356ca2b81e0366a4b4 -- developer-run-tests: #2699
LGTM, thanks!
Updated by Tom Clegg about 3 years ago
- Status changed from In Progress to Resolved
Applied in changeset arvados-private:commit:arvados|5dbf72803717f58b4848b6a6490375450916e84d.