Feature #17749

[Keep] avoid AWS S3 request limits -- add option to use more prefixes on S3

Added by Ward Vandewege 4 months ago. Updated 2 days ago.

Status:
New
Priority:
Normal
Assigned To:
Category:
-
Target version:
Start date:
Due date:
% Done:

0%

Estimated time:
(Total: 0.00 h)
Story points:
2.0

Description

AWS has a hard request limit of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in an Amazon S3 bucket, cf. https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/.

Prefixes are defined as follows (cf. https://aws.amazon.com/premiumsupport/knowledge-center/s3-prefix-nested-folders-difference/):

  A prefix is the complete path in front of the object name, which includes the bucket name. For example,
  if an object (123.txt) is stored as BucketName/Project/WordFiles/123.txt, the prefix is
  "BucketName/Project/WordFiles/". If the 123.txt file is saved in a bucket without a specified path, the
  prefix value is "BucketName/".

Keep currently does not store its blocks in subdirectories in the S3 buckets it uses. That means the prefix value for all blocks in a particular bucket is "BucketName/", and is subject to the request limits per bucket.

At some point, we may run into the request limits, particularly in a situation where one S3 bucket is shared among many keepstores, e.g. after #16516 is implemented.

The fix would be to use more prefixes in each S3 bucket, perhaps adopting the same pattern keepstore uses when backed by POSIX filesystems.

There is another reason to do this: buckets with a very large number of blocks become slow to work with in certain (external) tools like aws sync, because getting a list of all those objects on S3-compatible storage can be slow. We should have an option to make Keep on S3 use a structure like the one we use on POSIX disks. Add a config option, default off.

  • The config option could specify where the slashes go: a prefix length, defaulting to zero. A prefix length of three is recommended if you want to enable this feature on S3.
  • The migration path for an existing S3 bucket with data is out of scope (migration could be handled with a script). We could do that in a future story.
  • Will need to update both S3 drivers
  • Same logic would apply to the trash folder in this scenario
  • Will need some new tests
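As a rough illustration of the layout change, here is a minimal sketch in Go. The function name (s3KeyForBlock) and the prefix-length parameter are hypothetical, standing in for whatever config option is eventually chosen; the idea is simply that a prefix length of N turns the flat key "hash" into "hash[:N]/hash", mirroring keepstore's POSIX directory layout.

```go
package main

import "fmt"

// s3KeyForBlock maps a block hash to an S3 object key. prefixLength is a
// hypothetical config value: 0 keeps the current flat layout; 3 (the
// recommended value) puts each block under a 3-character prefix, so requests
// spread across up to 16^3 = 4096 prefixes per bucket.
func s3KeyForBlock(hash string, prefixLength int) string {
	if prefixLength <= 0 || prefixLength >= len(hash) {
		return hash // flat layout (current behavior)
	}
	return hash[:prefixLength] + "/" + hash
}

func main() {
	hash := "0123456789abcdef0123456789abcdef" // example 32-char block hash
	fmt.Println(s3KeyForBlock(hash, 0))        // flat: 0123456789abcdef...
	fmt.Println(s3KeyForBlock(hash, 3))        // prefixed: 012/0123456789abcdef...
}
```

The same mapping would presumably apply under the trash prefix as well, per the bullet above.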

Subtasks

Task #18153: Review (New, Ward Vandewege)


Related issues

Related to Arvados Epics - Story #16516: Run Keepstore on local compute nodes (New, 10/01/2021 to 11/30/2021)

History

#1 Updated by Ward Vandewege 4 months ago

  • Description updated (diff)

#2 Updated by Ward Vandewege 4 months ago

  • Description updated (diff)
  • Subject changed from [Keep] investigate AWS S3 request limits to [Keep] avoid AWS S3 request limits

#3 Updated by Ward Vandewege 4 months ago

  • Related to Story #16516: Run Keepstore on local compute nodes added

#4 Updated by Ward Vandewege 4 months ago

  • Description updated (diff)

#5 Updated by Ward Vandewege 3 months ago

  • Description updated (diff)

#6 Updated by Peter Amstutz 2 months ago

  • Target version deleted (To Be Groomed)

#7 Updated by Ward Vandewege about 2 months ago

  • Description updated (diff)

#8 Updated by Ward Vandewege about 2 months ago

  • Subject changed from [Keep] avoid AWS S3 request limits to [Keep] avoid AWS S3 request limits -- add option to use more prefixes on S3

#9 Updated by Ward Vandewege about 2 months ago

  • Story points set to 2.0
  • Description updated (diff)

#10 Updated by Ward Vandewege about 2 months ago

  • Target version set to 2021-09-01 sprint

#11 Updated by Peter Amstutz about 1 month ago

  • Target version changed from 2021-09-01 sprint to 2021-09-15 sprint

#12 Updated by Peter Amstutz 16 days ago

  • Target version changed from 2021-09-15 sprint to 2021-09-29 sprint

#13 Updated by Tom Clegg 2 days ago

  • Assigned To set to Tom Clegg
