Idea #21078

Updated by Peter Amstutz about 1 year ago

I need to delete 100s of TB of data from S3. 

It seems we can submit delete requests at a pretty high rate, but "trash" operations are a bottleneck.

I currently have trash operations set to 40 concurrent operations, and it reports running about 60 keep operations per second.

 In 3 hours it is able to put somewhere between 20,000 and 90,000 blocks in the trash. 

At the current rate, it is deleting somewhere between 1 TiB and 5 TiB of data on each 3-hour EmptyTrash cycle.
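The block counts and the TiB estimate above are consistent if we assume blocks near Keep's 64 MiB maximum block size (an assumption; real blocks may be smaller, so these are upper bounds):

```python
# Rough throughput check, assuming Keep's 64 MiB maximum block size.
BLOCK_MIB = 64

for blocks in (20_000, 90_000):
    tib = blocks * BLOCK_MIB / (1024 * 1024)  # MiB -> TiB
    print(f"{blocks:>6} blocks ~= {tib:.1f} TiB per 3-hour cycle")
```

That works out to roughly 1.2 to 5.5 TiB per cycle, matching the observed 1 to 5 TiB range.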

I think the concurrency rates are actually a little too high: the log is showing 503 errors, and since I dialed up the concurrency, it hasn't been able to return a full object index to keep-web, presumably because the list objects requests are also failing with 503 errors.
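S3 returns 503 (SlowDown) when a client exceeds the per-prefix request rate, and the standard mitigation is to retry with exponential backoff and jitter rather than to fail the operation. Below is a minimal, hypothetical sketch of that pattern; `SlowDown` and `flaky_delete` are stand-ins for illustration, not Arvados or AWS SDK code:

```python
import random
import time


class SlowDown(Exception):
    """Stand-in for an S3 503 SlowDown response."""


def with_backoff(op, max_retries=5, base_delay=0.01):
    """Retry op() with full-jitter exponential backoff on SlowDown."""
    for attempt in range(max_retries):
        try:
            return op()
        except SlowDown:
            if attempt == max_retries - 1:
                raise
            # Sleep a random amount in [0, base_delay * 2^attempt].
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))


# Simulated delete that returns 503 twice before succeeding,
# mimicking throttling under high concurrency.
attempts = {"n": 0}


def flaky_delete():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SlowDown()
    return "deleted"


print(with_backoff(flaky_delete))  # succeeds after two simulated 503s
```

With backoff in place, lowering the concurrency setting until 503s disappear is usually more effective than retrying harder, since throttled requests still count against the rate limit.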
