[API] Improve performance of large requests in parallel
|Target version:|Arvados Future Sprints|
|Velocity based estimate|-|
Attached are two files. The first is a simple Python script that uses threads to fetch the same collection object from the API server multiple times simultaneously. Currently, the collection's manifest is 75492690 bytes. The collection UUID is su92l-4zz18-wd2va9q9lnfx6ga
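The attached multi.py isn't reproduced here, but the approach can be sketched roughly as follows. This is a minimal, hypothetical sketch, not the attached script: `fetch_collection` is a placeholder for the real API call (something like `arvados.api().collections().get(...).execute()`), and the timing harness is illustrative.

```python
import sys
import time
import concurrent.futures

def timed_parallel_fetch(fetch, n_threads):
    """Run `fetch` once per thread, all simultaneously, and return elapsed wall time."""
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(fetch) for _ in range(n_threads)]
        for f in futures:
            f.result()  # re-raises any timeout exception from a worker thread
    return time.time() - start

def fetch_collection():
    # Stand-in for the real request, e.g.:
    #   arvados.api().collections().get(uuid='su92l-4zz18-wd2va9q9lnfx6ga').execute()
    time.sleep(0.01)  # placeholder work so the sketch runs without an API server

if __name__ == '__main__':
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    print('%d threads: %.2fs elapsed' % (n, timed_parallel_fetch(fetch_collection, n)))
```

Run the same way as the attached script, e.g. `python multi.py 8`.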
The log file was generated by running:
for n in 2 4 6 8; do python multi.py "$n" || break; done | tee multi.log
In short, it shows that performance takes a noticeable dive as the number of simultaneous requests increases. The eight-thread calls never succeed; instead they raise a timeout exception. This problem has already bitten a real user: when parallelizing over many files in this collection, the first batch of parallel tasks all failed because they tried to fetch the collection simultaneously and timed out waiting for an API server response. We need to improve performance here so that this usage pattern doesn't fail.
#3 Updated by Tom Morris about 1 month ago
This has improved by over an order of magnitude(!) since 2015, which is great, but 20 seconds to fetch 75 MB still seems like an awful lot of time, and a 3-4x stretch factor under an 8x load, when the data should already be cached, also seems out of line.
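For scale, some back-of-the-envelope arithmetic using only the numbers quoted above (the 20-second figure is the rough single-fetch time, not a measured benchmark):

```python
manifest_bytes = 75492690   # manifest size quoted in the original report
elapsed_s = 20.0            # approximate 2017 fetch time quoted above

# Effective throughput for a single fetch of the cached manifest.
mb_per_s = manifest_bytes / 1e6 / elapsed_s
print('%.2f MB/s' % mb_per_s)  # roughly 3.77 MB/s
```

Under 4 MB/s for data the API server should be serving from cache supports the point that there is still substantial headroom here.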
|Threads|Elapsed (2015)|Elapsed (2017)|