Collection API - Performance enhancements » History » Revision 5

Radhika Chippada, 05/11/2015 07:38 PM

Collection API - Performance enhancements

Problem description

Currently, we are experiencing severe performance issues when working with large collections in Arvados. Below are a few scenario descriptions.

1. Fetching a large collection

Fetching a collection with a large manifest text from the API server results in timeout errors. This is suspected to be either the root cause of, or a large contributor to, the other issues listed below. Several reported issues are side effects of this one: #4953, #4943, #5614, #5901, #5902

2. Collection#show in workbench

We often see timeout errors in Workbench when showing a collection page with a large manifest text. This is likely due mostly to the concern above about fetching large collections. #5902, #5908

3. Create a collection by combining

Creating a new collection by combining other collections, or several files from a collection, almost always fails when one or more of the collections involved contains a large manifest text. Related issues: #4943, #5614

Proposed solutions

The various operations dealing with these large manifest texts are almost certainly the cause of these performance issues. Sending and receiving the manifest text between the API server and its clients, and JSON-encoding and -decoding these large manifest texts, could all be contributing. Reducing the amount of data exchanged, and the number of times it is exchanged, can help greatly.

1. Fetching a large collection

  • Compress the data transferred (We recently enabled gzip compression between API and workbench)
  • Send the data in smaller chunks (?)
    • Could we implement some form of “paging” strategy for sending the manifest text from the API server to clients?
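A paging strategy along these lines could exploit the fact that each line of a manifest is a self-contained stream, so pages split on line boundaries remain independently parseable. A minimal sketch, assuming pages are served by stream count (the `manifest_pages` helper and its `page_size` parameter are hypothetical, not an existing Arvados API):

```python
def manifest_pages(manifest_text, page_size=100):
    """Yield successive pages of up to `page_size` manifest streams.

    Each line of a manifest is an independent stream, so splitting on
    newlines keeps every page parseable on its own; concatenating the
    pages reproduces the original manifest exactly.
    """
    streams = manifest_text.splitlines(keepends=True)
    for start in range(0, len(streams), page_size):
        yield "".join(streams[start:start + page_size])
```

A client could then request pages lazily instead of holding the entire manifest text in one response.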

2. Collection#show in workbench

Collection#show responses were profiled using rack-mini-profiler. With the development environment pointed at the qr1hi API server, the following observations were made (based on 20+ reloads of the page):

  • The most expensive operations (on average) were:
    • collections/_show_source_summary -- 30 sec
    • collections/show (API request to get the collection) -- 15 sec
      • Parsing the JSON response took 0.2 sec on average
    • collections/_show_files -- 15 sec
    • applications/_projects_tree_menu -- 3 to 4 sec
      • For this collection, 6 requests were made to /groups, each taking 0.2 to 0.5 sec
  • It was also observed that the same requests took an average of 120 seconds on May 08, probably when the server was much busier; cluster tuning is therefore also called for.

Performance profile snapshot ...

Workbench console log ...

  • Implement paging (?) in collection#show: fetch “pages” of the collection and display them as needed.
  • Avoid making multiple calls to the API server for the same data by caching or preloading data (See #5908)
  • Show less information in the collection page (such as not linking images that are going to 404)? (See #5908)
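The caching/preloading idea above can be sketched as a request-scoped memo: within a single page render, repeated lookups of the same object hit a local dict instead of going back to the API server. This is an illustrative sketch; `RequestCache` and `fetch_object` are hypothetical names, not existing Workbench code:

```python
class RequestCache:
    """Memoize object lookups for the lifetime of one page render.

    `fetch_object` stands in for a real API call; the point is that a
    uuid requested twice in one render costs only one round trip.
    """

    def __init__(self, fetch_object):
        self._fetch = fetch_object
        self._memo = {}   # uuid -> fetched object
        self.misses = 0   # number of actual API round trips

    def get(self, uuid):
        if uuid not in self._memo:
            self.misses += 1
            self._memo[uuid] = self._fetch(uuid)
        return self._memo[uuid]
```

Discarding the cache at the end of the request sidesteps invalidation concerns while still eliminating the duplicate calls #5908 describes.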

3. Create a collection by combining

  • Offer an API server method that accepts the selections array (and, optionally, owner_uuid and name) and creates the new collection on the backend. This can help as follows:
    • When combining entire collections: we can completely eliminate the need to fetch the manifest texts in Workbench. Workbench would no longer need to work through the combining logic, generate the manifest text for the new collection, JSON-encode or -decode it, or send it to the API server over the wire. Instead, the API server can perform all of these steps itself, create the new collection, and return the new collection's uuid to Workbench (which reduces the performance problem to the collection#show issue; yay).
    • When combining selected files from within a collection: here too, we can expect significant performance improvements by eliminating the need to generate the combined manifest text in Workbench and send it over the wire.
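For the whole-collection case, the proposed server-side method could reduce to concatenating the stored manifests, since each manifest line is an independent stream. A hypothetical sketch under those assumptions (`combine_collections` and `load_manifest` are stand-ins for the proposed API method and a server-side database lookup; a real implementation would also need to disambiguate colliding stream names):

```python
def combine_collections(selections, load_manifest):
    """Build a combined manifest for the selected collection UUIDs.

    The client sends only the `selections` array of uuids; the server
    looks up each manifest locally, so the large manifest text never
    crosses the wire. Note: this sketch concatenates streams verbatim
    and does not rename duplicate stream paths.
    """
    parts = []
    for uuid in selections:
        manifest = load_manifest(uuid)
        # Keep every stream on its own line in the combined result.
        if manifest and not manifest.endswith("\n"):
            manifest += "\n"
        parts.append(manifest)
    return "".join(parts)
```

The server would then save the result as a new collection and return only its uuid to Workbench.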

4. Implement caching using a framework such as Memcache

  • One of the issues listed above (#5901) is about accessing a collection from multiple threads in parallel. Also, #5908 highlights several API requests being repeated within a single page display. In fact, we have this issue in several areas of the Workbench implementation.
  • By implementing caching, we can reduce the need to make round-trip API requests to fetch these objects; instead, we can fetch them from a shared cache.
  • Question: Not sure how caching would work if / when we cache these huge collections.
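The shared-cache pattern could look like the following get/set-with-TTL sketch. It uses an in-process dict so the example is self-contained; a real deployment would use a memcached client with the same semantics. (On the question above: memcached's default maximum item size is 1 MB, so huge manifest texts would likely need to be chunked or compressed before caching.)

```python
import time

class TTLCache:
    """Minimal memcached-style cache: get/set with expiry.

    In-process stand-in for a shared cache; real code would call a
    memcached client instead of this dict.
    """

    def __init__(self, ttl_seconds=300):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._store[key]  # expired: treat as a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self._ttl, value)

def cached_fetch(cache, uuid, fetch):
    """Fetch via the cache, falling back to the API only on a miss."""
    value = cache.get(uuid)
    if value is None:
        value = fetch(uuid)
        cache.set(uuid, value)
    return value
```

Repeated requests for the same object within the TTL window would then cost no API round trips at all.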
