Collection API - Performance enhancements

Problem description

Currently, we are experiencing severe performance issues when working with large collections in Arvados. Below are a few scenario descriptions.

1. Fetching a large collection

Fetching a collection with a large manifest text from the API server results in timeout errors. This is suspected to be either the root cause of, or a large contributor to, the other issues listed below. Several reported issues appear to be side effects of this one: #4953, #4943, #5614, #5901, #5902

2. Collection#show in workbench

We often see timeout errors in workbench when showing a collection page with a large manifest text. This is likely due mostly to the concern described above about fetching large collections. #5902, #5908

3. Create a collection by combining

Creating a new collection by combining other collections, or several files from a collection, almost always fails when one or more of the involved collections contains a large manifest text. A few issues about this: #4943, #5614

Proposed solutions

Various operations on these large manifest texts are almost certainly the cause of these performance issues. Sending and receiving the manifest text between the API server and its clients, and JSON-encoding and -decoding it, all contribute. Reducing the amount of data exchanged, and the number of times it is exchanged, can help greatly.

1. Fetching a large collection

  • Compress the data transferred (We recently enabled gzip compression between API and workbench)
  • Send the data in smaller chunks (?)
    • Is it possible to implement some form of “paging” strategy when sending the manifest text from the API server to clients? A rough sketch follows this list.
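
A minimal sketch of what such paging might look like on the API server. The manifest_offset / manifest_limit parameters and the manifest_lines_total field are hypothetical, not part of the current API; slicing on stream-line boundaries keeps each page a syntactically valid manifest fragment. Permission checks are omitted.

  # Hypothetical paging of manifest_text in the collections controller.
  # Parameter names are illustrative only.
  def show
    collection = Collection.find_by_uuid(params[:uuid])
    lines  = collection.manifest_text.each_line.to_a
    offset = params.fetch(:manifest_offset, 0).to_i
    limit  = (params[:manifest_limit] || lines.size).to_i
    render json: {
      uuid: collection.uuid,
      manifest_text: lines[offset, limit].to_a.join,
      manifest_lines_total: lines.size  # lets the client know when it has every page
    }
  end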

2. Collection#show in workbench

Observations

Collection#show responses were profiled using rack-mini-profiler. With the development environment pointed at the qr1hi API server, the following observations were made (based on 20+ reloads of the page):

  • The most expensive operations (on average) are:
    • collections/_show_source_summary -- 30 seconds
    • collections/show (api request to get collection) -- 15 sec
      • It took 0.2 sec on average to parse the JSON response
    • collections/_show_files -- 15 sec
    • applications/_projects_tree_menu -- 3 to 4 sec
      • For this collection, 6 requests were made to /groups, each taking 0.2 to 0.5 sec
  • It was also observed that these requests averaged 120 seconds on May 08, probably when the server was much busier; this suggests that cluster tuning is also called for.
  • Workbench console log ...

Proposed enhancements

  • API: Add files_count and files_size to the collection data model
    • Rather than computing them for each page display, we should consider adding these fields to the data model and updating them whenever manifest_text changes (see the sketch after this list)
  • Implement paging / scrolling in the collection#show page (?). Get “pages” of the collection and display them as needed.
    • This would address the next two big-ticket items (the time taken to get the collection JSON from the API and to render _show_files)
    • This may also be unavoidable for collections even larger than the one used in this profiling exercise
  • Avoid making multiple calls to the API server for the same data by caching or preloading data (See #5908)
    • Clicking on the Advanced tab resulted in one more call to the API server to get the collection (which, as seen above, takes an average of 15 seconds or more)
    • Cache the collection and other objects in workbench and avoid making unnecessary calls to the API server (while in the same page context)
  • Show less information in the collection page (such as not linking images that are going to 404)? (See #5908)
  • Add methods in the API server (?) to get the my_projects and shared_project trees in one call, eliminating the roughly 3-second lag on each page display
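
A minimal sketch of the files_count / files_size idea, assuming the usual Arvados manifest layout in which each stream line is "<stream name> <block locators...> <pos:size:filename>...". The column names and callback wiring are illustrative, not a final schema.

  require 'set'

  class Collection < ActiveRecord::Base
    before_save :update_file_stats, if: :manifest_text_changed?

    private

    # Recompute files_count / files_size whenever manifest_text changes, so
    # collection#show never has to parse the manifest just to display totals.
    def update_file_stats
      names = Set.new
      size = 0
      (manifest_text || '').each_line do |line|
        tokens = line.split(' ')
        stream = tokens.first
        tokens.drop(1).each do |tok|
          # File tokens look like "position:length:name"; block locators
          # contain no colons, so they are skipped by this match.
          next unless tok =~ /\A\d+:(\d+):(\S+)/
          size += $1.to_i
          names << "#{stream}/#{$2}"   # a file may span several tokens
        end
      end
      self.files_count = names.size
      self.files_size  = size
    end
  end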

3. Create a collection by combining

Observations

Creating a new collection by combining was profiled using qr1hi-4zz18-ms5x87xf1389ldv, qr1hi-4zz18-0q225z4ktr432mg, and qr1hi-4zz18-i5o4ba4mmxub69b from the project qr1hi-j7d0g-3d06b1jtiwrizqm (#4943)

  • It took about 110 seconds to generate the combined manifest text, save the new collection via an API server request, and get the API server's response to the save
    • The server sent back the new collection, including manifest_text, after the save
  • It took an additional 70 seconds to "show" the new collection
    • Workbench made yet another GET /collections/<uuid> request (about 18 seconds), even though the server had just sent the collection after saving
    • All the other delays listed in the Collection#show section above also contribute to this lag
  • Workbench log ...

Proposed enhancements

  • Offer an API server method that accepts the selections array (and optionally owner_uuid and name) and creates the new collection entirely in the backend (a sketch follows this list). Doing so can help as follows:
    • When combining entire collections: workbench no longer needs to fetch the manifest texts of the source collections, work through the combining logic, generate the manifest text for the new collection, JSON-decode and re-encode it, or send the combined manifest text to the API server over the wire. Instead, the API server performs all of these steps itself, creates the new collection, and returns the new collection's UUID to workbench (which reduces this performance issue down to the collection#show issue; yay)
    • When combining selected files from within a collection: here too we would see significant improvements by eliminating the need to generate the combined manifest text in workbench and send it over the wire.
  • Avoid at least one of (1) the API server sending the new collection's manifest_text back to workbench after the save, and (2) workbench retrieving it again from the API server during show. Doing both is wasteful and, depending on the size of the new collection, adds tens of seconds of delay.
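
A rough sketch of what such an endpoint could look like. The route, parameter handling, and the extract_file_streams helper are hypothetical, and permission checks and error handling are omitted; this only illustrates keeping the manifest work on the API server side.

  class Arvados::V1::CollectionsController < ApplicationController
    # Hypothetical POST /arvados/v1/collections/combine
    # params: selections (array of collection UUIDs, or "uuid/path" entries
    #         for single files), plus optional owner_uuid and name.
    def combine
      combined = ''
      params[:selections].each do |sel|
        uuid, path = sel.split('/', 2)
        src = Collection.find_by_uuid(uuid)          # permission checks omitted
        combined <<
          if path
            extract_file_streams(src.manifest_text, path)  # hypothetical helper
          else
            src.manifest_text
          end
      end
      new_coll = Collection.create!(owner_uuid: params[:owner_uuid],
                                    name: params[:name],
                                    manifest_text: combined)
      # Return only the new UUID; the (possibly huge) manifest_text is never
      # shipped back to workbench just to be discarded.
      render json: { uuid: new_coll.uuid }
    end
  end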

4. Implement caching using a framework such as Memcache

  • One of the issues listed above (#5901) is about accessing a collection from multiple threads in parallel. Also, #5908 highlights several API requests being repeated within a single page display. In fact, we have this issue in several areas of the workbench implementation.
  • By implementing caching, we can reduce the need to make round-trip API requests to fetch these objects, and instead fetch them from the shared cache (a sketch follows this list).
  • Open question: it is not clear how well caching will work if / when we cache these huge collections.
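
A minimal sketch of object caching in workbench, assuming a memcached-backed Rails cache (config.cache_store = :mem_cache_store). The helper name and the 5-minute expiry are illustrative.

  module CollectionCache
    # Fetch a collection through the shared cache; only the first request
    # (per expiry window) pays for the API server round trip.
    def cached_collection(uuid)
      Rails.cache.fetch("collection/#{uuid}", expires_in: 5.minutes) do
        Collection.find(uuid)   # cache miss: one API request, then reuse
      end
    end
  end

Note that memcached's default maximum item size is 1 MB, so very large manifest texts may not fit in the cache at all; caching only the attributes a page actually needs (or paged manifest chunks, as proposed above) is one way to address the open question above.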
