Bug #7235

Updated by Brett Smith over 8 years ago

h2. Background 

The Python Keep client sets a 300-second timeout to complete all requests. There are some real-world scenarios where this is too strict. For example, purely hypothetically, an Arvados developer might be working across an ocean, tethered through a cellular network. Everything will complete just fine, but whole 64 MiB blocks might not be able to finish transferring in five minutes.
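For scale, moving a full 64 MiB block within the 300-second timeout requires a sustained rate of roughly 219 KiB/s (about 1.8 Mbit/s), which a congested cellular link may not reach. A throwaway calculation (not Arvados code):

```python
BLOCK_BYTES = 64 * 1024 * 1024   # one full Keep block
TIMEOUT_SECONDS = 300            # current Python Keep client completion timeout

# Minimum sustained throughput needed to finish one block in time.
min_bytes_per_sec = BLOCK_BYTES / TIMEOUT_SECONDS
print(round(min_bytes_per_sec))                # ~223696 bytes/s
print(round(min_bytes_per_sec / 1024, 1))      # ~218.5 KiB/s
print(round(min_bytes_per_sec * 8 / 1e6, 2))   # ~1.79 Mbit/s
```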

The functional requirement is that a user with a slow but stable connection can successfully interact with a Keep proxy. (I am willing to let timeouts continue to serve as a performance sanity check for the not-proxy case, on the expectation that one admin has sufficient control over the entire stack there.)

h2. Implementation thoughts

* It's not clear to me that we should be setting a completion timeout at all. We already send TCP keepalives every 75 seconds. With those in place, I wonder if we should let the stack below us decide if the connection has died.
* Now that we're using curl, I wonder if we should consider some of "its other connection options":http://curl.haxx.se/libcurl/c/curl_easy_setopt.html? CURLOPT_ACCEPTTIMEOUT_MS? CURLOPT_LOW_SPEED_*?
* I would maybe accept an implementation that exposes timeout configuration to users, but note that it must be exposed across the entire stack: not just arv-get and arv-put, but also higher-level tools like arv-keepdocker and arv-copy.

When the Python Keep client connects to non-disk services, instead of setting TIMEOUT_MS, set LOW_SPEED_LIMIT and LOW_SPEED_TIME to ensure a minimum transfer rate. The exact transfer rate TBD by Tom. Refer to "libcurl connection options":http://curl.haxx.se/libcurl/c/curl_easy_setopt.html for details.
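To illustrate the CURLOPT_LOW_SPEED_LIMIT / CURLOPT_LOW_SPEED_TIME semantics discussed above — abort only when throughput stays below a floor for a sustained window, rather than capping total transfer time — here is a minimal sketch in plain Python. The class name and the thresholds (32 KiB/s over 60 s) are hypothetical illustrations, not values chosen for Arvados:

```python
import time

class LowSpeedWatchdog:
    """Mimics curl's LOW_SPEED_LIMIT/LOW_SPEED_TIME abort rule:
    give up only after throughput stays below a floor for a full window."""

    def __init__(self, low_speed_limit=32 * 1024, low_speed_time=60):
        self.low_speed_limit = low_speed_limit  # bytes/sec floor
        self.low_speed_time = low_speed_time    # seconds below floor before abort
        self.below_since = None                 # when throughput first dipped

    def should_abort(self, bytes_per_sec, now=None):
        """Return True if the transfer has been too slow for too long."""
        now = time.monotonic() if now is None else now
        if bytes_per_sec >= self.low_speed_limit:
            self.below_since = None             # back above the floor: reset
            return False
        if self.below_since is None:
            self.below_since = now              # start the slow-transfer clock
        return (now - self.below_since) >= self.low_speed_time

w = LowSpeedWatchdog()
print(w.should_abort(100 * 1024, now=0.0))   # healthy transfer -> False
print(w.should_abort(1024, now=10.0))        # slow, but only just -> False
print(w.should_abort(1024, now=75.0))        # slow for over 60s -> True
```

Unlike a flat completion timeout, this never penalizes a slow-but-steady link that stays above the floor, which is exactly the proxy scenario in the background section.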
