[API] container_requests#update alternately responds 422 or 404 for no apparent reason
|Assignee:|Nico César|
|% Done:||
|Target version:|Arvados Future Sprints|
|Story points:|-|
|Remaining (hours):|0.00 hour|
|Velocity based estimate:|-|
This seems like a race condition -- perhaps related to permission lookups, or to other container request updates happening at roughly the same time?
#8 Updated by Nico César about 1 month ago
Update the clusters with Carlos in the coming weeks and, after all updates, review the status with him. There is a pipeline of several incoming changes to their clusters, as follows:
#10111: Provenance graph for container requests - merged, but the latest version isn't running in our clusters
#10645: Better display of command / inputs for container requests - merged, but not running in our clusters
RT 321 / #11469: Use temp space outside container - merged, but not running in our clusters
#11626: Propagate slurm errors so they are visible in workbench - in review
RT 355: Efficient loading & rendering of many container requests - in progress
#10112: Improve display of workflow - in progress
-- Also: slurm integration improvements for requesting the correct amount of memory and disk.