Bug #20447
Container table lock contention (closed)
Description
I need to look at postgres status to see what is going on, but I have a theory:
- We put a "big lock" around the containers table: all write operations have to take an exclusive lock on the whole table, unfortunately including container operations that don't affect priorities, though it may be possible to narrow that (#20240). The write pattern is sketched in the code block after this list.
- This means all container operations now have to wait to get the lock
- We also added a feature whereby each time a "running containers probe" happens, it updates the container's "cost" field on the API server (#19967)
- This means write operations on containers are now happening much much more frequently than just when containers change state
- As a result, requests involving containers are forced to wait in line, filling up the request queue and making everything slow.
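For illustration, here is a minimal sketch (in Go with database/sql, not the actual Rails API server code) of the write pattern described above. The DSN, function name, and dummy UUID are hypothetical, and the column names are only assumed to match the schema.

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // PostgreSQL driver (assumed available)
)

// updateContainerCost stands in for any container write path on the API
// server: before touching the row it takes the "big lock", so it serializes
// against every other container write on the cluster, including the frequent
// cost updates triggered by the running-container probe.
func updateContainerCost(db *sql.DB, uuid string, cost float64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// The "big lock": EXCLUSIVE conflicts with every other writer (and with
	// SELECT ... FOR UPDATE), so under load requests queue up right here.
	if _, err := tx.Exec(`LOCK TABLE containers IN EXCLUSIVE MODE`); err != nil {
		return err
	}
	if _, err := tx.Exec(`UPDATE containers SET cost = $1 WHERE uuid = $2`, cost, uuid); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	// Hypothetical DSN; point this at a test database, not a production cluster.
	db, err := sql.Open("postgres", "dbname=scratch sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	fmt.Println(updateContainerCost(db, "zzzzz-dz642-xxxxxxxxxxxxxxx", 0.12))
}
```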
On the plus side, the behavior of the dispatcher to back off when it sees 500 errors seems to be successfully keeping the system load from spiraling out of control.
This also suggests a short-term fix for system load: increase ProbeInterval.
Update:
Some supporting evidence:
- After Lucas adjusted ProbeInterval this morning, the concurrent requests are down.
- I was able to connect to the database and look at active queries. Even after the ProbeInterval change, roughly 30%-40% of pending queries are still "LOCK TABLE containers IN EXCLUSIVE MODE" (the query sketched below reproduces that count)
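A sketch of how to reproduce that measurement, assuming direct access to the API server's PostgreSQL database; the helper name and DSN are made up.

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // PostgreSQL driver (assumed available)
)

// countContainerLockWaiters counts backends whose current statement is the
// containers table lock, using PostgreSQL's pg_stat_activity view.
func countContainerLockWaiters(db *sql.DB) (int, error) {
	var n int
	err := db.QueryRow(`
		SELECT count(*)
		  FROM pg_stat_activity
		 WHERE state <> 'idle'
		   AND query ILIKE 'LOCK TABLE containers%'`).Scan(&n)
	return n, err
}

func main() {
	// Hypothetical DSN for the API server database.
	db, err := sql.Open("postgres", "dbname=arvados_production sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	n, err := countContainerLockWaiters(db)
	fmt.Println(n, err)
}
```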
Updated by Peter Amstutz over 1 year ago
- Subject changed from Container table busy to Container table lock contention
Updated by Tom Clegg over 1 year ago
20447-less-table-locking @ f4667c5346cff5c91c6e75476d624c974c4857f0 -- developer-run-tests: #3624
wb1 retry developer-run-tests-apps-workbench-integration: #3911
Updated by Peter Amstutz over 1 year ago
I'm wondering if we could avoid taking the big lock on container request create/update, or at least defer it until the actual priority update happens?
Updated by Tom Clegg over 1 year ago
I think locking the table after doing anything else that can conflict with a table lock (like "select for update") will end up causing deadlock.
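To illustrate the failure mode (a sketch, not Arvados code): if two requests each take a row-level lock on containers first (as SELECT ... FOR UPDATE does, via a ROW SHARE table lock) and then ask for the EXCLUSIVE table lock, each one's EXCLUSIVE request conflicts with the other's held ROW SHARE, and PostgreSQL aborts one of them. Error handling is kept minimal for brevity.

```go
package main

import (
	"database/sql"
	"fmt"
	"sync"

	_ "github.com/lib/pq" // PostgreSQL driver (assumed available)
)

// session takes the same table-level lock a SELECT ... FOR UPDATE would take
// implicitly (ROW SHARE), waits for the other session to do the same, then
// tries to escalate to the EXCLUSIVE table lock.
func session(db *sql.DB, name string, barrier *sync.WaitGroup, out chan<- string) {
	tx, err := db.Begin()
	if err != nil {
		out <- name + ": " + err.Error()
		return
	}
	defer tx.Rollback()

	if _, err := tx.Exec(`LOCK TABLE containers IN ROW SHARE MODE`); err != nil {
		out <- name + ": " + err.Error()
		return
	}

	barrier.Done()
	barrier.Wait() // both sessions now hold ROW SHARE

	// Each EXCLUSIVE request conflicts with the other session's held ROW
	// SHARE lock, so the sessions wait on each other; after deadlock_timeout
	// PostgreSQL aborts one of them with "deadlock detected".
	if _, err := tx.Exec(`LOCK TABLE containers IN EXCLUSIVE MODE`); err != nil {
		out <- name + ": " + err.Error()
		return
	}
	out <- name + ": acquired EXCLUSIVE lock"
}

func main() {
	// Hypothetical DSN; run against a scratch database that has a
	// "containers" table, never against a production cluster.
	db, err := sql.Open("postgres", "dbname=scratch sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	var barrier sync.WaitGroup
	barrier.Add(2)
	out := make(chan string, 2)
	go session(db, "A", &barrier, out)
	go session(db, "B", &barrier, out)
	fmt.Println(<-out)
	fmt.Println(<-out)
}
```

This is why the table lock has to be taken up front, before any row-level work in the same transaction, rather than deferred until the priority update.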
I've updated the CR controller with a similar attribute whitelist, though, since name/description/etc updates don't cause cascading priority updates.
20447-less-table-locking @ b68d2d12f4dff73d371297688d84f32289c06907 -- developer-run-tests: #3625
I think the main thing here is for f4667c534 to remove the table lock in the case of cost updates, which happen frequently on every running container, whereas other container and CR updates typically happen O(1) times per container.
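For readers following along, a rough sketch (in Go, not the actual Rails changes in f4667c534 / b68d2d12f) of the idea: take the exclusive table lock only when the update touches attributes that can trigger cascading priority updates. The attribute list and function names here are illustrative, not the exact whitelist used on the branch.

```go
package main

import (
	"database/sql"
	"fmt"
)

// lockFreeAttrs lists attributes whose updates never cascade into priority
// changes on other rows (illustrative only; see the branch for the real list).
var lockFreeAttrs = map[string]bool{
	"cost":        true,
	"log":         true,
	"progress":    true,
	"name":        true,
	"description": true,
	"properties":  true,
}

// needsTableLock reports whether any of the changed attributes can affect
// scheduling priority and therefore requires serializing with other writers.
func needsTableLock(changed []string) bool {
	for _, attr := range changed {
		if !lockFreeAttrs[attr] {
			return true
		}
	}
	return false
}

// updateContainer sketches the write path: frequent cost-only updates skip
// the exclusive table lock entirely, while state/priority changes still take it.
func updateContainer(tx *sql.Tx, changed []string) error {
	if needsTableLock(changed) {
		if _, err := tx.Exec(`LOCK TABLE containers IN EXCLUSIVE MODE`); err != nil {
			return err
		}
	}
	// ... perform the row update itself here, as before ...
	return nil
}

func main() {
	fmt.Println(needsTableLock([]string{"cost"}))              // false: no table lock needed
	fmt.Println(needsTableLock([]string{"state", "priority"})) // true: take the table lock
}
```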
Updated by Peter Amstutz over 1 year ago
Tom Clegg wrote in #note-16:
I think locking the table after doing anything else that can conflict with a table lock (like "select for update") will end up causing deadlock.
I've updated the CR controller with a similar attribute whitelist, though, since name/description/etc updates don't cause cascading priority updates.
20447-less-table-locking @ b68d2d12f4dff73d371297688d84f32289c06907 -- developer-run-tests: #3625
I think the main thing here is for f4667c534 to remove the table lock in the case of cost updates, which happen frequently on every running container, whereas other container and CR updates typically happen O(1) times per container.
You are probably right. Let's merge this and we can collect more data to see if it solves the main performance issues.
Updated by Tom Clegg over 1 year ago
- % Done changed from 0 to 100
- Status changed from In Progress to Resolved
Applied in changeset arvados|a1df219027bce409d2ad659b6033f5d76fb540ea.