Bug #20447 (Closed): Container table lock contention
Added by Peter Amstutz over 1 year ago.
Updated over 1 year ago.
Release relationship: Auto
Description
I need to look at postgres status to see what is going on, but I have a theory:
- We put a "big lock" around the containers table: all write operations have to take an exclusive lock on the table (unfortunately this includes container operations that don't affect priorities, but it may be possible to narrow that) (#20240)
- This means all container operations now have to wait to get the lock
- We also added a feature whereby each time a "running containers probe" happens, it updates the container's "cost" field on the API server (#19967)
- This means writes to the containers table now happen much more frequently, not just when containers change state
- As a result, requests involving containers are forced to wait in line, filling up the request queue and making everything slow.
On the plus side, the dispatcher's behavior of backing off when it sees 500 errors seems to be keeping the system load from spiraling out of control.
This also suggests a short-term fix for system load: increase ProbeInterval.
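To make the theory above concrete, here is a minimal sketch (not taken from the Arvados source; the uuid and values are placeholders) of what the "big lock" pattern looks like at the PostgreSQL level. EXCLUSIVE mode conflicts with the ROW EXCLUSIVE lock that every INSERT/UPDATE/DELETE takes, so while one of these transactions is open, every other writer on containers queues behind it; plain SELECTs can still proceed.

```sql
-- Illustrative only: a priority-affecting write that serializes all other writers.
BEGIN;
-- Waits until no conflicting lock is held; once granted, every other
-- INSERT/UPDATE/DELETE on containers must wait for this transaction to finish.
LOCK TABLE containers IN EXCLUSIVE MODE;
-- The cascading priority recalculation would happen here, e.g.:
UPDATE containers SET priority = 1000 WHERE uuid = 'zzzzz-dz642-000000000000000';
COMMIT;
```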
Update:
Some supporting evidence:
- After Lucas adjusted ProbeInterval this morning, the concurrent requests are down.
- I was able to connect to the database and look at active queries. After changing ProbeInterval it is still the case that about 30%-40% of pending queries are "LOCK TABLE containers IN EXCLUSIVE MODE".
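For reference, one way to inspect this from psql (assuming PostgreSQL 9.6 or later, which exposes the wait_event columns in pg_stat_activity):

```sql
-- List non-idle sessions, what they are waiting on, and how long they have been running.
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS runtime,
       left(query, 60)     AS query
  FROM pg_stat_activity
 WHERE state <> 'idle'
 ORDER BY query_start;
```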
- Subject changed from Container table busy to Container table lock contention
- Assigned To set to Tom Clegg
- Status changed from New to In Progress
I'm wondering if we could avoid taking the big lock on container request create/update, or at least defer it until the actual priority update happens?
I think locking the table after doing anything else that can conflict with a table lock (like "select for update") will end up causing deadlock.
I've updated the CR controller with a similar attribute whitelist, though, since name/description/etc updates don't cause cascading priority updates.
20447-less-table-locking @ b68d2d12f4dff73d371297688d84f32289c06907 -- developer-run-tests: #3625
I think the main thing here is that f4667c534 removes the table lock in the case of cost updates, which happen frequently for every running container, whereas other container and CR updates typically happen O(1) times per container.
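To make the deadlock concern concrete, here is an illustrative PostgreSQL interleaving (not from the ticket; the uuids are placeholders). SELECT ... FOR UPDATE takes a ROW SHARE lock on the table, which conflicts with EXCLUSIVE, so two transactions that each grab row locks first and the table lock second can end up waiting on each other:

```sql
-- t1, session A:
BEGIN;
SELECT * FROM containers WHERE uuid = 'zzzzz-dz642-aaaaaaaaaaaaaaa' FOR UPDATE;
-- A now holds a ROW SHARE lock on containers plus a row lock.

-- t2, session B:
BEGIN;
SELECT * FROM containers WHERE uuid = 'zzzzz-dz642-bbbbbbbbbbbbbbb' FOR UPDATE;
-- B now also holds a ROW SHARE lock on containers.

-- t3, session A:
LOCK TABLE containers IN EXCLUSIVE MODE;
-- Blocks: EXCLUSIVE conflicts with B's ROW SHARE.

-- t4, session B:
LOCK TABLE containers IN EXCLUSIVE MODE;
-- Blocks on A's ROW SHARE; A and B now wait on each other and
-- PostgreSQL's deadlock detector aborts one of the transactions.
```

Taking the table lock first, before any row-level locks, avoids this ordering problem.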
Tom Clegg wrote in #note-16:
> I think locking the table after doing anything else that can conflict with a table lock (like "select for update") will end up causing deadlock.
> I've updated the CR controller with a similar attribute whitelist, though, since name/description/etc updates don't cause cascading priority updates.
> 20447-less-table-locking @ b68d2d12f4dff73d371297688d84f32289c06907 -- developer-run-tests: #3625
> I think the main thing here is that f4667c534 removes the table lock in the case of cost updates, which happen frequently for every running container, whereas other container and CR updates typically happen O(1) times per container.
You are probably right. Let's merge this and we can collect more data to see if it solves the main performance issues.
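For context, the distinction above sketched at the SQL level (illustrative only, not the actual implementation): in contrast to the locked transaction sketched in the description, a cost-only update can be an ordinary row-level UPDATE, so frequent probe-driven cost updates no longer serialize behind the table lock.

```sql
-- Frequent cost update from the running-container probe: no table lock,
-- only the usual row-level lock taken by UPDATE.
BEGIN;
UPDATE containers SET cost = 1.2345 WHERE uuid = 'zzzzz-dz642-000000000000000';
COMMIT;
```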
- % Done changed from 0 to 100
- Status changed from In Progress to Resolved