Bug #20447

Container table lock contention

Added by Peter Amstutz over 1 year ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assigned To:
Category: API
Story points: -
Release relationship: Auto

Description

I need to look at postgres status to see what is going on, but I have a theory:

  1. We put a "big lock" around the containers table: all write operations have to take an exclusive lock on the whole table, which unfortunately includes container operations that don't affect priorities, though it may be possible to narrow that (#20240). (A sketch of the resulting pattern follows this list.)
  2. This means all container operations now have to wait to get the lock
  3. We also added a feature whereby each time a "running containers probe" happens, it updates the container's "cost" field on the API server (#19967)
  4. This means write operations on containers now happen much more frequently than just when containers change state
  5. As a result, requests involving containers are forced to wait in line, filling up the request queue and making everything slow.
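
To illustrate, here is a minimal sketch in plain SQL of the pattern described in steps 1 and 3; the real implementation in the API server differs, and the UUID and cost value here are made up:

    -- Every write transaction on containers first takes an exclusive table
    -- lock, so concurrent writers queue behind one another.
    BEGIN;
    LOCK TABLE containers IN EXCLUSIVE MODE;

    -- A "running containers probe" now lands here too, updating cost even
    -- though nothing about priority has changed (#19967).
    UPDATE containers SET cost = 1.23 WHERE uuid = 'zzzzz-dz642-0123456789abcde';

    COMMIT;

Note that in PostgreSQL, EXCLUSIVE mode still allows plain SELECTs, but it conflicts with every other write lock, so all write transactions on the table serialize behind whichever one currently holds it.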

On the plus side, the dispatcher's behavior of backing off when it sees 500 errors seems to be keeping the system load from spiraling out of control.

This also suggests a short-term fix for system load: increase ProbeInterval.

Update:

Some supporting evidence:

  1. After Lucas adjusted ProbeInterval this morning, the number of concurrent requests is down.
  2. I was able to connect to the database and look at the active queries. After changing ProbeInterval, it is still the case that about 30-40% of pending queries are "LOCK TABLE containers IN EXCLUSIVE mode" (see the example query below).
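
For reference, a query along these lines against pg_stat_activity is one way to see how much of the active workload is blocked on that statement (a sketch, not necessarily the exact query I ran):

    -- Group active sessions by wait type and count how many are sitting on
    -- the containers table lock statement.
    SELECT wait_event_type,
           count(*) AS sessions,
           count(*) FILTER (WHERE query ILIKE 'LOCK TABLE containers%') AS container_lock_waits
    FROM pg_stat_activity
    WHERE state = 'active'
    GROUP BY wait_event_type
    ORDER BY sessions DESC;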

Subtasks 1 (0 open, 1 closed)

Task #20460: Review 20447-less-table-locking - Resolved - Peter Amstutz - 05/01/2023