Drop indices concurrently on background updates (#18091)

Otherwise the drop can get stuck behind other long-running queries and lock out
all other queries.

This caused problems in v1.122.0 as we added an index to the `events` table
in #17948, but that got interrupted and so next time we ran the
background update we needed to delete the half-finished index. However,
that got blocked behind some long-running queries and then locked other
queries out (stopping workers from even starting).
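For context, a minimal sketch of the difference this makes (not the Synapse code itself; it assumes PostgreSQL and psycopg2, and the connection string and index name are placeholders). A plain DROP INDEX takes an ACCESS EXCLUSIVE lock on the table, so if it queues behind a long-running query it blocks every other query on that table; DROP INDEX CONCURRENTLY waits for conflicting transactions without holding that lock, but it has to run outside a transaction block:

import psycopg2

# Placeholder DSN and index name, for illustration only.
conn = psycopg2.connect("dbname=synapse")

# DROP INDEX CONCURRENTLY (like CREATE INDEX CONCURRENTLY) cannot run inside
# a transaction block, so the connection must be in autocommit mode.
conn.autocommit = True
try:
    with conn.cursor() as cur:
        cur.execute("DROP INDEX CONCURRENTLY IF EXISTS some_half_built_index")
finally:
    conn.close()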
Erik Johnston 2025-01-20 17:14:06 +00:00 committed by GitHub
parent 24c4d82aeb
commit 48db0c2d6c
2 changed files with 3 additions and 2 deletions

changelog.d/18091.bugfix (new file)

@@ -0,0 +1 @@
+Fix rare race where on upgrade to v1.122.0 a long-running database upgrade could block new events from being received or sent.

synapse/storage/background_updates.py

@@ -789,7 +789,7 @@ class BackgroundUpdater:
                 # we may already have a half-built index. Let's just drop it
                 # before trying to create it again.
-                sql = "DROP INDEX IF EXISTS %s" % (index_name,)
+                sql = "DROP INDEX CONCURRENTLY IF EXISTS %s" % (index_name,)
                 logger.debug("[SQL] %s", sql)
                 c.execute(sql)
@@ -814,7 +814,7 @@ class BackgroundUpdater:
                 if replaces_index is not None:
                     # We drop the old index as the new index has now been created.
-                    sql = f"DROP INDEX IF EXISTS {replaces_index}"
+                    sql = f"DROP INDEX CONCURRENTLY IF EXISTS {replaces_index}"
                     logger.debug("[SQL] %s", sql)
                     c.execute(sql)
             finally:
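Worth noting (an observation about the change, not part of the patch): PostgreSQL rejects DROP INDEX CONCURRENTLY inside a transaction block, the same restriction as for CREATE INDEX CONCURRENTLY, so these statements presumably rely on the surrounding method already putting the connection into autocommit mode for the concurrent index creation. SQLite has no CONCURRENTLY clause at all, so this appears to be the PostgreSQL-specific code path.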