I have a messages table with a few million records in it. My Rails app includes a query on most pages to count the number of unread messages to show the user. This query – and all queries of the messages table – is unchanged, and it was working fine until yesterday.
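The query in question is just a filtered count, something along these lines (schema details simplified):

-- per-user unread count; column names are illustrative
SELECT count(*)
FROM   messages
WHERE  user_id = $1
AND    read = false;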
Yesterday, I created a new messages column and ran update_all to update the new column on each of my few million messages records. Simple enough, and I’ve done this plenty of times before, albeit on a smaller number of records.
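For context, update_all issues this as a single UPDATE touching every row, roughly (new_column standing in for the real column name):

-- what the update_all boils down to on the SQL side
UPDATE messages SET new_column = 'value';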
However, now every query to COUNT or SELECT messages takes 30+ seconds to return. Before the migration and update_all, it was only taking 100ms or so to do the COUNTs or SELECTs.
I’ve tried a number of things, but the queries remain super slow. I’ve tried to REINDEX messages and I’ve tried to VACUUM messages, but neither helped significantly.
The rest of the database tables are still working as usual, but this issue with the messages table is keeping everything running at a crawl. Does anyone have any ideas about why this is happening, and what else I can try to fix it?
Answer
I’ve tried to REINDEX messages and I’ve tried to VACUUM messages, but neither helped significantly.
You missed the important ANALYZE, possibly in the form of VACUUM ANALYZE. (Typically, autovacuum should take care of this, though!)
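A minimal first step is to refresh the planner statistics by hand:

ANALYZE messages;
-- or reclaim dead-tuple space in the same pass:
VACUUM ANALYZE messages;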
Updating all rows creates as many dead tuples in the physical relation: in Postgres, an UPDATE writes a new row version and leaves the old one behind, so the table roughly doubled in size on disk. To free up that space, this might be the rare occasion for a
VACUUM FULL ANALYZE messages;
Takes an exclusive lock on the table! And since you run counts per user a lot, you might as well use CLUSTER:
CLUSTER messages USING that_index_sorting_by_user_id_and_the_read_flag
For fast counts per user, that appropriate (multicolumn) index is key either way. What exactly it should look like depends on your undisclosed schema and queries.
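Assuming a user_id column and a boolean read flag (guesses, since the schema isn’t shown), it could be as simple as:

-- multicolumn index matching the per-user unread count
-- (and usable as the CLUSTER target above)
CREATE INDEX messages_user_id_read_idx ON messages (user_id, read);

With that in place, the unread count per user can be served with an index-only scan once the table has been vacuumed.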
All that said, you shouldn’t see such a dramatic drop in performance. Maybe you exhausted a resource. Like, just not enough RAM to keep the bigger indexes cached.
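To check whether bloat (or plain size) is the culprit, the standard catalog views help; this is a generic diagnostic, not specific to your setup:

-- total on-disk size of the table plus its indexes
SELECT pg_size_pretty(pg_total_relation_size('messages'));

-- live vs. dead row versions as tracked by the statistics collector
SELECT n_live_tup, n_dead_tup
FROM   pg_stat_user_tables
WHERE  relname = 'messages';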