I need to delete about 2 million rows from my PostgreSQL database. I have a list of IDs to delete, but every approach I've tried has been taking days.
I tried putting the IDs in a table and deleting in batches of 100. Four days later, this is still running, with only 297268 rows deleted. (I select 100 IDs from the ID table, delete the rows WHERE id IN that list, then delete those 100 IDs from the ID table.)
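For reference, each iteration of that loop was roughly equivalent to this data-modifying CTE (a sketch, assuming the ID table is named ids and the target table tbl; requires PostgreSQL 9.1 or later):

WITH batch AS (
   DELETE FROM ids
   WHERE  id IN (SELECT id FROM ids LIMIT 100)  -- grab the next 100 IDs
   RETURNING id
)
DELETE FROM tbl
WHERE  id IN (SELECT id FROM batch);  -- delete the matching rows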
I also tried a single statement:

DELETE FROM tbl WHERE id IN (SELECT id FROM ids);
That's taking forever, too. It's hard to gauge how long it will take, since I can't see its progress until it's done, but the query was still running after 2 days.
I'm just looking for the most effective way to delete rows from a table when I know the specific IDs to delete, and there are millions of them.
It all depends …
This assumes no concurrent write access to the involved tables; otherwise you may have to lock tables exclusively, or this route may not be for you at all.
Drop all indexes (except possibly the ones needed for the delete itself).
Recreate them afterwards. That's typically much faster than updating indexes incrementally for every deleted row.
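A minimal sketch, assuming a hypothetical secondary index tbl_foo_idx on column foo:

DROP INDEX tbl_foo_idx;                 -- drop before the mass delete
-- ... run the big DELETE ...
CREATE INDEX tbl_foo_idx ON tbl (foo);  -- rebuild from scratch afterwards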
Check whether you have triggers that can safely be dropped or disabled temporarily.
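If only user-defined triggers need to pause, something like this works (DISABLE TRIGGER USER leaves internal constraint triggers alone):

ALTER TABLE tbl DISABLE TRIGGER USER;  -- disable all user triggers on tbl
-- ... run the big DELETE ...
ALTER TABLE tbl ENABLE TRIGGER USER;   -- re-enable them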
Do foreign keys reference your table? Can they be dropped, at least temporarily?
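A sketch with a hypothetical referencing table child_tbl; look up the real constraint name with \d tbl in psql:

ALTER TABLE child_tbl DROP CONSTRAINT child_tbl_tbl_id_fkey;
-- ... run the big DELETE ...
ALTER TABLE child_tbl ADD CONSTRAINT child_tbl_tbl_id_fkey
   FOREIGN KEY (tbl_id) REFERENCES tbl (id);  -- re-adding revalidates all rows once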
Depending on your autovacuum settings it may help to run VACUUM ANALYZE before the operation.
Some of the points listed in the related chapter of the manual, Populating a Database, may also be of use, depending on your setup.
If you delete large portions of the table and the rest fits into RAM, the fastest and easiest way may be this:
BEGIN;                             -- typically faster and safer wrapped in a single transaction
SET LOCAL temp_buffers = '1000MB'; -- enough to hold the temp table

CREATE TEMP TABLE tmp AS
SELECT t.*
FROM   tbl t
LEFT   JOIN del_list d USING (id)
WHERE  d.id IS NULL;               -- copy surviving rows into temporary table
-- ORDER BY ?                      -- optionally order favorably while being at it

TRUNCATE tbl;                      -- empty table - truncate is very fast for big tables

INSERT INTO tbl
TABLE  tmp;                        -- insert back surviving rows

COMMIT;
This way you don’t have to recreate views, foreign keys or other depending objects. And you get a pristine (sorted) table without bloat.
Read about the temp_buffers setting in the manual. Note that temp_buffers can only be changed before the first use of temporary tables within a session, hence the SET LOCAL at the top of the transaction. This method is fast as long as the table fits into memory, or at least most of it. The transaction wrapper defends against losing data if your server crashes in the middle of this operation.
Run VACUUM ANALYZE afterwards. Or (typically not necessary after going the TRUNCATE route) run VACUUM FULL ANALYZE to bring the table to minimum size (it takes an exclusive lock). For big tables, consider the alternatives pg_repack or similar.
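Both variants, spelled out for the single table (VACUUM FULL rewrites the whole table and blocks all access while it runs):

VACUUM ANALYZE tbl;       -- reclaim space for reuse, refresh statistics
VACUUM FULL ANALYZE tbl;  -- also return disk space to the OS; exclusive lock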
For small tables, a simple DELETE instead of TRUNCATE is often faster:
DELETE FROM tbl t USING del_list d WHERE t.id = d.id;
Per the notes for TRUNCATE in the manual:

TRUNCATE cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. […]

TRUNCATE will not fire any ON DELETE triggers that might exist for the tables.
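So if other tables still reference tbl, the TRUNCATE in the recipe above has to include them in the same command, and their surviving rows have to be saved and restored the same way (child_tbl is a hypothetical name):

TRUNCATE tbl, child_tbl;  -- referencing tables must be truncated together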