
Best way to delete millions of rows by ID

I need to delete about 2 million rows from my PG database. I have a list of IDs that I need to delete, but every approach I've tried takes days.

I tried putting the IDs in a table and deleting in batches of 100. Four days later, this is still running, with only 297,268 rows deleted. (I select 100 IDs from the ID table, delete from the big table where the ID is IN that list, then delete those 100 IDs from the ID table.)
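Roughly, each batch looks like this (a sketch reconstructing the loop described above; the table names ids, tbl, and batch are placeholders):

BEGIN;
CREATE TEMP TABLE batch ON COMMIT DROP AS
SELECT id FROM ids LIMIT 100;                        -- grab the next 100 IDs

DELETE FROM tbl WHERE id IN (SELECT id FROM batch);  -- delete them from the big table
DELETE FROM ids WHERE id IN (SELECT id FROM batch);  -- mark this batch as done
COMMIT;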

I also tried:

DELETE FROM tbl WHERE id IN (select * from ids)

That's taking forever, too. It's hard to gauge how long it will take, since I can't see its progress until it's done, but the query was still running after 2 days.

I'm just looking for the most effective way to delete from a table when I know the specific IDs to delete, and there are millions of them.



It all depends …

  • This assumes no concurrent write access to the involved tables. Otherwise you may have to lock tables exclusively, or this route may not be for you at all.

  • Drop all indexes (possibly except the ones needed for the delete itself).
    Recreate them afterwards. That's typically much faster than incremental updates to indexes. (Sketches for this and the next two points appear at the end of this answer.)

  • Check whether you have triggers that can safely be dropped or disabled temporarily.

  • Do foreign keys reference your table? Can they be dropped, at least temporarily?

  • Depending on your autovacuum settings, it may help to run VACUUM ANALYZE before the operation.

  • Some of the points listed in the related manual chapter, Populating a Database, may also be of use, depending on your setup.

  • If you delete large portions of the table and the rest fits into RAM, the fastest and easiest way may be this:

BEGIN; -- typically faster and safer wrapped in a single transaction

SET LOCAL temp_buffers = '1000MB'; -- enough to hold the temp table

CREATE TEMP TABLE tmp AS
SELECT t.*
FROM   tbl t
LEFT   JOIN del_list d USING (id)
WHERE  d.id IS NULL;      -- copy surviving rows into temporary table
-- ORDER BY ?             -- optionally order favorably while being at it

TRUNCATE tbl;             -- empty table - truncate is very fast for big tables

INSERT INTO tbl
TABLE  tmp;               -- insert back surviving rows

COMMIT;

This way, you don't have to recreate views, foreign keys, or other dependent objects, and you get a pristine (sorted) table without bloat.

Read about the temp_buffers setting in the manual. This method is fast as long as the table, or at least most of it, fits into memory. The transaction wrapper defends against losing data if your server crashes in the middle of the operation.

Run VACUUM ANALYZE afterwards, or (typically not necessary after going the TRUNCATE route) VACUUM FULL ANALYZE to bring the table to minimum size (it takes an exclusive lock). For big tables, consider the alternatives CLUSTER / pg_repack or similar.
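For reference, those maintenance commands might look like this (a sketch; tbl_pkey is a placeholder index name):

VACUUM ANALYZE tbl;           -- reclaim space for reuse and update statistics
VACUUM FULL ANALYZE tbl;      -- rewrite the table to minimum size (exclusive lock)
CLUSTER tbl USING tbl_pkey;   -- rewrite the table in index order (exclusive lock)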

For small tables, a simple DELETE instead of TRUNCATE is often faster:

DELETE FROM tbl t
USING  del_list d
WHERE  t.id = d.id;

Read the Notes section for TRUNCATE in the manual. In particular (as Pedro also pointed out in his comment):

TRUNCATE cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. […]


TRUNCATE will not fire any ON DELETE triggers that might exist for the tables.
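So if other tables reference tbl, something along these lines would be needed (a sketch; referencing_tbl is a placeholder name):

TRUNCATE tbl, referencing_tbl;   -- truncate all referencing tables in the same command
-- or:
TRUNCATE tbl CASCADE;            -- also truncates every table with an FK reference to tbl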
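And the preparatory steps from the list at the top of this answer might look like this; a sketch only, with hypothetical index, column, and constraint names:

-- Drop indexes not needed for the delete itself, recreate them afterwards:
DROP INDEX tbl_foo_idx;                  -- hypothetical index name
-- ... mass delete ...
CREATE INDEX tbl_foo_idx ON tbl (foo);   -- hypothetical column name

-- Temporarily disable all user-defined triggers on the table, re-enable afterwards:
ALTER TABLE tbl DISABLE TRIGGER USER;
-- ... mass delete ...
ALTER TABLE tbl ENABLE TRIGGER USER;

-- Temporarily drop a referencing foreign key, re-add it afterwards:
ALTER TABLE other_tbl DROP CONSTRAINT other_tbl_tbl_id_fkey;   -- hypothetical constraint
-- ... mass delete ...
ALTER TABLE other_tbl ADD CONSTRAINT other_tbl_tbl_id_fkey
   FOREIGN KEY (tbl_id) REFERENCES tbl (id);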
