
Updating sequence for primary key in postgres for a table with a lot of deletions

I have the following problem: I have a table with a lot of row deletions and inserts, and I would like to assign an id number to each of the current rows in the table. Currently I'm trying to do it with:

DROP SEQUENCE IF EXISTS market_orders_seq;
CREATE SEQUENCE market_orders_seq CACHE 1;
CREATE TABLE market_orders (
    id int NOT NULL DEFAULT nextval('market_orders_seq') PRIMARY KEY,
    typ varchar(5), tag varchar(30), owner_id int, owner_tag varchar(5),
    amount int, price int, market_id int
);
ALTER SEQUENCE market_orders_seq OWNED BY market_orders.id;

But if I understand correctly, sequences are monotonic and don't go back down when I delete rows, so I ran into the problem of the ids inflating quite fast. What is a solution to this problem? I would like to use the first unused id for my inserts, but I don't know how to do it.


Answer

While this is technically feasible, I would not actually recommend going this way.

First, an integer column can store values up to about 2 billion (2^31 - 1), which you probably won't hit, and you can still switch to bigint, which goes up to about 9.2 x 10^18 (2^63 - 1).
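
If you ever did approach the limit, the column can be widened in place. A minimal sketch (note that rewriting a large table holds an exclusive lock for the duration):

ALTER TABLE market_orders ALTER COLUMN id TYPE bigint;
-- the sequence created above defaults to bigint already, so it usually needs no change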

Also, identifying the gaps requires scanning the table for each and every insert, which is inefficient (and the larger the table gets, the less efficient it is). If you want to try it anyway, the idea is to pick the smallest id whose successor is missing:

insert into market_orders (id, typ, ...)
select
    min(mo.id) + 1,   -- smallest id whose successor is missing, plus one
    'foo',
    ...
from market_orders mo
where not exists (select 1 from market_orders mo1 where mo1.id = mo.id + 1);
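
One caveat with that pattern: on an empty table min(id) is NULL, so nothing gets inserted, and it only finds gaps above the current minimum id. A sketch of a fallback using coalesce (the non-id values here are placeholders):

insert into market_orders (id, typ, tag, owner_id, owner_tag, amount, price, market_id)
select
    coalesce(min(mo.id) + 1, 1),   -- fall back to 1 when the table is empty
    'buy', 'some-tag', 1, 'abc', 10, 100, 1
from market_orders mo
where not exists (select 1 from market_orders mo1 where mo1.id = mo.id + 1);

Also keep in mind that two concurrent inserts can find the same gap, in which case one of them will fail on the primary key constraint.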

Side note: you should be using the serial (or bigserial) datatype, so you don't need to manage the sequence yourself.
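
A sketch of the same table using bigserial, which creates and owns the sequence implicitly (on PostgreSQL 10+ an identity column is the more standard spelling):

CREATE TABLE market_orders (
    id bigserial PRIMARY KEY,
    typ varchar(5), tag varchar(30), owner_id int, owner_tag varchar(5),
    amount int, price int, market_id int
);
-- PostgreSQL 10+ alternative for the id column:
-- id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY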
