I’m trying to write a trigger for my table. The table has 50 columns, 3 of which are of timestamp type. In the future I will insert new rows that may be duplicates of existing ones, so I need to compute a hash of each row. My idea is to compute the row’s hash on each insertion and check whether it already exists; that’s why I’m writing a trigger. I’d like to compute the hash and write it to the last column of my main table (I added that column when I created the table).
One problem: I must not hash the whole row. The 3 timestamp columns have to be excluded from the hash.
I’ve just started and got stuck: I don’t know how to exclude those columns when hashing.
CREATE OR REPLACE FUNCTION check_row_hash() RETURNS TRIGGER AS $mergetrigger$
BEGIN
    -- As I understand it, I can get the row's data using NEW.column_name,
    -- but how do I exclude those 3 columns and get the others dynamically?
    -- I can use this query to get the needed columns:
    select column_name
    from user_tab_columns
    where table_name = 'main_table'
      and data_type not in ('date', 'timestamp')
    -- But what should I do next?
END;

CREATE TRIGGER check_inserted_row
    BEFORE INSERT ON main_table
    FOR EACH ROW
    EXECUTE PROCEDURE check_row_hash();
Answer
If the column names are always the same, a redirection through a JSON value makes this a bit dynamic:
CREATE OR REPLACE FUNCTION check_row_hash() RETURNS TRIGGER AS $mergetrigger$
declare
    l_row_data jsonb;
    l_row_text text;
BEGIN
    -- Convert the new row to JSONB, then remove the timestamp columns by name
    l_row_data := to_jsonb(new) - 'updated_at' - 'created_at';

    -- Concatenate the remaining column values into a single string
    select string_agg(t.v, ',')
      into l_row_text
      from jsonb_each_text(l_row_data) as t(k,v);

    new.hash_value := md5(l_row_text);
    return new;
END;
$mergetrigger$ LANGUAGE plpgsql;
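The function then needs to be attached to the table; the CREATE TRIGGER statement from your question can be used unchanged:

CREATE TRIGGER check_inserted_row
    BEFORE INSERT ON main_table
    FOR EACH ROW
    EXECUTE PROCEDURE check_row_hash();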
In the function above, hash_value is the column in the target table that should store the generated hash (I used an MD5 hash).
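To see what the JSONB redirection does, here is a minimal standalone example (the column names are made up for illustration):

-- Turn a row into jsonb, then strip two keys with the "-" operator
SELECT to_jsonb(t) - 'updated_at' - 'created_at' AS remaining
FROM (VALUES (1, 'some data', now(), now()))
     AS t(id, payload, created_at, updated_at);
-- remaining => {"id": 1, "payload": "some data"}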
You could make this completely dynamic by querying pg_attribute with something along the lines of:
select attname
from pg_attribute
where attnum > 0
  and not attisdropped
  and atttypid in ('date'::regtype, 'timestamp'::regtype)
  and attrelid = TG_RELID;
Then use the list of columns from that query to remove the keys from the JSONB value.
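A minimal sketch of that fully dynamic variant, assuming the same hash_value column and the date/timestamp types from above (adjust the type list to your schema, e.g. add 'timestamptz'::regtype if needed):

CREATE OR REPLACE FUNCTION check_row_hash() RETURNS TRIGGER AS $mergetrigger$
declare
    l_row_data jsonb;
    l_row_text text;
    l_col text;
BEGIN
    l_row_data := to_jsonb(new);

    -- Remove every date/timestamp column of the triggering table,
    -- looked up at run time via TG_RELID
    for l_col in
        select attname
        from pg_attribute
        where attnum > 0
          and not attisdropped
          and atttypid in ('date'::regtype, 'timestamp'::regtype)
          and attrelid = TG_RELID
    loop
        l_row_data := l_row_data - l_col;
    end loop;

    -- Optionally also exclude the hash column itself:
    -- l_row_data := l_row_data - 'hash_value';

    select string_agg(t.v, ',')
      into l_row_text
      from jsonb_each_text(l_row_data) as t(k,v);

    new.hash_value := md5(l_row_text);
    return new;
END;
$mergetrigger$ LANGUAGE plpgsql;

Because the column list is resolved through TG_RELID, the same function can be reused as a trigger on several tables.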