
Can’t alter column type to a shorter VARCHAR in Redshift. Getting “target column size should be greater or equal to current maximum column size”

I’m trying to run

ALTER TABLE schema_name.table_name
    ALTER COLUMN column_name TYPE VARCHAR(256);

in Amazon Redshift, but I’m getting this error:

SQL Error [500310] [0A000]: Amazon Invalid operation: cannot alter column "column_name" of relation "table_name", target column size 256 should be greater or equal to current maximum column size 879;

I’ve already tried

update schema_name.table_name
    set column_name = CAST(column_name AS VARCHAR(256))

and

update schema_name.table_name
    set column_name = SUBSTRING(column_name, 1, 256)

in order to reduce the current maximum stored value length of 879 characters, but I still get the same error. I know I can work around it by creating a new VARCHAR(256) column with the same data, but is there another way?


Answer

I believe the answer is "No, you cannot reduce a varchar column's length." You need to copy the data somewhere else (a new table or a new column) to reduce the size. Redshift doesn't know whether the existing data will fit in the new size without performing such a copy itself, so it refuses the in-place change.

You can work around it like this:

1. Create a new column of the size you want.
2. Update the table to copy the data from the old column into the new column, truncating strings where necessary.
3. Rename the old column to something unique.
4. Rename the new column to the old column's name.
5. Drop the uniquely named column.

You will have the same column names as before, but the max length of the column in question will be reduced. Note, however, that the default column order will be different.
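Sketched in SQL, using the table and column names from the question (the `column_name_new` and `column_name_old` identifiers are placeholders, not anything Redshift requires):

```sql
-- 1. Add a new column with the smaller type
ALTER TABLE schema_name.table_name ADD COLUMN column_name_new VARCHAR(256);

-- 2. Copy the data across, truncating values longer than 256 characters
UPDATE schema_name.table_name
    SET column_name_new = SUBSTRING(column_name, 1, 256);

-- 3. Swap the columns by renaming
ALTER TABLE schema_name.table_name RENAME COLUMN column_name TO column_name_old;
ALTER TABLE schema_name.table_name RENAME COLUMN column_name_new TO column_name;

-- 4. Drop the original wide column
ALTER TABLE schema_name.table_name DROP COLUMN column_name_old;
```

Each statement commits separately unless wrapped in a transaction, so consider running the sequence inside `BEGIN; ... COMMIT;` to avoid leaving the table in an intermediate state if a step fails.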

User contributions licensed under: CC BY-SA