I’m creating a foreign table (foo_table) in database_a. foo_table lives in database_b. foo_table has an enum (bar_type) as one of its columns. Because this enum is defined in database_b, the creation of the foreign table fails in database_a: database_a doesn’t understand the column type. Running the following in database_a:
CREATE FOREIGN TABLE foo_table (id integer NOT NULL, bar bar_type) SERVER database_b;
One gets the error:
ERROR: type "bar_type" does not exist
I could just create a copy of bar_type in database_a, but this feels duplicative and like a possible future source of inconsistency. Would anyone have thoughts on best practices for handling this?
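For concreteness, the duplicate-type workaround would look something like the following; the enum labels are invented for illustration and would have to mirror bar_type in database_b exactly:

-- In database_a: recreate the remote enum locally so the foreign table compiles.
-- 'active' and 'retired' are hypothetical labels; use the real ones from database_b.
CREATE TYPE bar_type AS ENUM ('active', 'retired');

CREATE FOREIGN TABLE foo_table (
    id integer NOT NULL,
    bar bar_type
) SERVER database_b;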
Answer
I summarize the answer I received from the pgsql-general mailing list:
- A foreign table is basically an ad-hoc remote data source for the local database, so the onus is on the local database to maintain its definition of the remote table, whether it’s in another (or even the same) PostgreSQL server or a completely different data source, especially as the local definition can be different from the remote one.
- This does mean that there’s no simple way of ensuring that any remote dependencies are present on the local server. PostgreSQL 9.5 will provide the IMPORT FOREIGN SCHEMA command; however, it is limited to table/view definitions (see the first sketch after this list).
- If the definition of the enum becomes inconsistent, we’d expect errors to occur when retrieving rows containing values not known on the local side.
- One potential way around this is to declare the foreign table’s columns as text rather than enums; you would lose some error checking on the local side, but the remote server would still enforce validity whenever you stored something (see the second sketch after this list).
- However, it is not clear whether this workaround behaves desirably for WHERE conditions on the enum column (the third sketch after this list shows one way to check). I will be checking the performance of WHERE conditions and will update this answer when I have more details.
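For completeness, here is a rough sketch of the PostgreSQL 9.5 command mentioned above. The schema names (public on both sides) are assumptions for illustration, and note that it would import only the table definition, not the bar_type enum itself:

-- Hypothetical usage in database_a; "public" is an assumed schema name on both sides.
IMPORT FOREIGN SCHEMA public
    LIMIT TO (foo_table)
    FROM SERVER database_b
    INTO public;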
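The text-column variant of the foreign table from the question would be a minimal change:

-- Declare the enum column as plain text locally; no local bar_type is needed.
-- database_b still rejects invalid values whenever you INSERT or UPDATE.
CREATE FOREIGN TABLE foo_table (
    id integer NOT NULL,
    bar text
) SERVER database_b;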
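One way to check the WHERE behaviour yourself is EXPLAIN VERBOSE, which (assuming the server uses postgres_fdw) shows whether the condition appears in the Remote SQL sent to database_b or is applied locally after fetching every row; 'active' is again a made-up label:

-- If the plan shows the condition inside "Remote SQL", the filter is pushed down;
-- otherwise all rows are transferred and filtered on the local side.
EXPLAIN VERBOSE
SELECT id, bar
FROM foo_table
WHERE bar = 'active';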
All credit goes to the fine persons on the pgsql-general mailing list.