CREATE SOURCE: PostgreSQL

CREATE SOURCE connects Materialize to an external system you want to read data from, and provides details about how to decode and interpret that data.

Materialize supports PostgreSQL (11+) as a data source. To connect to a PostgreSQL instance, you first need to create a connection that specifies access and authentication parameters. Once created, a connection is reusable across multiple CREATE SOURCE statements.

WARNING! Before creating a PostgreSQL source, you must set up logical replication in the upstream database. For step-by-step instructions, see the integration guide for your PostgreSQL service: AlloyDB, Amazon RDS, Amazon Aurora, Azure DB, Google Cloud SQL, Self-hosted.

Syntax

CREATE SOURCE [IF NOT EXISTS] src_name
  [IN CLUSTER cluster_name]
  FROM POSTGRES CONNECTION connection_name (
    PUBLICATION 'publication_name'
    [, TEXT COLUMNS (column_name [, ...])]
  )
  { FOR ALL TABLES
  | FOR TABLES (table_name [AS subsrc_name] [, ...])
  | FOR SCHEMAS (schema_name [, ...]) }
  [EXPOSE PROGRESS AS progress_subsource_name]
  [with_options]

with_options

WITH ( RETAIN HISTORY = FOR retention_period )
Field Use
src_name The name for the source.
IF NOT EXISTS Do nothing (except issuing a notice) if a source with the same name already exists.
IN CLUSTER cluster_name The cluster to maintain this source.
CONNECTION connection_name The name of the PostgreSQL connection to use in the source. For details on creating connections, check the CREATE CONNECTION documentation page.
FOR ALL TABLES Create subsources for all tables in the publication.
FOR SCHEMAS ( schema_list ) Create subsources for specific schemas in the publication.
FOR TABLES ( table_list ) Create subsources for specific tables in the publication.
EXPOSE PROGRESS AS progress_subsource_name The name of the progress collection for the source. If this is not specified, the progress collection will be named <src_name>_progress. For more information, see Monitoring source progress.
RETAIN HISTORY FOR retention_period Private preview. This option has known performance or stability issues and is under active development. Duration for which Materialize retains historical data, which is useful to implement durable subscriptions. Accepts positive interval values (e.g. '1hr'). Default: 1s.
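
As a sketch of the with_options syntax above, a source that retains one hour of history (the retention period value is arbitrary) could be created as:

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES
  WITH (RETAIN HISTORY FOR '1hr');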

CONNECTION options

Field Value Description
PUBLICATION text Required. The PostgreSQL publication (the replication data set containing the tables to be streamed to Materialize).
TEXT COLUMNS A list of names Decode data as text for specific columns that contain PostgreSQL types that are unsupported in Materialize.

Features

Change data capture

This source uses PostgreSQL’s native replication protocol to continually ingest changes resulting from INSERT, UPDATE and DELETE operations in the upstream database — a process also known as change data capture.

For this reason, you must configure the upstream PostgreSQL database to support logical replication before creating a source in Materialize. For step-by-step instructions, see the integration guide for your PostgreSQL service: AlloyDB, Amazon RDS, Amazon Aurora, Azure DB, Google Cloud SQL, Self-hosted.
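For reference, on a self-hosted PostgreSQL instance the upstream preparation typically looks something like the following sketch; the exact steps vary by service and are covered in the guides above:

-- Run against the upstream PostgreSQL database.
-- Enable logical replication (requires a server restart on self-hosted PostgreSQL).
ALTER SYSTEM SET wal_level = logical;

-- Create a publication containing the tables you want to replicate.
CREATE PUBLICATION mz_source FOR TABLE table_1, table_2;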

Creating a source

To avoid creating multiple replication slots in the upstream PostgreSQL database and minimize the required bandwidth, Materialize ingests the raw replication stream data for a specific set of tables in your publication.

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;

When you define a source, Materialize will automatically:

  1. Create a replication slot in the upstream PostgreSQL database (see PostgreSQL replication slots).

    The name of the replication slot created by Materialize is prefixed with materialize_ for easy identification, and can be looked up in mz_internal.mz_postgres_sources.

    SELECT id, replication_slot FROM mz_internal.mz_postgres_sources;
    
       id   |             replication_slot
    --------+----------------------------------------------
     u8     | materialize_7f8a72d0bf2a4b6e9ebc4e61ba769b71
    
  2. Create a subsource for each original table in the publication.

    SHOW SOURCES;
    
             name         |   type
    ----------------------+-----------
     mz_source            | postgres
     mz_source_progress   | progress
     table_1              | subsource
     table_2              | subsource
    

    And perform an initial, snapshot-based sync of the tables in the publication before it starts ingesting change events.

  3. Incrementally update any materialized or indexed views that depend on the source as change events stream in, as a result of INSERT, UPDATE and DELETE operations in the upstream PostgreSQL database.
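
    For example, a materialized view over one of the replicated tables stays incrementally up to date as change events arrive. A minimal sketch, assuming the replicated table table_1 has a hypothetical status column:

    CREATE MATERIALIZED VIEW table_1_counts AS
      SELECT status, count(*) AS total
      FROM table_1
      GROUP BY status;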

It’s important to note that the schema metadata is captured when the source is initially created, and is validated against the upstream schema upon restart. If you create new tables upstream after creating a PostgreSQL source and want to replicate them to Materialize, the source must be dropped and recreated.

PostgreSQL replication slots

Each source ingests the raw replication stream data for all tables in the specified publication using a single replication slot. This allows you to minimize the performance impact on the upstream database, as well as reuse the same source across multiple materializations.

WARNING! Make sure to delete any replication slots if you stop using Materialize, or if either the Materialize or PostgreSQL instances crash. To look up the name of the replication slot created for each source, use mz_internal.mz_postgres_sources.

If you delete all objects that depend on a source without also dropping the source, the upstream replication slot will linger and continue to accumulate data so that the source can resume in the future. To avoid unbounded disk space usage, make sure to use DROP SOURCE or manually delete the replication slot.
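If you need to delete a replication slot manually, a minimal sketch (using the slot name looked up in mz_internal.mz_postgres_sources):

-- In Materialize: look up the slot name.
-- SELECT id, replication_slot FROM mz_internal.mz_postgres_sources;

-- In the upstream PostgreSQL database: drop the slot.
SELECT pg_drop_replication_slot('<replication_slot_name>');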

For PostgreSQL 13+, it is recommended that you set a reasonable value for max_slot_wal_keep_size to limit the amount of storage used by replication slots.
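As a sketch for a self-hosted instance (the size value is illustrative; managed services typically expose this as a database parameter instead):

-- Run against the upstream PostgreSQL database (13+).
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();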

PostgreSQL schemas

CREATE SOURCE will attempt to create each upstream table in the same schema as the source. This may lead to naming collisions if, for example, you are replicating schema1.table_1 and schema2.table_1. In such cases, use the FOR TABLES clause to provide aliases for each upstream table, or to specify an alternative destination schema in Materialize.

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR TABLES (schema1.table_1 AS s1_table_1, schema2.table_1 AS s2_table_1);

Monitoring source progress

By default, PostgreSQL sources expose progress metadata as a subsource that you can use to monitor source ingestion progress. The name of the progress subsource can be specified when creating a source using the EXPOSE PROGRESS AS clause; otherwise, it will be named <src_name>_progress.
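For example (the progress subsource name here is arbitrary):

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES
  EXPOSE PROGRESS AS mz_source_replication_progress;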

The following metadata is available for each source as a progress subsource:

Field Type Meaning
lsn uint8 The last Log Sequence Number (LSN) consumed from the upstream PostgreSQL replication stream.

This metadata can be queried using:

SELECT lsn
FROM <src_name>_progress;

The reported LSN should increase as Materialize consumes new WAL records from the upstream PostgreSQL database. For more details on monitoring source ingestion progress and debugging related issues, see Troubleshooting.

Known limitations

Schema changes

Materialize supports schema changes in the upstream database as follows:

Compatible schema changes

  • Adding columns to tables. Materialize will not ingest new columns added upstream unless you use DROP SOURCE to first drop the affected subsource, and then add the table back to the source using ALTER SOURCE...ADD SUBSOURCE.

  • Dropping columns that were added after the source was created. These columns are never ingested, so you can drop them without issue.

  • Adding or removing NOT NULL constraints on columns that were nullable when the source was created.

Incompatible schema changes

All other schema changes to upstream tables will set the corresponding subsource into an error state, which prevents you from reading from the subsource.

To handle incompatible schema changes, use DROP SOURCE and ALTER SOURCE...ADD SUBSOURCE to first drop the affected subsource, and then add the table back to the source. When you add the subsource, it will have the updated schema from the corresponding upstream table.
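A sketch of that sequence, reusing the table_1 subsource name from the earlier examples:

-- Drop the subsource whose upstream schema changed.
DROP SOURCE table_1;

-- Re-add it; the new subsource reflects the table's current upstream schema.
ALTER SOURCE mz_source ADD SUBSOURCE table_1;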

Publication membership

PostgreSQL’s logical replication API does not provide a signal when users remove tables from publications. Because of this, Materialize relies on periodic checks to determine if a table has been removed from a publication, at which time it generates an irrevocable error, preventing any values from being read from the table.

However, it is possible to remove a table from a publication and then re-add it before Materialize notices that the table was removed. In this case, Materialize can no longer provide any consistency guarantees about the data it presents from the table and, unfortunately, is wholly unaware that this occurred.

To mitigate this issue, if you need to drop and re-add a table to a publication, first drop the table's subsource in Materialize using the DROP SOURCE command, and only then re-add the table to the publication.
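A sketch of that order of operations, reusing the example names from above:

-- 1. In Materialize: drop the subsource before the table is re-added upstream.
DROP SOURCE table_1;

-- 2. In the upstream PostgreSQL database: remove and re-add the table.
ALTER PUBLICATION mz_source DROP TABLE table_1;
ALTER PUBLICATION mz_source ADD TABLE table_1;

-- 3. In Materialize: add the table back to the source.
ALTER SOURCE mz_source ADD SUBSOURCE table_1;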

Supported types

Materialize natively supports the following PostgreSQL types (including the array type for each of the types):

  • bool
  • bpchar
  • bytea
  • char
  • date
  • daterange
  • float4
  • float8
  • int2
  • int2vector
  • int4
  • int4range
  • int8
  • int8range
  • interval
  • json
  • jsonb
  • numeric
  • numrange
  • oid
  • text
  • time
  • timestamp
  • timestamptz
  • tsrange
  • tstzrange
  • uuid
  • varchar

Replicating tables that contain unsupported data types is possible via the TEXT COLUMNS option. The specified columns will be treated as text, and will thus not offer the expected PostgreSQL type features. For example:

  • enum: the implicit ordering of the original PostgreSQL enum type is not preserved, as Materialize will sort values as text.

  • money: the resulting text value cannot be cast back to e.g. numeric, since PostgreSQL adds typical currency formatting to the output.

Truncation

Tables replicated into Materialize should not be truncated. If a replicated table is truncated, the whole source becomes inaccessible and will not produce any data until it is recreated. Instead of truncating, remove all rows from the table using an unqualified DELETE:

DELETE FROM t;

Inherited tables

When using PostgreSQL table inheritance, PostgreSQL serves data from SELECTs as if the inheriting tables’ data is also present in the inherited table. However, both PostgreSQL’s logical replication and COPY only present data written to the tables themselves, i.e. the inheriting data is not treated as part of the inherited table.

PostgreSQL sources use logical replication and COPY to ingest table data, so inheriting tables’ data will only be ingested as part of the inheriting table, i.e. in Materialize, the data will not be returned when serving SELECTs from the inherited table.

You can mimic PostgreSQL’s SELECT behavior with inherited tables by creating a materialized view that unions data from the inherited and inheriting tables (using UNION ALL). However, if new tables later inherit from the table, their data will not be available in the view. You will need to add the new inheriting tables via ADD SUBSOURCE and create a new view (materialized or not) that also unions in the new tables.
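A minimal sketch, assuming an upstream table parent with columns a and b, and a table child that inherits from it (all names hypothetical):

CREATE MATERIALIZED VIEW parent_with_children AS
  SELECT a, b FROM parent
  UNION ALL
  SELECT a, b FROM child;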

Examples

WARNING! Before creating a PostgreSQL source, you must set up logical replication in the upstream database. For step-by-step instructions, see the integration guide for your PostgreSQL service: AlloyDB, Amazon RDS, Amazon Aurora, Azure DB, Google Cloud SQL, Self-hosted.

Creating a connection

A connection describes how to connect and authenticate to an external system you want Materialize to read data from.

Once created, a connection is reusable across multiple CREATE SOURCE statements. For more details on creating connections, check the CREATE CONNECTION documentation page.

CREATE SECRET pgpass AS '<POSTGRES_PASSWORD>';

CREATE CONNECTION pg_connection TO POSTGRES (
    HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
    PORT 5432,
    USER 'postgres',
    PASSWORD SECRET pgpass,
    SSL MODE 'require',
    DATABASE 'postgres'
);

If your PostgreSQL server is not exposed to the public internet, you can tunnel the connection through an AWS PrivateLink service or an SSH bastion host.

CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
    SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
    AVAILABILITY ZONES ('use1-az1', 'use1-az4')
);
CREATE SECRET pgpass AS '<POSTGRES_PASSWORD>';

CREATE CONNECTION pg_connection TO POSTGRES (
    HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
    PORT 5432,
    USER 'postgres',
    PASSWORD SECRET pgpass,
    AWS PRIVATELINK privatelink_svc,
    DATABASE 'postgres'
);

For step-by-step instructions on creating AWS PrivateLink connections and configuring an AWS PrivateLink service to accept connections from Materialize, check this guide.

CREATE CONNECTION ssh_connection TO SSH TUNNEL (
    HOST 'bastion-host',
    PORT 22,
    USER 'materialize'
);
CREATE CONNECTION pg_connection TO POSTGRES (
    HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
    PORT 5432,
    SSH TUNNEL ssh_connection,
    DATABASE 'postgres'
);

For step-by-step instructions on creating SSH tunnel connections and configuring an SSH bastion server to accept connections from Materialize, check this guide.

Creating a source

Create subsources for all tables included in the PostgreSQL publication

CREATE SOURCE mz_source
    FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
    FOR ALL TABLES;

Create subsources for all tables from specific schemas included in the PostgreSQL publication

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR SCHEMAS (public, project);

Create subsources for specific tables included in the PostgreSQL publication

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR TABLES (table_1, table_2 AS alias_table_2);

Handling unsupported types

If the publication contains tables that use data types unsupported by Materialize, use the TEXT COLUMNS option to decode data as text for the affected columns. This option expects the upstream names of the replicated table and column (i.e. as defined in your PostgreSQL database).

CREATE SOURCE mz_source
  FROM POSTGRES CONNECTION pg_connection (
    PUBLICATION 'mz_source',
    TEXT COLUMNS (upstream_table_name.column_of_unsupported_type)
  ) FOR ALL TABLES;

Handling errors and schema changes

To handle upstream schema changes or errored subsources, use the DROP SOURCE syntax to drop the affected subsource, and then ALTER SOURCE...ADD SUBSOURCE to add the subsource back to the source.

-- List all subsources in mz_source
SHOW SUBSOURCES ON mz_source;

-- Get rid of an outdated or errored subsource
DROP SOURCE table_1;

-- Start ingesting the table with the updated schema or fix
ALTER SOURCE mz_source ADD SUBSOURCE table_1;

Adding subsources

When adding subsources to a PostgreSQL source, Materialize opens a temporary replication slot to snapshot the new subsources’ current states. After completing the snapshot, the table will be kept up-to-date, like all other tables in the publication.
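For example, to start ingesting a table that exists in the publication but was not included when the source was created (table_3 is a hypothetical name):

ALTER SOURCE mz_source ADD SUBSOURCE table_3;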

Dropping subsources

Dropping a subsource prevents Materialize from ingesting any data from it, in addition to dropping any state that Materialize previously had for the table.
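For example, to stop ingesting one of the subsources created above:

-- Use CASCADE if other objects depend on the subsource.
DROP SOURCE table_1;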
