Ingest data from Amazon RDS

This page shows you how to stream data from Amazon RDS for MySQL to Materialize using the MySQL source.

💡 Tip: For help getting started with your own data, you can schedule a free guided trial.

Before you begin

A. Configure Amazon RDS

1. Enable GTID-based binlog replication

Before creating a source in Materialize, you must configure Amazon RDS for GTID-based binlog replication. For guidance on enabling GTID-based binlog replication in RDS, see the Amazon RDS for MySQL documentation.

  1. Enable automated backups in your RDS instance by setting the backup retention period to a value greater than 0 to enable binary logging.

  2. Create a custom RDS parameter group.

    • Set Parameter group family to your MySQL version.
    • Set Type to DB Parameter Group.
  3. Edit the new parameter group to set the configuration parameters to the following values:

    Configuration parameter       | Value | Details
    ------------------------------+-------+------------------------------------------------------------
    log_bin_use_v1_row_events     | ON    | AWS Management Console equivalent to MySQL’s log_bin configuration parameter.
    binlog_format                 | ROW   | This configuration is deprecated as of MySQL 8.0.34. Newer versions of MySQL default to row-based logging.
    binlog_row_image              | FULL  |
    gtid-mode                     | ON    | AWS Management Console equivalent to MySQL’s gtid_mode configuration parameter.
    enforce_gtid_consistency      | ON    |
    replica_preserve_commit_order | ON    | Only required when connecting Materialize to a read-replica for replication, rather than the primary server.
  4. Associate the RDS parameter group to your database.

    Use the Apply Immediately option. The database must be rebooted in order for the parameter group association to take effect. Keep in mind that rebooting the RDS instance can affect database performance.

    Do not move on to the next step until the database Status is Available in the RDS Console.

  5. In addition to the step above, you must also ensure that binlog retention is set to a reasonable value. To check the current value of the binlog retention hours configuration parameter, connect to your RDS instance and run:

    CALL mysql.rds_show_configuration;
    

    If the value returned is NULL, or less than 168 (i.e. 7 days), run:

    CALL mysql.rds_set_configuration('binlog retention hours', 168);
    

    Although 7 days is a reasonable retention period, we recommend using the default MySQL retention period (30 days) so as not to compromise Materialize’s ability to resume replication in case of failures or restarts.

  6. To validate that all configuration parameters are set to the expected values after the above configuration changes, run:

    -- Validate "binlog retention hours" configuration parameter
    CALL mysql.rds_show_configuration;
    
    -- Validate parameter group configuration parameters
    SHOW VARIABLES WHERE variable_name IN (
      'log_bin',
      'binlog_format',
      'binlog_row_image',
      'gtid_mode',
      'enforce_gtid_consistency',
      'replica_preserve_commit_order'
    );
    

2. Create a user for replication

Once GTID-based binlog replication is enabled, we recommend creating a dedicated user for Materialize with sufficient privileges to manage replication.

  1. As a superuser, use mysql (or your preferred SQL client) to connect to your database.

  2. Create a dedicated user for Materialize, if you don’t already have one:

    CREATE USER 'materialize'@'%' IDENTIFIED BY '<password>';
    
    ALTER USER 'materialize'@'%' REQUIRE SSL;
    
  3. Grant the user permission to manage replication:

    GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO 'materialize'@'%';
    

    Once connected to your database, Materialize will take an initial snapshot of the tables in your MySQL server. SELECT privileges are required for this initial snapshot.

  4. Apply the changes:

    FLUSH PRIVILEGES;
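
    To double-check the setup before moving on, you can list the grants for the new user. This is an optional sanity check, not a required step:

    -- Optional: confirm the privileges granted to the replication user. The output
    -- should include SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE,
    -- REPLICATION CLIENT, and LOCK TABLES ON *.*.
    SHOW GRANTS FOR 'materialize'@'%';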
    

B. (Optional) Configure network security

NOTE: If you are prototyping and your RDS instance is publicly accessible, you can skip this step. For production scenarios, we recommend configuring one of the network security options below.

There are various ways to configure your database’s network to allow Materialize to connect:

  • Allow Materialize IPs: If your database is publicly accessible, you can configure your database’s security group to allow connections from a set of static Materialize IP addresses.

  • Use an SSH tunnel: If your database is running in a private network, you can use an SSH tunnel to connect Materialize to the database.

  • Use AWS PrivateLink: If your database is running in a private network, you can use AWS PrivateLink to connect Materialize to the database. For details, see AWS PrivateLink.

Allow Materialize IPs

  1. In the SQL Shell, or your preferred SQL client connected to Materialize, find the static egress IP addresses for the Materialize region you are running in:

    SELECT * FROM mz_egress_ips;
    
  2. In the RDS Console, add an inbound rule to your RDS security group for each IP address from the previous step.

    In each rule:

    • Set Type to MySQL.
    • Set Source to the IP address in CIDR notation.

Use AWS PrivateLink

AWS PrivateLink lets you connect Materialize to your RDS instance without exposing traffic to the public internet. To use AWS PrivateLink, you create a network load balancer in the same VPC as your RDS instance and a VPC endpoint service that Materialize connects to. The VPC endpoint service then routes requests from Materialize to RDS via the network load balancer.

NOTE: Materialize provides a Terraform module that automates the creation and configuration of AWS resources for a PrivateLink connection. For more details, see the Terraform module repository.
  1. Get the IP address of your RDS instance. You’ll need this address to register your RDS instance as the target for the network load balancer in the next step.

    To get the IP address of your RDS instance:

    1. Select your database in the RDS Console.

    2. Find your RDS endpoint under Connectivity & security.

    3. Use the dig or nslookup command to find the IP address that the endpoint resolves to:

      dig +short <RDS_ENDPOINT>
      
  2. Create a dedicated target group for your RDS instance.

    • Choose the IP addresses type.

    • Set the protocol and port to TCP and 3306.

    • Choose the same VPC as your RDS instance.

    • Use the IP address from the previous step to register your RDS instance as the target.

    Warning: The IP address of your RDS instance can change without notice. For this reason, it’s best to set up automation to regularly check the IP of the instance and update your target group accordingly. You can use a lambda function to automate this process - see Materialize’s Terraform module for AWS PrivateLink for an example. Another approach is to configure an EC2 instance as an RDS router for your network load balancer.

  3. Create a network load balancer.

    • For Network mapping, choose the same VPC as your RDS instance and select all of the availability zones and subnets that your RDS instance is in.

    • For Listeners and routing, set the protocol and port to TCP and 3306 and select the target group you created in the previous step.

  4. In the security group of your RDS instance, allow traffic from the network load balancer.

    If client IP preservation is disabled, the easiest approach is to add an inbound rule with the VPC CIDR of the network load balancer. If you don’t want to grant access to the entire VPC CIDR, you can add inbound rules for the private IP addresses of the load balancer subnets.

    • To find the VPC CIDR, go to your network load balancer and look under Network mapping.
    • To find the private IP addresses of the load balancer subnets, go to Network Interfaces, search for the name of the network load balancer, and look on the Details tab for each matching network interface.
  5. Create a VPC endpoint service.

    • For Load balancer type, choose Network and then select the network load balancer you created in the previous step.

    • After creating the VPC endpoint service, note its Service name. You’ll use this service name when connecting Materialize later (see the sketch after this list).

    Remark: Disabling Acceptance Required, while still strictly managing who can view your endpoint via IAM, allows Materialize to seamlessly recreate and migrate endpoints as we work to stabilize this feature.

  6. Go back to the target group you created for the network load balancer and make sure that the health checks are reporting the targets as healthy.
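
The Service name from step 5 is what Materialize references when you later create the source connection. Although the ingestion steps below only walk through the direct and SSH tunnel options, a PrivateLink setup would typically start from a connection object along the lines of the following sketch. The service name and availability zone IDs are placeholders; see the CREATE CONNECTION documentation for the exact syntax:

    -- Sketch only: the service name and availability zone IDs below are placeholders.
    CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
        SERVICE NAME 'com.amazonaws.vpce.<region>.vpce-svc-<service_id>',
        AVAILABILITY ZONES ('use1-az1', 'use1-az2')
    );

The MySQL connection you create later would then reference privatelink_svc instead of a public hostname.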

Use an SSH tunnel

To create an SSH tunnel from Materialize to your database, you launch an instance to serve as an SSH bastion host, configure the bastion host to allow traffic only from Materialize, and then configure your database’s private network to allow traffic from the bastion host.

NOTE: Materialize provides a Terraform module that automates the creation and configuration of resources for an SSH tunnel. For more details, see the Terraform module repository.
  1. Launch an EC2 instance to serve as your SSH bastion host.

    • Make sure the instance is publicly accessible and in the same VPC as your RDS instance.
    • Add a key pair and note the username. You’ll use this username when connecting Materialize to your bastion host.

    Warning: Auto-assigned public IP addresses can change in certain cases.

    For this reason, it’s best to associate an elastic IP address to your bastion host.

  2. Configure the SSH bastion host to allow traffic only from Materialize.

    1. In the SQL Shell, or your preferred SQL client connected to Materialize, get the static egress IP addresses for the Materialize region you are running in:

      SELECT * FROM mz_egress_ips;
      
    2. For each static egress IP, add an inbound rule to your SSH bastion host’s security group.

      In each rule:

      • Set Type to SSH.
      • Set Source to the IP address in CIDR notation.
  3. In the security group of your RDS instance, add an inbound rule to allow traffic from the SSH bastion host.

    • Set Type to All TCP.
    • Set Source to Custom and select the bastion host’s security group.

C. Ingest data in Materialize

1. (Optional) Create a cluster

NOTE: If you are prototyping and already have a cluster to host your MySQL source (e.g. quickstart), you can skip this step. For production scenarios, we recommend separating your workloads into multiple clusters for resource isolation.

In Materialize, a cluster is an isolated environment, similar to a virtual warehouse in Snowflake. When you create a cluster, you choose the size of its compute resource allocation based on the work you need the cluster to do, whether ingesting data from a source, computing always-up-to-date query results, serving results to clients, or a combination.

In this case, you’ll create a dedicated cluster for ingesting source data from your MySQL database.

  1. In the SQL Shell, or your preferred SQL client connected to Materialize, use the CREATE CLUSTER command to create the new cluster:

    CREATE CLUSTER ingest_mysql (SIZE = '200cc');
    
    SET CLUSTER = ingest_mysql;
    

    A cluster of size 200cc should be enough to process the initial snapshot of the tables in your MySQL database. For very large snapshots, consider using a larger size to speed up processing. Once the snapshot is finished, you can readjust the size of the cluster to fit the volume of changes being replicated from your upstream MySQL database.

2. Start ingesting data

Now that you’ve configured your database network and created an ingestion cluster, you can connect Materialize to your MySQL database and start ingesting data. The exact steps depend on your networking configuration, so start by selecting the relevant option.

Allow Materialize IPs

  1. In the SQL Shell, or your preferred SQL client connected to Materialize, use the CREATE SECRET command to securely store the password for the materialize MySQL user you created earlier:

    CREATE SECRET mysqlpass AS '<PASSWORD>';
    
  2. Use the CREATE CONNECTION command to create a connection object with access and authentication details for Materialize to use:

    CREATE CONNECTION mysql_connection TO MYSQL (
        HOST '<host>',
        PORT 3306,
        USER 'materialize',
        PASSWORD SECRET mysqlpass,
        SSL MODE REQUIRED
    );
    
    • Replace <host> with your MySQL endpoint.
  3. Use the CREATE SOURCE command to connect Materialize to your RDS instance and start ingesting data:

    CREATE SOURCE mz_source
      FROM MYSQL CONNECTION mysql_connection
      FOR ALL TABLES;
    
    • By default, the source will be created in the active cluster; to use a different cluster, use the IN CLUSTER clause.

    • To ingest data from specific schemas or tables, use the FOR SCHEMAS (<schema1>,<schema2>) or FOR TABLES (<table1>, <table2>) options instead of FOR ALL TABLES.

    • To handle unsupported data types, use the TEXT COLUMNS or IGNORE COLUMNS options. Check out the reference documentation for guidance. A sketch combining these options follows this list.

  4. After source creation, you can handle upstream schema changes by dropping and recreating the source.
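
For illustration, here is a sketch that combines these options, assuming a hypothetical schema named shop with tables orders and customers, where orders has an enum column status that you want ingested as text:

    -- Sketch only: the schema, table, and column names are hypothetical.
    CREATE SOURCE mz_source
      IN CLUSTER ingest_mysql
      FROM MYSQL CONNECTION mysql_connection (
        TEXT COLUMNS (shop.orders.status)
      )
      FOR TABLES (shop.orders, shop.customers);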

Use an SSH tunnel

  1. In the SQL Shell, or your preferred SQL client connected to Materialize, use the CREATE CONNECTION command to create an SSH tunnel connection:

    CREATE CONNECTION ssh_connection TO SSH TUNNEL (
        HOST '<SSH_BASTION_HOST>',
        PORT <SSH_BASTION_PORT>,
        USER '<SSH_BASTION_USER>'
    );
    
    • Replace <SSH_BASTION_HOST> and <SSH_BASTION_PORT> with the public IP address and port of the SSH bastion host you created earlier.

    • Replace <SSH_BASTION_USER> with the username for the key pair you created for your SSH bastion host.

  2. Get Materialize’s public keys for the SSH tunnel connection:

    SELECT * FROM mz_ssh_tunnel_connections;
    
  3. Log in to your SSH bastion host and add Materialize’s public keys to the authorized_keys file, for example:

    # Command for Linux
    echo "ssh-ed25519 AAAA...76RH materialize" >> ~/.ssh/authorized_keys
    echo "ssh-ed25519 AAAA...hLYV materialize" >> ~/.ssh/authorized_keys
    
  4. Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the VALIDATE CONNECTION command:

    VALIDATE CONNECTION ssh_connection;
    

    If no validation error is returned, move to the next step.

  5. Use the CREATE SECRET command to securely store the password for the materialize MySQL user you created earlier:

    CREATE SECRET mysqlpass AS '<PASSWORD>';
    
  6. Use the CREATE CONNECTION command to create another connection object, this time with database access and authentication details for Materialize to use:

    CREATE CONNECTION mysql_connection TO MYSQL (
        HOST '<host>',
        USER 'materialize',
        PASSWORD SECRET mysqlpass,
        SSH TUNNEL ssh_connection
    );
    
    • Replace <host> with your MySQL endpoint.
  7. Use the CREATE SOURCE command to connect Materialize to your RDS instance and start ingesting data:

    CREATE SOURCE mz_source
      FROM MYSQL CONNECTION mysql_connection
      FOR ALL TABLES;
    
    • By default, the source will be created in the active cluster; to use a different cluster, use the IN CLUSTER clause.

    • To ingest data from specific schemas or tables, use the FOR SCHEMAS (<schema1>,<schema2>) or FOR TABLES (<table1>, <table2>) options instead of FOR ALL TABLES.

    • To handle unsupported data types, use the TEXT COLUMNS or IGNORE COLUMNS options. Check out the reference documentation for guidance.

3. Monitor the ingestion status

Before it starts consuming the replication stream, Materialize takes a snapshot of the relevant tables. Until this snapshot is complete, Materialize won’t have the same view of your data as your MySQL database.

In this step, you’ll first verify that the source is running and then check the status of the snapshotting process.

  1. Back in the SQL client connected to Materialize, use the mz_source_statuses table to check the overall status of your source:

    WITH
      source_ids AS
      (SELECT id FROM mz_sources WHERE name = 'mz_source')
    SELECT *
    FROM
      mz_internal.mz_source_statuses
        JOIN
          (
            SELECT referenced_object_id
            FROM mz_internal.mz_object_dependencies
            WHERE
              object_id IN (SELECT id FROM source_ids)
            UNION SELECT id FROM source_ids
          )
          AS sources
        ON mz_source_statuses.id = sources.referenced_object_id;
    

    For each subsource, make sure the status is running. If you see stalled or failed, there’s likely a configuration issue for you to fix. Check the error field for details and fix the issue before moving on. Also, if the status of any subsource is starting for more than a few minutes, contact our team.

  2. Once the source is running, use the mz_source_statistics table to check the status of the initial snapshot:

    WITH
      source_ids AS
      (SELECT id FROM mz_sources WHERE name = 'mz_source')
    SELECT sources.referenced_object_id AS id, mz_sources.name, snapshot_committed
    FROM
      mz_internal.mz_source_statistics
        JOIN
          (
            SELECT object_id, referenced_object_id
            FROM mz_internal.mz_object_dependencies
            WHERE
              object_id IN (SELECT id FROM source_ids)
            UNION SELECT id, id FROM source_ids
          )
          AS sources
        ON mz_source_statistics.id = sources.referenced_object_id
        JOIN mz_sources ON mz_sources.id = sources.referenced_object_id;
    

    object_id | snapshot_committed
    ----------|------------------
     u144     | t
    (1 row)
    

    Once snapshot_committed is t, move on to the next step. Snapshotting can take anywhere from a few minutes to several hours, depending on the size of your dataset and the size of the cluster the source is running in.

4. Right-size the cluster

After the snapshotting phase, Materialize starts ingesting change events from the MySQL replication stream. For this work, Materialize generally performs well with a 100cc replica, so you can resize the cluster accordingly.

  1. Still in a SQL client connected to Materialize, use the ALTER CLUSTER command to downsize the cluster to 100cc:

    ALTER CLUSTER ingest_mysql SET (SIZE '100cc');
    

    Behind the scenes, this command adds a new 100cc replica and removes the 200cc replica.

  2. Use the SHOW CLUSTER REPLICAS command to check the status of the new replica:

    SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_mysql';
    

         cluster     | replica |  size  | ready
    -----------------+---------+--------+-------
     ingest_mysql    | r1      | 100cc  | t
    (1 row)
    

Next steps

With Materialize ingesting your MySQL data into durable storage, you can start exploring the data, computing real-time results that stay up-to-date as new data arrives, and serving results efficiently.
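
For example, a minimal sketch of that workflow, assuming a replicated table named purchases with customer_id and amount columns (your table and column names will differ), might look like this:

    -- Sketch only: the table and column names are hypothetical.
    -- A view defines the transformation over the replicated table.
    CREATE VIEW purchase_totals AS
      SELECT customer_id, sum(amount) AS total_spend
      FROM purchases
      GROUP BY customer_id;

    -- An index keeps the results of the view incrementally up to date and fast to query.
    CREATE INDEX purchase_totals_idx ON purchase_totals (customer_id);

In practice, you would typically create views and indexes in a cluster separate from ingest_mysql, so that query workloads do not compete with ingestion for resources.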
