Ingest data from Google Cloud SQL
This page shows you how to stream data from Google Cloud SQL for MySQL to Materialize using the MySQL source.
Before you begin
- Make sure you are running MySQL 5.7 or higher. Materialize uses GTID-based binary log (binlog) replication, which is not available in older versions of MySQL.

- Ensure you have access to your MySQL instance via the `mysql` client, or your preferred SQL client.
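For example, a quick way to confirm the version requirement from the `mysql` client (or any SQL client connected to your instance):

```sql
-- Check the MySQL server version; it should report 5.7 or higher.
SELECT version();
```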
A. Configure Google Cloud SQL
1. Enable GTID-based binlog replication
Before creating a source in Materialize, you must configure Google Cloud SQL for MySQL for GTID-based binlog replication. This requires the following configuration changes:
| Configuration parameter | Value | Details |
|---|---|---|
| `log_bin` | `ON` | |
| `binlog_format` | `ROW` | This configuration is deprecated as of MySQL 8.0.34. Newer versions of MySQL default to row-based logging. |
| `binlog_row_image` | `FULL` | |
| `gtid_mode` | `ON` | |
| `enforce_gtid_consistency` | `ON` | |
| `replica_preserve_commit_order` | `ON` | Only required when connecting Materialize to a read-replica for replication, rather than the primary server. |
For guidance on enabling GTID-based binlog replication in Cloud SQL, see the Cloud SQL documentation.
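Once the flags are applied and the instance has restarted, you can sanity-check the configuration from the `mysql` client using standard MySQL system variables:

```sql
-- Each of these should match the values in the table above.
SELECT @@log_bin, @@binlog_format, @@binlog_row_image,
       @@gtid_mode, @@enforce_gtid_consistency;
```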
2. Create a user for replication
Once GTID-based binlog replication is enabled, we recommend creating a dedicated user for Materialize with sufficient privileges to manage replication.
- As a superuser, use `mysql` (or your preferred SQL client) to connect to your database.

- Create a dedicated user for Materialize, if you don’t already have one:

  ```sql
  CREATE USER 'materialize'@'%' IDENTIFIED BY '<password>';
  ALTER USER 'materialize'@'%' REQUIRE SSL;
  ```

- Grant the user permission to manage replication:

  ```sql
  GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO 'materialize'@'%';
  ```

  Once connected to your database, Materialize will take an initial snapshot of the tables in your MySQL server. `SELECT` privileges are required for this initial snapshot.

- Apply the changes:

  ```sql
  FLUSH PRIVILEGES;
  ```
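To double-check that the `materialize` user ended up with the expected privileges, you can inspect its grants. This is an optional verification step, not something Materialize requires:

```sql
-- Lists the privileges granted to the replication user.
SHOW GRANTS FOR 'materialize'@'%';
```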
B. (Optional) Configure network security
There are various ways to configure your database’s network to allow Materialize to connect:
- Allow Materialize IPs: If your database is publicly accessible, you can configure your database’s firewall to allow connections from a set of static Materialize IP addresses.

- Use an SSH tunnel: If your database is running in a private network, you can use an SSH tunnel to connect Materialize to the database.
Select the option that works best for you.
**Allow Materialize IPs**

- In the SQL Shell, or your preferred SQL client connected to Materialize, find the static egress IP addresses for the Materialize region you are running in:

  ```sql
  SELECT * FROM mz_egress_ips;
  ```

- Update your Google Cloud SQL firewall rules to allow traffic from each IP address from the previous step.
**Use an SSH tunnel**

To create an SSH tunnel from Materialize to your database, you launch an instance to serve as an SSH bastion host, configure the bastion host to allow traffic only from Materialize, and then configure your database’s private network to allow traffic from the bastion host.

- Launch a GCE instance to serve as your SSH bastion host.

  - Make sure the instance is publicly accessible and in the same VPC as your database.
  - Add a key pair and note the username. You’ll use this username when connecting Materialize to your bastion host.
  - Make sure the VM has a static public IP address. You’ll use this IP address when connecting Materialize to your bastion host.

- Configure the SSH bastion host to allow traffic only from Materialize.

  - In the SQL Shell, or your preferred SQL client connected to Materialize, get the static egress IP addresses for the Materialize region you are running in:

    ```sql
    SELECT * FROM mz_egress_ips;
    ```

  - Update your SSH bastion host’s firewall rules to allow traffic from each IP address from the previous step.

- Update your Google Cloud SQL firewall rules to allow traffic from the SSH bastion host.
C. Ingest data in Materialize
1. (Optional) Create a cluster
If you’re prototyping and already have a cluster to host your MySQL source (e.g. `quickstart`), you can skip this step. For production scenarios, we recommend separating your workloads into multiple clusters for resource isolation.
In Materialize, a cluster is an isolated environment, similar to a virtual warehouse in Snowflake. When you create a cluster, you choose the size of its compute resource allocation based on the work you need the cluster to do, whether ingesting data from a source, computing always-up-to-date query results, serving results to clients, or a combination.
In this case, you’ll create a dedicated cluster for ingesting source data from your MySQL database.
- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CLUSTER` command to create the new cluster:

  ```sql
  CREATE CLUSTER ingest_mysql (SIZE = '200cc');
  SET CLUSTER = ingest_mysql;
  ```

  A cluster of size `200cc` should be enough to process the initial snapshot of the tables in your MySQL database. For very large snapshots, consider using a larger size to speed up processing. Once the snapshot is finished, you can readjust the size of the cluster to fit the volume of changes being replicated from your upstream MySQL database.
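If you want to confirm that the new cluster exists and that your session is now using it, a quick optional check looks like this (assuming the `ingest_mysql` name used above):

```sql
-- List all clusters in the region; ingest_mysql should appear.
SHOW CLUSTERS;

-- Show the active cluster for the current session.
SHOW CLUSTER;
```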
2. Start ingesting data
Now that you’ve configured your database network, you can connect Materialize to your MySQL database and start ingesting data. The exact steps depend on your networking configuration, so start by selecting the relevant option.
**Allow Materialize IPs**

- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE SECRET` command to securely store the password for the `materialize` MySQL user you created earlier:

  ```sql
  CREATE SECRET mysqlpass AS '<PASSWORD>';
  ```

- Use the `CREATE CONNECTION` command to create a connection object with access and authentication details for Materialize to use:

  ```sql
  CREATE CONNECTION mysql_connection TO MYSQL (
      HOST <host>,
      PORT 3306,
      USER 'materialize',
      PASSWORD SECRET mysqlpass,
      SSL MODE REQUIRED
  );
  ```

  - Replace `<host>` with your MySQL endpoint.
Use the
CREATE SOURCE
command to connect Materialize to your Azure instance and start ingesting data:CREATE SOURCE mz_source FROM mysql CONNECTION mysql_connection FOR ALL TABLES;
-
By default, the source will be created in the active cluster; to use a different cluster, use the
IN CLUSTER
clause. -
To ingest data from specific schemas or tables, use the
FOR SCHEMAS (<schema1>,<schema2>)
orFOR TABLES (<table1>, <table2>)
options instead ofFOR ALL TABLES
. -
To handle unsupported data types, use the
TEXT COLUMNS
orIGNORE COLUMNS
options. Check out the reference documentation for guidance.
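For illustration, here is a sketch that combines these options: it ingests only two tables and decodes one column of an unsupported type as text. The database, table, and column names (`shop.orders`, `shop.customers`, `shop.orders.status`) are hypothetical placeholders, so substitute objects that exist in your MySQL schema:

```sql
-- Hypothetical database, table, and column names; replace them with
-- objects that exist in your MySQL database.
CREATE SOURCE mz_source_subset
  FROM MYSQL CONNECTION mysql_connection (
    TEXT COLUMNS (shop.orders.status)  -- decode this column as text
  )
  FOR TABLES (shop.orders, shop.customers);
```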
After source creation, you can handle upstream schema changes by dropping and recreating the source.
**Use an SSH tunnel**

- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CONNECTION` command to create an SSH tunnel connection:

  ```sql
  CREATE CONNECTION ssh_connection TO SSH TUNNEL (
      HOST '<SSH_BASTION_HOST>',
      PORT <SSH_BASTION_PORT>,
      USER '<SSH_BASTION_USER>'
  );
  ```

  - Replace `<SSH_BASTION_HOST>` and `<SSH_BASTION_PORT>` with the public IP address and port of the SSH bastion host you created earlier.
  - Replace `<SSH_BASTION_USER>` with the username for the key pair you created for your SSH bastion host.

- Get Materialize’s public keys for the SSH tunnel connection:

  ```sql
  SELECT * FROM mz_ssh_tunnel_connections;
  ```

- Log in to your SSH bastion host and add Materialize’s public keys to the `authorized_keys` file, for example:

  ```bash
  # Command for Linux
  echo "ssh-ed25519 AAAA...76RH materialize" >> ~/.ssh/authorized_keys
  echo "ssh-ed25519 AAAA...hLYV materialize" >> ~/.ssh/authorized_keys
  ```
- Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the `VALIDATE CONNECTION` command:

  ```sql
  VALIDATE CONNECTION ssh_connection;
  ```

  If no validation error is returned, move to the next step.

- Use the `CREATE SECRET` command to securely store the password for the `materialize` MySQL user you created earlier:

  ```sql
  CREATE SECRET mysqlpass AS '<PASSWORD>';
  ```
- Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:

  ```sql
  CREATE CONNECTION mysql_connection TO MYSQL (
      HOST '<host>',
      SSH TUNNEL ssh_connection
  );
  ```

  - Replace `<host>` with your MySQL endpoint.
- Use the `CREATE SOURCE` command to connect Materialize to your MySQL instance and start ingesting data:

  ```sql
  CREATE SOURCE mz_source
    FROM MYSQL CONNECTION mysql_connection
    FOR ALL TABLES;
  ```

  - By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause.
  - To ingest data from specific schemas or tables, use the `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` options instead of `FOR ALL TABLES`.
  - To handle unsupported data types, use the `TEXT COLUMNS` or `IGNORE COLUMNS` options. Check out the reference documentation for guidance.
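After the source is created, you can list it along with the subsources Materialize creates for each replicated table. This is a quick way to confirm what is being ingested, assuming the `mz_source` name used above:

```sql
-- Lists sources in the current schema, including the subsources created
-- for each replicated MySQL table.
SHOW SOURCES;
```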
3. Monitor the ingestion status
Before it starts consuming the replication stream, Materialize takes a snapshot of the relevant tables. Until this snapshot is complete, Materialize won’t have the same view of your data as your MySQL database.
In this step, you’ll first verify that the source is running and then check the status of the snapshotting process.
- Back in the SQL client connected to Materialize, use the `mz_source_statuses` table to check the overall status of your source:

  ```sql
  WITH source_ids AS (
      SELECT id FROM mz_sources WHERE name = 'mz_source'
  )
  SELECT *
  FROM mz_internal.mz_source_statuses
  JOIN (
      SELECT referenced_object_id
      FROM mz_internal.mz_object_dependencies
      WHERE object_id IN (SELECT id FROM source_ids)
      UNION
      SELECT id FROM source_ids
  ) AS sources ON mz_source_statuses.id = sources.referenced_object_id;
  ```

  For each subsource, make sure the `status` is `running`. If you see `stalled` or `failed`, there’s likely a configuration issue for you to fix. Check the `error` field for details and fix the issue before moving on. Also, if the `status` of any subsource is `starting` for more than a few minutes, contact our team.
- Once the source is running, use the `mz_source_statistics` table to check the status of the initial snapshot:

  ```sql
  WITH source_ids AS (
      SELECT id FROM mz_sources WHERE name = 'mz_source'
  )
  SELECT sources.referenced_object_id AS id, mz_sources.name, snapshot_committed
  FROM mz_internal.mz_source_statistics
  JOIN (
      SELECT object_id, referenced_object_id
      FROM mz_internal.mz_object_dependencies
      WHERE object_id IN (SELECT id FROM source_ids)
      UNION
      SELECT id, id FROM source_ids
  ) AS sources ON mz_source_statistics.id = sources.referenced_object_id
  JOIN mz_sources ON mz_sources.id = sources.referenced_object_id;
  ```

  ```
   object_id | snapshot_committed
  -----------+--------------------
   u144      | t
  (1 row)
  ```

  Once `snapshot_committed` is `t`, move on to the next step. Snapshotting can take anywhere from a few minutes to several hours, depending on the size of your dataset and the size of the cluster the source is running in.
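If you want more granular progress than the boolean `snapshot_committed` flag, recent Materialize versions also expose snapshot progress counters in `mz_internal.mz_source_statistics`. The `snapshot_records_known` and `snapshot_records_staged` columns used below are an assumption based on newer releases, so check the reference documentation for your version:

```sql
-- Assumed columns (present in recent versions): snapshot_records_known is the
-- total number of rows to snapshot, snapshot_records_staged the rows read so far.
SELECT s.name, st.snapshot_records_staged, st.snapshot_records_known
FROM mz_internal.mz_source_statistics st
JOIN mz_sources s ON s.id = st.id
WHERE s.name = 'mz_source';
```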
4. Right-size the cluster
After the snapshotting phase, Materialize starts ingesting change events from the MySQL replication stream. For this work, Materialize generally performs well with a `100cc` replica, so you can resize the cluster accordingly.
- Still in a SQL client connected to Materialize, use the `ALTER CLUSTER` command to downsize the cluster to `100cc`:

  ```sql
  ALTER CLUSTER ingest_mysql SET (SIZE '100cc');
  ```

  Behind the scenes, this command adds a new `100cc` replica and removes the `200cc` replica.

- Use the `SHOW CLUSTER REPLICAS` command to check the status of the new replica:

  ```sql
  SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_mysql';
  ```

  ```
      cluster   | replica | size  | ready
  --------------+---------+-------+-------
   ingest_mysql | r1      | 100cc | t
  (1 row)
  ```
Next steps
With Materialize ingesting your MySQL data into durable storage, you can start exploring the data, computing real-time results that stay up-to-date as new data arrives, and serving results efficiently.
- Explore your data with `SHOW SOURCES` and `SELECT`.

- Compute real-time results in memory with `CREATE VIEW` and `CREATE INDEX`, or in durable storage with `CREATE MATERIALIZED VIEW`.

- Serve results to a PostgreSQL-compatible SQL client or driver with `SELECT` or `SUBSCRIBE`, or to an external message broker with `CREATE SINK`.

- Check out the tools and integrations supported by Materialize.
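For example, a first always-up-to-date result over the ingested data might look like the sketch below. The `orders` table and its columns are hypothetical placeholders for one of your replicated MySQL tables:

```sql
-- Hypothetical table and columns; substitute one of your replicated tables.
CREATE VIEW order_totals AS
  SELECT customer_id, sum(amount) AS total_spent
  FROM orders
  GROUP BY customer_id;

-- Index the view to keep results incrementally maintained in memory.
CREATE INDEX order_totals_idx ON order_totals (customer_id);

-- Query the indexed view; results reflect new MySQL data as it arrives.
SELECT * FROM order_totals WHERE customer_id = 42;
```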