Ingest data from self-hosted SQL Server
This page shows you how to stream data from a self-hosted SQL Server database to Materialize using the SQL Server Source.
Before you begin

- Make sure you are running SQL Server 2016 or later with Change Data Capture (CDC) support. Materialize uses CDC, which is not readily available on older versions of SQL Server.
- Ensure you have access to your SQL Server instance via the `sqlcmd` client, or your preferred SQL client.
A. Configure SQL Server
1. Create a Materialize user in SQL Server.

Create a user that Materialize will use to connect when ingesting data.

- In `master`:

  - Create a login `materialize` (replace `<PASSWORD>` with your own password):

    ```sql
    USE master;
    -- Specify additional options per your company's security policy
    CREATE LOGIN materialize WITH PASSWORD = '<PASSWORD>', DEFAULT_DATABASE = <DATABASE_NAME>;
    GO -- The GO terminator may be unsupported or unnecessary for your client.
    ```

  - Create a user `materialize` for the login and a role `materialize_role`:

    ```sql
    USE master;
    CREATE USER materialize FOR LOGIN materialize;
    CREATE ROLE materialize_role;
    ALTER ROLE materialize_role ADD MEMBER materialize;
    GO -- The GO terminator may be unsupported or unnecessary for your client.
    ```

  - Grant permissions to `materialize_role` to enable discovery of the tables to be replicated and monitoring of replication progress:

    ```sql
    USE master;
    -- Required for schema discovery for replicated tables.
    GRANT SELECT ON INFORMATION_SCHEMA.KEY_COLUMN_USAGE TO materialize_role;
    GRANT SELECT ON INFORMATION_SCHEMA.TABLE_CONSTRAINTS TO materialize_role;
    GRANT SELECT ON OBJECT::INFORMATION_SCHEMA.TABLE_CONSTRAINTS TO materialize_role;
    -- Allows checking the minimum and maximum Log Sequence Numbers (LSN) for CDC,
    -- required for the source to be able to track progress.
    GRANT EXECUTE ON sys.fn_cdc_get_min_lsn TO materialize_role;
    GRANT EXECUTE ON sys.fn_cdc_get_max_lsn TO materialize_role;
    GRANT EXECUTE ON sys.fn_cdc_increment_lsn TO materialize_role;
    GO -- The GO terminator may be unsupported or unnecessary for your client.
    ```

- In the database from which you want to ingest data, create a second `materialize` user with the `db_datareader` role (replace `<DATABASE_NAME>` with your database name):

  ```sql
  USE <DATABASE_NAME>;
  -- Use the same user name as the one created in master
  CREATE USER materialize FOR LOGIN materialize;
  ALTER ROLE db_datareader ADD MEMBER materialize;
  GO -- The GO terminator may be unsupported or unnecessary for your client.
  ```
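To confirm the setup before moving on, you can query SQL Server's catalog views (a verification sketch; `sys.database_role_members` and `sys.database_principals` are standard catalog views):

```sql
-- Run in master: confirm that materialize is a member of materialize_role.
USE master;
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
WHERE r.name = 'materialize_role';
```

The query should return one row showing `materialize` as a member of `materialize_role`.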
2. Enable Change Data Capture for the database.

In SQL Server, for the database from which you want to ingest data, enable Change Data Capture (replace `<DATABASE_NAME>` with your database name):

```sql
USE <DATABASE_NAME>;
GO -- The GO terminator may be unsupported or unnecessary for your client.
EXEC sys.sp_cdc_enable_db;
GO -- The GO terminator may be unsupported or unnecessary for your client.
```

For guidance on enabling Change Data Capture, see the SQL Server documentation.
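To confirm that the command took effect, you can check the database's CDC flag (a verification sketch; `is_cdc_enabled` is a standard column of the `sys.databases` catalog view):

```sql
-- is_cdc_enabled = 1 means CDC is active for the database.
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = '<DATABASE_NAME>';
```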
3. Enable SNAPSHOT transaction isolation.

Enable `SNAPSHOT` transaction isolation for the database (replace `<DATABASE_NAME>` with your database name):

```sql
ALTER DATABASE <DATABASE_NAME> SET ALLOW_SNAPSHOT_ISOLATION ON;
GO -- The GO terminator may be unsupported or unnecessary for your client.
```

For guidance on enabling SNAPSHOT transaction isolation, see the SQL Server documentation.
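To confirm the isolation setting, you can inspect `sys.databases` (a verification sketch; `snapshot_isolation_state_desc` is a standard column):

```sql
-- snapshot_isolation_state_desc should report ON after the ALTER DATABASE.
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = '<DATABASE_NAME>';
```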
4. Enable Change Data Capture for the tables.

Enable Change Data Capture for each table you want to replicate (replace `<DATABASE_NAME>`, `<SCHEMA_NAME>`, and `<TABLE_NAME>` with your database, schema, and table names):

```sql
USE <DATABASE_NAME>;
EXEC sys.sp_cdc_enable_table
  @source_schema = '<SCHEMA_NAME>',
  @source_name = '<TABLE_NAME>',
  @role_name = 'materialize_role',
  @supports_net_changes = 0;
GO -- The GO terminator may be unsupported or unnecessary for your client.
```
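To confirm which tables are now tracked by CDC, you can query `sys.tables` (a verification sketch; `is_tracked_by_cdc` is a standard column):

```sql
-- Lists every CDC-enabled table in the current database.
USE <DATABASE_NAME>;
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE t.is_tracked_by_cdc = 1;
```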
B. (Optional) Configure network security
There are various ways to configure your database’s network to allow Materialize to connect:
- Allow Materialize IPs: If your database is publicly accessible, you can configure your database’s firewall to allow connections from a set of static Materialize IP addresses.
- Use an SSH tunnel: If your database is running in a private network, you can use an SSH tunnel to connect Materialize to the database.

Select the option that works best for you.
- In the SQL Shell, or your preferred SQL client connected to Materialize, find the static egress IP addresses for the Materialize region you are running in:

  ```sql
  SELECT * FROM mz_egress_ips;
  ```

- Update your database firewall rules to allow traffic from each IP address from the previous step.
Materialize can connect to a SQL Server database through an AWS PrivateLink service. Your SQL Server database must be running on AWS in order to use this option.
- Create a target group

  Create a dedicated target group for your SQL Server instance with the following details:

  a. Target type as IP address.
  b. Protocol as TCP.
  c. Port as 1433, or the port that you are using if it is not 1433.
  d. Make sure that the target group is in the same VPC as the SQL Server instance.
  e. Click Next, and register the respective SQL Server instance to the target group using its IP address.
- Create a Network Load Balancer (NLB)

  Create a Network Load Balancer that is enabled for the same subnets the SQL Server instance is in.

- Create a TCP listener

  Create a TCP listener for your SQL Server instance that forwards to the corresponding target group you created.

- Verify security groups and health checks

  Once the TCP listener has been created, make sure that the health checks are passing and that the target is reported as healthy.

  If you have set up a security group for your SQL Server instance, you must ensure that it allows traffic on the health check port.

  Remarks:

  a. Network Load Balancers do not have associated security groups. Therefore, the security groups for your targets must use IP addresses to allow traffic.
  b. You can’t use the security groups for the clients as a source in the security groups for the targets. Therefore, the security groups for your targets must use the IP addresses of the clients to allow traffic. For more details, check the AWS documentation.
- Create a VPC endpoint service

  Create a VPC endpoint service and associate it with the Network Load Balancer that you’ve just created.

  Note the service name that is generated for the endpoint service.

  Remark: By disabling Acceptance Required, while still strictly managing who can view your endpoint via IAM, Materialize will be able to seamlessly recreate and migrate endpoints as we work to stabilize this feature.
- Create an AWS PrivateLink connection

  In Materialize, create an `AWS PRIVATELINK` connection that references the endpoint service that you created in the previous step:

  ```sql
  CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
      SERVICE NAME 'com.amazonaws.vpce.<region_id>.vpce-svc-<endpoint_service_id>',
      AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
  );
  ```

  Update the list of availability zones to match the ones that you are using in your AWS account.
- Configure the AWS PrivateLink service

  Retrieve the AWS principal for the AWS PrivateLink connection you just created:

  ```sql
  SELECT principal
  FROM mz_aws_privatelink_connections plc
  JOIN mz_connections c ON plc.id = c.id
  WHERE c.name = 'privatelink_svc';
  ```

  ```
  principal
  ---------------------------------------------------------------------------
  arn:aws:iam::664411391173:role/mz_20273b7c-2bbe-42b8-8c36-8cc179e9bbc3_u1
  ```

  Follow the instructions in the AWS PrivateLink documentation to configure your VPC endpoint service to accept connections from the provided AWS principal.

  If your AWS PrivateLink service is configured to require acceptance of connection requests, you must manually approve the connection request from Materialize after executing the `CREATE CONNECTION` statement. For more details, check the AWS PrivateLink documentation.

  Note: It might take some time for the endpoint service connection to show up, so wait for the endpoint service connection to be ready before you create a source.
To create an SSH tunnel from Materialize to your database, you launch a VM to serve as an SSH bastion host, configure the bastion host to allow traffic only from Materialize, and then configure your database’s private network to allow traffic from the bastion host.
- Launch a VM to serve as your SSH bastion host.

  - Make sure the VM is publicly accessible and in the same VPC as your database.
  - Add a key pair and note the username. You’ll use this username when connecting Materialize to your bastion host.
  - Make sure the VM has a static public IP address. You’ll use this IP address when connecting Materialize to your bastion host.

- Configure the SSH bastion host to allow traffic only from Materialize.

  - In the SQL Shell, or your preferred SQL client connected to Materialize, get the static egress IP addresses for the Materialize region you are running in:

    ```sql
    SELECT * FROM mz_egress_ips;
    ```

  - Update your SSH bastion host’s firewall rules to allow traffic from each IP address from the previous step.

- Update your database firewall rules to allow traffic from the SSH bastion host.
C. Ingest data in Materialize
1. (Optional) Create a cluster

If you prefer to use an existing cluster (e.g., `quickstart`), you can skip this step. For production scenarios, we recommend separating your workloads into multiple clusters for resource isolation.
In Materialize, a cluster is an isolated environment, similar to a virtual warehouse in Snowflake. When you create a cluster, you choose the size of its compute resource allocation based on the work you need the cluster to do, whether ingesting data from a source, computing always-up-to-date query results, serving results to clients, or a combination.
In this case, you’ll create a dedicated cluster for ingesting source data from your SQL Server database.
- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CLUSTER` command to create the new cluster:

  ```sql
  CREATE CLUSTER ingest_sqlserver (SIZE = '200cc');
  SET CLUSTER = ingest_sqlserver;
  ```

  A cluster of size `200cc` should be enough to process the initial snapshot of the tables in your SQL Server database. For very large snapshots, consider using a larger size to speed up processing. Once the snapshot is finished, you can readjust the size of the cluster to fit the volume of changes being replicated from your upstream SQL Server database.
2. Start ingesting data
Now that you’ve configured your database network, you can connect Materialize to your SQL Server database and start ingesting data. The exact steps depend on your networking configuration, so start by selecting the relevant option.
- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE SECRET` command to securely store the password for the SQL Server role you’ll use to replicate data into Materialize:

  ```sql
  CREATE SECRET sqlserver_pass AS '<PASSWORD>';
  ```

- Use the `CREATE CONNECTION` command to create a connection object with access and authentication details for Materialize to use:

  ```sql
  CREATE CONNECTION sqlserver_connection TO SQL SERVER (
      HOST <host>,
      PORT 1433,
      USER 'materialize',
      PASSWORD SECRET sqlserver_pass,
      DATABASE <database>,
      SSL MODE 'require'
  );
  ```

  Replace `<host>` with your SQL Server endpoint, and `<database>` with the database you’d like to connect to.

- Use the `CREATE SOURCE` command to connect Materialize to your SQL Server instance and start ingesting data:

  ```sql
  CREATE SOURCE mz_source
    FROM SQL SERVER CONNECTION sqlserver_connection
    FOR ALL TABLES;
  ```

  - By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause.
  - To ingest data from specific tables, use the `FOR TABLES (<table1>, <table2>)` option instead of `FOR ALL TABLES`.
  - To handle unsupported data types, use the `TEXT COLUMNS` or `EXCLUDE COLUMNS` options. Check out the reference documentation for guidance.

- After source creation, you can handle upstream schema changes by dropping and recreating the source.
- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CONNECTION` command to create an AWS PrivateLink connection.

  In-region connections

  To connect to an AWS PrivateLink endpoint service in the same region as your Materialize environment:

  ```sql
  CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
      SERVICE NAME 'com.amazonaws.vpce.<region_id>.vpce-svc-<endpoint_service_id>',
      AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az4')
  );
  ```

  - Replace the `SERVICE NAME` value with the service name you noted earlier.
  - Replace the `AVAILABILITY ZONES` list with the IDs of the availability zones in your AWS account. For in-region connections, the availability zones of the NLB and the consumer VPC must match.

    To find your availability zone IDs, select your database in the RDS Console and click the subnets under Connectivity & security. For each subnet, look for Availability Zone ID (e.g., `use1-az6`), not Availability Zone (e.g., `us-east-1d`).
  Cross-region connections

  To connect to an AWS PrivateLink endpoint service in a different region from the one where your Materialize environment is deployed:

  ```sql
  CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
      SERVICE NAME 'com.amazonaws.vpce.us-west-1.vpce-svc-<endpoint_service_id>',
      -- For now, the AVAILABILITY ZONES clause is required, but will be
      -- made optional in a future release.
      AVAILABILITY ZONES ()
  );
  ```

  - Replace the `SERVICE NAME` value with the service name you noted earlier.
  - The service name region refers to where the endpoint service was created. You do not need to specify `AVAILABILITY ZONES` manually; they will be optimally auto-assigned when none are provided.
- Retrieve the AWS principal for the AWS PrivateLink connection you just created:

  ```sql
  SELECT principal
  FROM mz_aws_privatelink_connections plc
  JOIN mz_connections c ON plc.id = c.id
  WHERE c.name = 'privatelink_svc';
  ```

  ```
  principal
  ---------------------------------------------------------------------------
  arn:aws:iam::664411391173:role/mz_20273b7c-2bbe-42b8-8c36-8cc179e9bbc3_u1
  ```

- Update your VPC endpoint service to accept connections from the AWS principal.

- If your AWS PrivateLink service is configured to require acceptance of connection requests, manually approve the connection request from Materialize.

  Note: It can take some time for the connection request to show up. Do not move on to the next step until you’ve approved the connection.

- Validate the AWS PrivateLink connection you created using the `VALIDATE CONNECTION` command:

  ```sql
  VALIDATE CONNECTION privatelink_svc;
  ```

  If no validation error is returned, move to the next step.
- Use the `CREATE SECRET` command to securely store the password for the `materialize` SQL Server user you created:

  ```sql
  CREATE SECRET sql_server_pass AS '<PASSWORD>';
  ```

- Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:

  ```sql
  CREATE CONNECTION sql_server_connection TO SQL SERVER (
      HOST <host>,
      PORT 1433,
      USER 'materialize',
      PASSWORD SECRET sql_server_pass,
      SSL MODE REQUIRED,
      AWS PRIVATELINK privatelink_svc
  );
  ```

  - Replace `<host>` with your RDS endpoint. To find your RDS endpoint, select your database in the RDS Console, and look under Connectivity & security.
  - Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.

- Use the `CREATE SOURCE` command to connect Materialize to your RDS instance via AWS PrivateLink and start ingesting data:

  ```sql
  CREATE SOURCE mz_source
    FROM SQL SERVER CONNECTION sql_server_connection
    FOR ALL TABLES;
  ```

  By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`. AWS IAM authentication is also available; see the `CREATE CONNECTION` command for details.
- In the SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CONNECTION` command to create an SSH tunnel connection:

  ```sql
  CREATE CONNECTION ssh_connection TO SSH TUNNEL (
      HOST '<SSH_BASTION_HOST>',
      PORT <SSH_BASTION_PORT>,
      USER '<SSH_BASTION_USER>'
  );
  ```

  - Replace `<SSH_BASTION_HOST>` and `<SSH_BASTION_PORT>` with the public IP address and port of the SSH bastion host you created earlier.
  - Replace `<SSH_BASTION_USER>` with the username for the key pair you created for your SSH bastion host.
- Get Materialize’s public keys for the SSH tunnel connection:

  ```sql
  SELECT * FROM mz_ssh_tunnel_connections;
  ```

- Log in to your SSH bastion host and add Materialize’s public keys to the `authorized_keys` file, for example:

  ```shell
  # Command for Linux
  echo "ssh-ed25519 AAAA...76RH materialize" >> ~/.ssh/authorized_keys
  echo "ssh-ed25519 AAAA...hLYV materialize" >> ~/.ssh/authorized_keys
  ```

- Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the `VALIDATE CONNECTION` command:

  ```sql
  VALIDATE CONNECTION ssh_connection;
  ```

  If no validation error is returned, move to the next step.
- Use the `CREATE SECRET` command to securely store the password for the `materialize` SQL Server user you created:

  ```sql
  CREATE SECRET sql_server_pass AS '<PASSWORD>';
  ```

  For AWS IAM authentication, you must create a connection to AWS. See the `CREATE CONNECTION` command for details.

- Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:

  ```sql
  CREATE CONNECTION sql_server_connection TO SQL SERVER (
      HOST '<host>',
      SSH TUNNEL ssh_connection
  );
  ```

  Replace `<host>` with your SQL Server endpoint.

- Use the `CREATE SOURCE` command to connect Materialize to your SQL Server instance and start ingesting data:

  ```sql
  CREATE SOURCE mz_source
    FROM SQL SERVER CONNECTION sql_server_connection
    FOR ALL TABLES;
  ```

  - By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause.
  - To ingest data from specific schemas or tables, use the `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` options instead of `FOR ALL TABLES`.
  - To handle unsupported data types, use the `TEXT COLUMNS` or `EXCLUDE COLUMNS` options. Check out the reference documentation for guidance.
3. Right-size the cluster
After the snapshotting phase, Materialize starts ingesting change events from
the SQL Server replication stream. For this work, Materialize generally
performs well with a 100cc
replica, so you can resize the cluster
accordingly.
- Still in a SQL client connected to Materialize, use the `ALTER CLUSTER` command to downsize the cluster to `100cc`:

  ```sql
  ALTER CLUSTER ingest_sqlserver SET (SIZE '100cc');
  ```

  Behind the scenes, this command adds a new `100cc` replica and removes the `200cc` replica.

- Use the `SHOW CLUSTER REPLICAS` command to check the status of the new replica:

  ```sql
  SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_sqlserver';
  ```

  ```
       cluster      | replica | size  | ready
  ------------------+---------+-------+-------
   ingest_sqlserver | r1      | 100cc | t
  (1 row)
  ```
D. Explore your data
With Materialize ingesting your SQL Server data into durable storage, you can start exploring the data, computing real-time results that stay up-to-date as new data arrives, and serving results efficiently.
- Explore your data with `SHOW SOURCES` and `SELECT`.
- Compute real-time results in memory with `CREATE VIEW` and `CREATE INDEX`, or in durable storage with `CREATE MATERIALIZED VIEW`.
- Serve results to a PostgreSQL-compatible SQL client or driver with `SELECT` or `SUBSCRIBE`, or to an external message broker with `CREATE SINK`.
- Check out the tools and integrations supported by Materialize.
Considerations
Schema changes
Materialize supports schema changes in the upstream database as follows:
Compatible schema changes

- Adding columns to tables. Materialize will not ingest new columns added upstream unless you use `DROP SOURCE` to first drop the affected subsource, and then add the table back to the source using `ALTER SOURCE ... ADD SUBSOURCE`.
- Dropping columns that were added after the source was created. These columns are never ingested, so you can drop them without issue.
- Adding or removing `NOT NULL` constraints on tables that were nullable when the source was created.
Incompatible schema changes
All other schema changes to upstream tables will set the corresponding subsource into an error state, which prevents you from reading from the source.
To handle incompatible schema changes, use `DROP SOURCE` and `ALTER SOURCE ... ADD SUBSOURCE` to first drop the affected subsource, and then add the table back to the source. When you add the subsource, it will have the updated schema from the corresponding upstream table.
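The recovery flow described above can be sketched as follows (the source name `mz_source` and subsource name `my_table` are illustrative; check the `ALTER SOURCE` reference documentation for exact syntax):

```sql
-- Drop the subsource that entered the error state...
DROP SOURCE my_table;
-- ...then add the upstream table back; the recreated subsource
-- picks up the table's current schema.
ALTER SOURCE mz_source ADD SUBSOURCE my_table;
```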
Supported types
Materialize natively supports the following SQL Server types:
- `tinyint`
- `smallint`
- `int`
- `bigint`
- `real`
- `double`
- `bit`
- `decimal`
- `numeric`
- `money`
- `smallmoney`
- `char`
- `nchar`
- `varchar`
- `nvarchar`
- `sysname`
- `binary`
- `varbinary`
- `json`
- `date`
- `time`
- `smalldatetime`
- `datetime`
- `datetime2`
- `datetimeoffset`
- `uniqueidentifier`
Replicating tables that contain unsupported data types is possible via the `EXCLUDE COLUMNS` option for the following types:

- `text`
- `ntext`
- `image`
- `varchar(max)`
- `nvarchar(max)`
- `varbinary(max)`
Columns with these types must be excluded because SQL Server does not provide the “before” value when such a column is updated.
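As an illustrative sketch, excluding such a column at source creation might look like this (the table and column names are hypothetical, and the exact placement of the option may differ; check the `CREATE SOURCE` reference documentation):

```sql
-- Exclude an unsupported ntext column from replication.
CREATE SOURCE mz_source
  FROM SQL SERVER CONNECTION sqlserver_connection
  (EXCLUDE COLUMNS (dbo.articles.body_ntext))
  FOR TABLES (dbo.articles);
```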
Timestamp rounding

The `time`, `datetime2`, and `datetimeoffset` types in SQL Server have a default scale of 7 decimal places, or in other words an accuracy of 100 nanoseconds. The corresponding types in Materialize only support a scale of 6 decimal places. If a column in SQL Server has a higher scale than Materialize can support, it will be rounded to the largest supported scale.
```sql
-- In SQL Server
CREATE TABLE my_timestamps (a datetime2(7));

INSERT INTO my_timestamps VALUES
  ('2000-12-31 23:59:59.99999'),
  ('2000-12-31 23:59:59.999999'),
  ('2000-12-31 23:59:59.9999999');
```

```sql
-- Replicated into Materialize
SELECT * FROM my_timestamps;

'2000-12-31 23:59:59.999990'
'2000-12-31 23:59:59.999999'
'2001-01-01 00:00:00'
```
Snapshot latency for inactive databases
When a new source is created, Materialize performs a snapshotting operation to sync the data. However, for a new SQL Server source, if none of the replicated tables are receiving writes, snapshotting may take up to an additional 5 minutes to complete. This interval is hardcoded in the SQL Server Change Data Capture (CDC) implementation, which only notifies CDC consumers every 5 minutes when no changes are made to the replicated tables.