Changelog
New features and improvements landing in and around Materialize
Connection validation
Few things are more common than fat-fingering connection parameters. To avoid the frustrating loop of creating, dropping, and recreating your connections, Materialize now supports connection validation.
Two ways!
For most connection types, Materialize automatically runs the validation check on connection creation, so you get an error as soon as you run the DDL statement:
-- Say you mess up your PostgreSQL credentials and try to create a
-- connection
CREATE SECRET pg_password AS 'wr0nGpa$$w0rd';
CREATE CONNECTION pg_connection TO POSTGRES (
HOST 'instance.foo000.us-west-1.rds.amazonaws.com',
PORT 5432,
USER 'postgres',
PASSWORD SECRET pg_password,
SSL MODE 'require',
DATABASE 'postgres'
);
-- Not on our watch!
Error: db error: FATAL: password authentication failed for user "materialize": FATAL: password authentication failed for user "postgres"
For AWS PrivateLink and SSH tunnel connections, which require a more intricate set of configuration steps across multiple systems, Materialize can’t perform this validation off the bat, but allows you to manually validate the connection with the new VALIDATE CONNECTION syntax:
-- Once you're done configuring the AWS PrivateLink service and create a
-- connection
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
SERVICE NAME 'com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc',
AVAILABILITY ZONES ('use1-az1', 'use1-az4')
);
-- Check if the setup is 👍, before using the connection to create
-- a source or sink
VALIDATE CONNECTION privatelink_svc;
If no rows are returned, you’re good to go! Configuration issues will lead to a validation error with details on where things went haywire.
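The same manual check works for SSH tunnel connections. As a quick sketch (the bastion host details here are hypothetical, so swap in your own):
-- Once your bastion host is up and running, create the connection...
CREATE CONNECTION ssh_tunnel TO SSH TUNNEL (
    HOST 'bastion-host.foo000.us-west-1.compute.amazonaws.com',
    USER 'materialize',
    PORT 22
);
-- ...and give the setup the same 👍 check
VALIDATE CONNECTION ssh_tunnel;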
Role-based access control (RBAC) 🔒
We’ve rolled out role-based access control (RBAC) so you (well, an admin) can define a hierarchy of roles and permissions for your organization. If you’re familiar with how RBAC works in PostgreSQL, Materialize largely follows the same principles — check the documentation for a refresher!
Say you want to onboard a Data Science team to your Materialize organization, but this team should only have access to a specific namespace and dedicated compute resources:
-- Create a role through which Data Science team members can inherit a specific
-- set of privileges
CREATE ROLE data_scientist;
-- Grant the Data Science team members the data_scientist role
GRANT data_scientist TO "gerald@dharma.com", "karen@dharma.com";
-- Create a dedicated namespace and spin up some compute resources for Data
-- Science team work
CREATE DATABASE actual_science;
CREATE CLUSTER for_science (SIZE = '3xsmall');
-- Grant the data_scientist role free rein over the actual_science database and
-- the for_science cluster
GRANT ALL PRIVILEGES ON DATABASE actual_science TO data_scientist;
GRANT ALL PRIVILEGES ON CLUSTER for_science TO data_scientist;
-- If a Data Science team member ever tries to do science elsewhere...
-- (e.g. running as gerald@dharma.com)
SELECT AVG(speed) FROM not_science.public.asteroids;
ERROR: permission denied for TABLE "not_science.public.asteroids"
Pretty standard, huh?
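And the same machinery works in reverse when priorities shift. A minimal sketch, reusing the role and grants from above:
-- Offboard a team member from the role...
REVOKE data_scientist FROM "karen@dharma.com";
-- ...or walk a privilege back for everyone who inherits it
REVOKE ALL PRIVILEGES ON CLUSTER for_science FROM data_scientist;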
RBAC is also supported in the Materialize Terraform provider (v0.0.9+). If you’re using Terraform to manage region-level resources like connections, sources, and sinks, you can now also manage roles, object ownership, and permissions in an automated, source-controlled way.
PostgreSQL source: support adding and dropping tables
We get it: schemas change.
We’ve landed some quality-of-life improvements that make it easier to handle schema evolution and replication errors in PostgreSQL sources. By easier, we mean that you no longer have to drop and recreate the whole source; instead, you can use the new ALTER SOURCE...{ADD|DROP} TABLE syntax to patch a specific table (or set of tables).
As an example, if you add an important_col column to the important_tbl table in your upstream PostgreSQL database, and want to make sure this new column starts being ingested in the pg_source PostgreSQL source in Materialize:
-- List all subsources in pg_source
SHOW SUBSOURCES ON pg_source;
-- Get rid of the outdated subsource
ALTER SOURCE pg_source DROP TABLE important_tbl;
-- Start ingesting the table with the updated schema
ALTER SOURCE pg_source ADD TABLE important_tbl;
The same approach works for incompatible schema changes that might cause one or more subsources to stall. For more details on how to handle each scenario, check the updated documentation.
DBeaver native driver
DBeaver users have long been able to connect to Materialize via the native
PostgreSQL connector, due to our pgwire
compatibility.
This just works™️, but can cause some confusion, as well as annoying errors that
pop up when you accidentally click a widget for features that don’t carry over
to Materialize (like triggers).
From DBeaver 23.1.3, you can connect to your Materialize region using the new native database driver:

If you’re already using Materialize with DBeaver, switch over for a smoother experience! 🦫 Check the updated documentation to connect.
New SQL shell 🐚
We’re lifting the veil on a shiny, new SQL shell that lets you interact with Materialize right in the web console! Neat, huh?
Although this isn’t a replacement for your everyday SQL development environment (at least for now), it’s just…right there. No more context switching just to learn your way around Materialize, run sanity checks, or pull up that data real quick. If you have any feedback, or just opinions about what the SQL shell should do but doesn’t, we’d love to hear about it!
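To make “sanity checks” concrete, this is the sort of thing we mean (a few illustrative examples, not an exhaustive list):
-- Which version of Materialize am I running?
SELECT mz_version();
-- What's deployed in my region?
SHOW CLUSTERS;
SHOW SOURCES;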
Want to give the new SQL shell a go? Sign up for a 14-day free trial of Materialize.
Kafka source: improved JSON support
Handling JSON-encoded Kafka topics in Materialize isn’t fun. We’ve taken the first step to improve JSON support in the Kafka source by introducing a new FORMAT JSON option that decodes messages as jsonb, saving you some conversion typing in comparison to FORMAT BYTES.
The old way
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'ch_anges')
FORMAT BYTES
WITH (SIZE = '3xsmall');
CREATE VIEW extract_json_source AS
SELECT
(data->>'field1')::boolean AS field_1,
(data->>'field2')::int AS field_2,
(data->>'field3')::float AS field_3
-- This is not fun
FROM (SELECT CONVERT_FROM(data, 'utf8')::jsonb AS data FROM json_source);
The new way
CREATE SOURCE json_source
FROM KAFKA CONNECTION kafka_connection (TOPIC 'ch_anges')
FORMAT JSON
WITH (SIZE = '3xsmall');
CREATE VIEW extract_json_source AS
SELECT
(data->>'field1')::boolean AS field_1,
(data->>'field2')::int AS field_2,
(data->>'field3')::float AS field_3
-- This is slightly more fun!
FROM json_source;
The FORMAT JSON option is already supported in the source creation UI, and will land in the upcoming v0.1.0 release of the Terraform provider. In the future, we plan to support automatically enforcing JSON schemas, both for user-provided schemas and schemas managed via Confluent Schema Registry.
Cluster management (revisited)
Clusters are one of the very first concepts you run into when getting started with Materialize — you need compute resources to run your queries, right? We’ve simplified how clusters and their dedicated resources (aka replicas) are exposed, so they’re more intuitive to manage.
To create a new cluster, you can now simply specify its SIZE and REPLICATION FACTOR, instead of thinking about individual replicas for provisioning and replication. Here’s a quick tour of how cluster management works from here on:
-- Create a cluster with provisioned resources
CREATE CLUSTER demo SIZE = '3xsmall';
-- Resize the cluster
ALTER CLUSTER demo SET (SIZE = 'small');
-- Increase the replication factor for fault tolerance
ALTER CLUSTER demo SET (REPLICATION FACTOR = 2);
-- Turn off the cluster for the night, to save $$!
ALTER CLUSTER demo SET (REPLICATION FACTOR = 0);
You can still choose to manage the replicas in a cluster manually, for example to gracefully resize it without downtime, but the bottom line is: you shouldn’t have to! We’ll continue working on improving cluster management, and are eyeing features like zero-downtime resizing, autoscaling, and automatic rebalancing across availability zones for future releases. Head over to the documentation for the full rundown!
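If you do take the manual route in the meantime, a graceful resize might look something like this sketch (the cluster and replica names are hypothetical):
-- Start from a cluster with explicitly managed replicas
CREATE CLUSTER manual_demo REPLICAS (small_replica (SIZE '3xsmall'));
-- Spin up a larger replica alongside the existing one...
CREATE CLUSTER REPLICA manual_demo.bigger_replica SIZE = 'small';
-- ...then, once it's hydrated, retire the smaller one
DROP CLUSTER REPLICA manual_demo.small_replica;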
New source creation UI ✨
We’ve made it easier to connect to external data sources right in the web console: with just a few clicks, you can configure a connection and start ingesting data into Materialize using the new source creation UI. Look ma, no DDL:
For now, this workflow is restricted to PostgreSQL and Kafka sources that don’t require network security configuration. We’ll add support for configuring PrivateLink and SSH connections via the web console next! Until then, you can fire up a SQL client and head over to our documentation for step-by-step network security guides.
The Materialize Changelog is here!
A lot has changed since we rolled out the next generation of Materialize (v0.27). But how would you know? We’re starting a changelog to keep you up to speed with all new features and improvements landing in and around Materialize!