
The Streaming Database You Already Know How to Use

Materialize is a fast, distributed SQL database built on streaming internals.

PostgreSQL API

Streaming Joins

Separation of Storage & Compute

Strict Serializability

Trusted by data and engineering teams
Key Features

Take a modern cloud database, swap the query engine for a stream processor.

[Architecture diagram: message brokers feed data into Materialize; a SQL control plane coordinates compute clusters that maintain in-memory indexes; storage (S3) durably holds source data, tables, and materialized views; SQL reads are served from compute, and results flow back out to message brokers.]

Streaming Data & CDC Input Sources

Materialize eagerly and continually consumes data from upstream PostgreSQL and Kafka sources.

Separate Storage & Compute

Data is durably stored in S3, while compute scales independently.

Incremental Computation Engine

Queries are parsed to dataflows, results are incrementally updated as input data changes.

Active-Active Replication

Run replicated compute clusters to increase availability and reduce downtime.

Presents as PostgreSQL

Manage and query Materialize using any Postgres driver or compatible tool.

Built for Joins on Streaming Data

Support multi-way, complex joins across real-time streams and static tables, in standard SQL.
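As an illustrative sketch (all table and column names here are hypothetical), a multi-way join maintained as an incrementally updated materialized view might look like:

```sql
-- Hypothetical schema: orders, customers, and line_items may each be
-- a streaming source or a static table; the SQL is the same either way.
CREATE MATERIALIZED VIEW order_totals AS
SELECT
    o.order_id,
    c.name AS customer_name,
    SUM(li.price * li.quantity) AS order_total
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN line_items li ON li.order_id = o.order_id
GROUP BY o.order_id, c.name;
```

As input rows arrive, the join's results are updated incrementally rather than recomputed from scratch.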

Computation-Free Low-Latency Reads

Query results can be maintained in memory, making read latency similar to Redis.
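A minimal sketch of keeping results in memory, assuming a maintained view named dashboard_stats with a user_id column (both hypothetical): indexing the view lets point lookups read directly from memory.

```sql
-- Maintain the view's results in memory, keyed for fast lookup.
CREATE INDEX dashboard_stats_by_user ON dashboard_stats (user_id);

-- Point lookups are served from the in-memory index, not recomputed.
SELECT * FROM dashboard_stats WHERE user_id = 42;
```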

Event-Driven Primitives

Build event-driven architectures by subscribing to changes in a query, or sink changes to Kafka.
Use Cases

What Can You Build with Materialize?

Real-Time Analytics

Use the same ANSI SQL from data warehouses to build real-time views that serve internal and customer-facing dashboards, APIs and apps.
Read More

Automation and Alerting

Build user-facing notifications, fraud and risk models, and automated services using event-driven SQL primitives in a streaming database.
Read More

Segmentation and Personalization

Build engaging experiences with customer data aggregations that are always up-to-date: personalization, recommendations, dynamic pricing and more.
Read More

ML Ops

Power online feature stores with continually updated data, monitor and react to changes in ML effectiveness - all in standard SQL.
Read More

Get the Technical 101 on Materialize

Want to learn more? We’ll send you everything you need to know, including initial concepts, getting started guides, and an overview of the internals powering a streaming database.
Why Materialize?

Built from the ground up to solve the hard problems in data.

Strong Consistency

Strong Consistency Guarantees for Streaming Data

Materialize provides correct and consistent answers with minimal latency - not approximate answers or eventual consistency. With strictly serializable computation, Materialize always delivers the correct result on some specific (and recent) version of your data.
Why consistency is important in Streaming →
Streaming Joins

Multi-way and Cross-Stream Joins

Whereas other systems require ahead-of-time denormalization or round-trip processing for joins, Materialize offers low-latency support for multi-way joins and complex transformations. Write the same kind of complex SQL queries you would use on a traditional data warehouse - and get real-time results.

Joins in Materialize →
Subscribe to SQL

Event-Driven Primitives: Sink and Subscribe

Because Materialize is built on streaming internals, users can stream results out of the database without performance limits using two new primitives:

  • SQL clients can use SUBSCRIBE to receive pushed, incremental updates to results instead of polling.
  • Kafka users can use CREATE SINK to push changes to a view's results out to a topic.
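The two primitives above can be sketched as follows (view, connection, and topic names are hypothetical, and exact FORMAT options vary by Materialize version):

```sql
-- SUBSCRIBE: stream incremental updates to a SQL client instead of polling.
COPY (SUBSCRIBE TO fraud_alerts) TO STDOUT;

-- SINK: push changes to a view's results out to a Kafka topic.
-- Assumes a Kafka connection named kafka_conn already exists.
CREATE SINK fraud_alerts_sink
  FROM fraud_alerts
  INTO KAFKA CONNECTION kafka_conn (TOPIC 'fraud-alerts')
  FORMAT JSON ENVELOPE DEBEZIUM;
```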
Subscribe to changes in a view →
PG Wire compatibility

PostgreSQL Up Front, Timely Dataflow Underneath

Materialize offers an easy SQL interaction layer to a stack of powerful stream processing engines - Timely Dataflow and Differential Dataflow. Already used in correctness-critical global production deployments by Fortune 100 companies, these battle-tested systems avoid many of the shortcomings of other approaches to stream processing.

Works with Your Existing Data Stack


Streaming infrastructure is not required to use Materialize for real-time computation. Connect directly to any Postgres database via CDC and continually ingest data as it changes.
View Postgres Docs
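A hedged sketch of direct Postgres CDC ingestion (host, database, publication, and secret names are all hypothetical; exact syntax differs across Materialize versions):

```sql
-- Store the upstream password as a secret rather than inline.
CREATE SECRET pg_password AS '<password>';

CREATE CONNECTION pg_conn TO POSTGRES (
    HOST 'db.example.com',
    DATABASE app,
    USER materialize,
    PASSWORD SECRET pg_password
);

-- Replicate every table in the publication via logical replication (CDC).
CREATE SOURCE app_db
  FROM POSTGRES CONNECTION pg_conn (PUBLICATION 'mz_publication')
  FOR ALL TABLES;
```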


Run your existing dbt models on top of streaming data in Materialize, and dbt persists each model as a materialized view. No matter how much or how frequently your data arrives, your model stays up to date.
View dbt Docs
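A minimal sketch of a dbt model on Materialize, assuming the dbt-materialize adapter (model and ref names are hypothetical, and the materialization name may differ between adapter versions):

```sql
-- models/user_event_counts.sql (hypothetical dbt model)
{{ config(materialized='materializedview') }}

SELECT user_id, COUNT(*) AS total_events
FROM {{ ref('stg_events') }}
GROUP BY user_id
```

Running `dbt run` persists this model as a materialized view that Materialize keeps up to date as new events arrive.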


Connect multiple Kafka topics to Materialize and easily explore, transform, and join streaming datasets - and sink maintained SQL query results downstream to new, enriched Kafka topics.
View Kafka Docs
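A hedged sketch of the Kafka round trip (broker, topic, and column names are hypothetical; format options vary by version):

```sql
CREATE CONNECTION kafka_conn TO KAFKA (BROKER 'broker.example.com:9092');

-- Ingest a topic; JSON arrives as a single jsonb column (name assumed).
CREATE SOURCE purchases_raw
  FROM KAFKA CONNECTION kafka_conn (TOPIC 'purchases')
  FORMAT JSON;

-- Transform the raw stream, then sink maintained results to a new topic.
CREATE MATERIALIZED VIEW purchases AS
SELECT (data->>'user_id')::bigint  AS user_id,
       (data->>'amount')::numeric  AS amount
FROM purchases_raw;

CREATE SINK purchases_sink
  FROM purchases
  INTO KAFKA CONNECTION kafka_conn (TOPIC 'purchases-enriched')
  FORMAT JSON ENVELOPE DEBEZIUM;
```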
Trusted By Data Teams
Emily Hawkins, Data Infrastructure Lead, Drizly
We can write real-time SQL, exactly the same way as we already are in Snowflake with batch.
See how Drizly uses Materialize →
Ryan Gaus, Staff Engineer and Tech Lead, Density
Materialize has saved us I-don’t-know-how-many untold quarters of trying to build our own thing.
See how Density uses Materialize →
Jean-Francois Perreton, Head of Algo Quant, Kepler Cheuvreux

Materialize directly integrates with our third-party applications, BI tools, you name it. It’s really SQL.

See how Kepler uses Materialize →

Think Declaratively, Act Incrementally

Materialize helps you get access to the power of a stream processing engine, with the simplicity of a PostgreSQL-compatible developer interface.


Millisecond-level latency through incrementally-updated materialized views.


Control it with ANSI-standard SQL. Connect with Postgres drivers.


Support for multi-way joins, subqueries, upserts, deletes, CTEs.
Materialize Cloud

Sign Up for Early Access

Register for early access to start building real-time analytics dashboards and live applications.