The Streaming Database You Already Know How to Use

Materialize is a fast, distributed SQL database built on streaming internals.

Trusted by data and engineering teams

Consistency, Scalability, Low Latency: Pick Three

The stream processor at the heart of Materialize proactively computes results with strong consistency and sub-second latency — all in a distributed, cloud-native architecture that allows for unbounded scale.

[Architecture diagram: a SQL control plane serves SQL reads and coordinates compute clusters, which run dataflows and maintain in-memory indexes; source data, tables, and materialized views are persisted in storage (S3) and fed by message brokers.]

Key Features

Presents as PostgreSQL

Manage and query Materialize using any Postgres driver or tool.

Streaming Inputs

Pull in streams of data from Kafka or stream from Postgres via replication.
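As a sketch of what this looks like (broker address, connection, and topic names are illustrative, and exact syntax varies by Materialize version):

```sql
-- Register a Kafka broker as a reusable connection (hypothetical address).
CREATE CONNECTION kafka_conn TO KAFKA (BROKER 'broker.example.com:9092');

-- Continually ingest the 'orders' topic as a relational source.
CREATE SOURCE orders
  FROM KAFKA CONNECTION kafka_conn (TOPIC 'orders')
  FORMAT JSON;
```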

Built for JOINs

Multi-way, complex join support across real-time streams - all in standard SQL.

Separate Storage & Compute

Data is stored cheaply; compute scales independently and without limits.

Incremental Compute Engine

Instead of re-computing on every query, results are updated as data changes.
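For example (table and column names are illustrative), a materialized view is computed once and then maintained incrementally as new rows arrive:

```sql
-- Maintained incrementally: each new order updates only the affected
-- group's sum rather than triggering a full re-scan of the source.
CREATE MATERIALIZED VIEW revenue_by_region AS
SELECT region, sum(amount) AS total
FROM orders
GROUP BY region;
```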

Active Replication

Use replication to increase availability, reduce downtime, and scale seamlessly.

Low-Latency Reads

Results can be maintained in memory, making read latency similar to Redis.
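One way to do this (the view name is illustrative) is to index a view, which keeps its results in memory on the active cluster:

```sql
-- Index the view's results in memory; subsequent reads are served
-- directly from the index at millisecond latency.
CREATE DEFAULT INDEX ON revenue_by_region;
```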

Event-Driven Primitives

Sink changes out to Kafka, or subscribe to query updates in standard Postgres.

What Can You Build with Materialize?

Data Engineers and Developers use Materialize as a better engine for cutting-edge data products.

Real-Time Analytics

Use the same ANSI SQL from data warehouses to build real-time views that serve internal and customer-facing dashboards, APIs and apps.

Automation and Alerting

Build user-facing notifications, fraud and risk models, and automated services using event-driven SQL primitives in a streaming database.

Segmentation and Personalization

Build engaging experiences with customer data aggregations that are always up-to-date: personalization, recommendations, dynamic pricing and more.

ML Ops

Power online feature stores with continually updated data, monitor and react to changes in ML effectiveness - all in standard SQL.

Why Materialize?

Built from the ground up to solve the hard problems in data.

Strong Consistency Guarantees for Streaming Data

Materialize guarantees the strict serializable consistency that is standard in databases, but unprecedented in distributed stream processors.

Consistency in Streaming →
Strong Consistency

Multi-way and Cross-Stream Joins

Write the same kinds of complex, multi-way SQL joins you would use on a traditional data warehouse. Materialize maintains the joins efficiently in memory and serves real-time results.
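A sketch of such a view (all table and column names are illustrative; the join syntax is standard SQL):

```sql
-- A three-way join across streaming sources, maintained incrementally
-- in memory as any of the three inputs change.
CREATE MATERIALIZED VIEW enriched_orders AS
SELECT o.id, c.name, i.sku, o.amount
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN items i     ON o.item_id = i.id;
```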

Joins in Materialize →
Streaming Joins

Event-Driven Primitives: Sink and Subscribe

Push updates out of the database without performance limits using two new primitives:

  • SQL clients can use SUBSCRIBE to have incremental updates to results pushed to them instead of polling.
  • Kafka users can create SINKs to push changes to a view's results out to a topic.
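Sketches of both primitives (the view, connection, and topic names are illustrative, and exact syntax varies by Materialize version):

```sql
-- Push incremental changes to a SQL client instead of polling.
COPY (SUBSCRIBE TO revenue_by_region) TO STDOUT;

-- Or stream the view's changes out to a Kafka topic.
CREATE SINK revenue_sink
  FROM revenue_by_region
  INTO KAFKA CONNECTION kafka_conn (TOPIC 'revenue-updates')
  FORMAT JSON ENVELOPE DEBEZIUM;
```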

Subscribe to changes in a view →
Subscribe to SQL

PostgreSQL Up Front, Timely Dataflow Underneath

Materialize expands access to the powerful stream processing capabilities of Timely Dataflow and Differential Dataflow by wrapping them in a familiar, accessible SQL layer that is wire-compatible with Postgres. Already used in correctness-critical global production deployments by Fortune 100 companies, these battle-tested systems avoid many of the shortcomings of other approaches to stream processing.

Postgres Compatibility Explainer →

Get the Technical 101 on Materialize

Want to learn more? We’ll send you everything you need to know.

Works with Your Existing Data Stack

PostgreSQL

Streaming infrastructure is not required to use Materialize for real-time computation. Connect directly to any Postgres database via CDC and continually ingest data as it changes.
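A minimal sketch of a Postgres CDC source (host, database, and publication names are illustrative; credentials and secrets are omitted, and syntax varies by Materialize version):

```sql
-- Connect to an upstream Postgres database (hypothetical host).
CREATE CONNECTION pg_conn TO POSTGRES (
  HOST 'db.example.com',
  DATABASE 'app',
  USER 'materialize'
);

-- Replicate the tables in the 'mz_source' publication via logical replication.
CREATE SOURCE pg_cdc
  FROM POSTGRES CONNECTION pg_conn (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```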

dbt

Run your existing dbt models on top of streaming data in Materialize, and dbt persists them as materialized views. No matter how much or how frequently your data arrives, your models stay up to date.
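A minimal dbt model sketch, assuming the dbt-materialize adapter (the model name, referenced source, and materialization name are illustrative and may differ by adapter version):

```sql
-- models/revenue_by_region.sql
{{ config(materialized='materializedview') }}

SELECT region, sum(amount) AS total
FROM {{ ref('orders') }}
GROUP BY region
```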

Kafka

Connect multiple Kafka topics to Materialize and easily explore, transform, and join streaming datasets - and sink maintained SQL query results downstream to new, enriched Kafka topics.

Trusted By Data Teams

Think Declaratively, Act Incrementally

Materialize helps you get access to the power of a stream processing engine, with the simplicity of a PostgreSQL-compatible developer interface.

Fast

Millisecond-level latency through incrementally-updated views.

Familiar

Control it with ANSI-standard SQL. Connect with Postgres drivers.

Fully-Featured

Support for multi-way joins, subqueries, upserts, deletes, CTEs.

Sign Up for Early Access

Register for early access to start building real-time analytics dashboards and live applications.

© 2022 Materialize, Inc. Terms of Service