Deep-dive: Join Kafka with a Database using Debezium and Materialize

The Problem: We need to provide (internal or end-user) access to a view of data that combines a fast-changing stream of events from Kafka with a table from a database (which is also changing). Here are a few real-world examples where this problem comes up: calculate API usage by joining API logs in Kafka with […]
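The shape of the solution the post describes can be sketched in Materialize's 0.x-era SQL. Everything here is hypothetical: the broker address, topic names, schema-registry URL, and column names are placeholders, not values from the post.

```sql
-- Kafka stream of fast-changing API log events (hypothetical topic).
CREATE SOURCE api_logs
FROM KAFKA BROKER 'kafka:9092' TOPIC 'api_logs'
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY 'http://schema-registry:8081';

-- Database table replicated into Kafka by Debezium; the DEBEZIUM
-- envelope lets Materialize interpret inserts, updates, and deletes.
CREATE SOURCE users
FROM KAFKA BROKER 'kafka:9092' TOPIC 'dbserver1.public.users'
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY 'http://schema-registry:8081'
ENVELOPE DEBEZIUM;

-- An incrementally maintained join of the stream and the table,
-- e.g. API usage per account.
CREATE MATERIALIZED VIEW api_usage AS
SELECT u.account_id, count(*) AS requests
FROM api_logs l
JOIN users u ON l.user_id = u.id
GROUP BY u.account_id;
```

Because the view is materialized, the join stays up to date as both the Kafka stream and the replicated table change, and can be read back with an ordinary `SELECT`.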

Use Case: Real-time A/B test results with Segment, Kinesis, and Materialize

Introduction: This post is meant primarily to demonstrate how the Segment + Kinesis + Materialize stack can create new capabilities around querying, joining, and ultimately materializing real-time views of customer-centric data. In this case, we're using A/B testing analytics as the data. Why? There's a set of well-known problems and hard-earned lessons that data-centric organizations go […]

Release: 0.7

Materialize 0.7 was released on 8 February 2021 with significant improvements around getting data into Materialize. Key change: source data from Amazon Web Services S3. S3 sources for Materialize are fully tested but remain behind the experimental flag until 0.8. With S3 sources, you can: point Materialize at S3 buckets using the same CREATE SOURCE syntax […]
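An S3 source declaration along the lines the release notes describe might look like the following. This is a sketch of the experimental 0.7-era syntax, and the bucket name, object pattern, and region are all made up for illustration:

```sql
-- Scan a (hypothetical) bucket for matching objects and ingest
-- each line as text; experimental in 0.7.
CREATE SOURCE s3_logs
FROM S3 DISCOVER OBJECTS MATCHING '**/*.log'
USING BUCKET SCAN 'my-analytics-bucket'
WITH (region = 'us-east-1')
FORMAT TEXT;
```

The point of the release note is that this reuses the familiar `CREATE SOURCE` statement, so S3 data plugs into the same views and joins as Kafka or Kinesis data.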

Streaming: Streaming SQL: What is it, why is it useful?

Summary: Streaming SQL means taking the same declarative SQL used to write database queries and instead running it on streams of fast-changing data. This is useful because: data is often more valuable when you can act on it quickly; the existing tools for deriving real-time insights from streams are too complex; and the "declarative" nature […]
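The core idea can be shown in two statements. Assuming a hypothetical `orders` source already exists, the only difference between batch and streaming here is one line of DDL:

```sql
-- A one-shot query: computes the answer once, against data as of now.
SELECT status, count(*) AS n
FROM orders
GROUP BY status;

-- The same query as a materialized view: the result is maintained
-- incrementally as new order events arrive, instead of being recomputed.
CREATE MATERIALIZED VIEW order_counts AS
SELECT status, count(*) AS n
FROM orders
GROUP BY status;
```

That is the "declarative" appeal: you state the question once in ordinary SQL, and the system takes on the work of keeping the answer current.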

About This Blog

Welcome! On our blog, you’ll hear more about the inner workings of Materialize – what we’ve built, what we plan to build, and how it all works together.
