Materialize Pricing
Managed Cloud
Usage-based. Designed for a fully managed, frictionless experience.
$1.5 /
Self-Managed (Early Access)
License-based. Designed for highly regulated environments and flexible deployments.
Get Started with the Materialize Emulator
The Materialize Emulator is an all-in-one Docker image, offering the fastest way to get hands-on experience with Materialize for local development.
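Under the assumption that the Emulator image is published as `materialize/materialized` and serves SQL on port 6875 (check the download page for the current image name and tag), a minimal quickstart might look like:

```shell
# Pull and run the all-in-one Emulator image (image name and ports are
# assumptions; verify against the Materialize download page).
docker run -d --name materialize -p 6875:6875 materialize/materialized:latest

# Connect with any PostgreSQL-compatible client.
psql postgres://materialize@localhost:6875/materialize
```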
Pricing FAQ
What size clusters do I need for my workload?
The resource demands of a workload depend heavily on the three factors below, so we recommend finding the right size by initially running on a 400cc cluster and then checking the resource utilization metrics. Note that scaling a cluster up or down may incur downtime. The three factors that most heavily influence the resource demands of a Materialize workload are:
- Size and type of dataset - Streaming input Sources for larger datasets may need to run on larger cluster replicas, and high-cardinality data kept in storage will require larger clusters to process and index.
- Throughput of changes - Changes (Updates/Inserts/Deletes) are what trigger work in Materialize, so the more often data is changing, the more computation work Materialize does.
- Quantity and complexity of transformations - As in a traditional database, the amount of resources needed to compute a specific SQL query can vary dramatically based on the joins, aggregations, window functions, CTEs, subqueries, and computations involved.
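The sizing workflow above can be sketched in Materialize's SQL dialect (the cluster name and the target size in the second statement are illustrative):

```sql
-- Start on a 400cc cluster, as recommended above.
CREATE CLUSTER analytics (SIZE = '400cc');

-- After observing utilization, scale up or down as needed;
-- note that resizing may incur downtime.
ALTER CLUSTER analytics SET (SIZE = '800cc');
```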
How many clusters do I need?
Clusters can be used strategically to isolate your workloads from failure in several ways.
- Separation of Source ingest responsibilities - Streaming input Sources run on their own clusters (either one source per cluster, or multiple sources per cluster).
- Separation of use cases - Teams can isolate separate use cases (e.g. Reporting, Feature Serving, Alerting) to ensure changes to one don’t affect the others.
- Separation of dev/stage/prod - Teams can have smaller separate clusters for development and staging work.
- Separation of compute stages - Teams can use materialized views to write the outputs of a SQL transformation down to storage, and then pull the results back up into new clusters for further computation or serving. This allows for pipelines architected to continue serving stale results in the case of an upstream failure.
Further reading: Clusters Explained
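The compute-stage pattern above can be sketched in SQL (the cluster, view, and column names are illustrative):

```sql
-- Stage 1: compute a transformation on one cluster and persist the
-- results to storage via a materialized view.
CREATE MATERIALIZED VIEW orders_enriched IN CLUSTER transform AS
SELECT o.id, o.amount, c.region
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- Stage 2: a separate serving cluster pulls the persisted results back up
-- and indexes them; if the transform cluster fails, this index can keep
-- serving (stale) results.
CREATE INDEX orders_enriched_idx IN CLUSTER serving ON orders_enriched (region);
```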
How can I forecast storage costs?
Storage is roughly proportional to the size of your source datasets plus the size of your materialized views. More broadly, two factors keep storage costs a very small percentage of real-world Materialize invoices:
- Materialize uses cheap, scalable object storage (currently S3 on AWS) as its storage layer and largely passes the cost through to the customer. At a rate of $0.0000411 per GB-hour, 1 TB stored for one month (730 hours) equates to roughly $30 USD.
- With the exception of append-only sources, most data in Materialize is continually compacted, so the total state stored in Materialize tends to grow at a rate more similar to OLTP databases than traditional cloud data warehouses.
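As a back-of-envelope check of the rate above (taking 1 TB ≈ 1,000 GB), the arithmetic can be run in any SQL shell:

```sql
-- $0.0000411 per GB-hour × 730 hours × 1,000 GB ≈ $30 per month for 1 TB.
SELECT 0.0000411 * 730 * 1000 AS usd_per_month;  -- ≈ 30.00
```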
What are the terms of the Free Trial?
See Free Trial FAQs
How can I buy through AWS Marketplace?
If you have already committed spend in AWS, you can put it toward Materialize credits through AWS Marketplace. Get in touch and we can help you through the process.