Until now, workloads that exceeded the available memory on a cluster would run into hard limits. That meant tough trade-offs: either scale up hardware (expensive), or re-engineer workloads to fit (time-consuming).

We’re excited to announce a new Materialize Cloud cluster type: M.1 clusters. These clusters give customers more capacity, leading to better economics and performance, all while preserving the low latency that Materialize is known for. And, of course, without compromising correctness or consistency.

What You Can Expect

  • Bigger workloads, same freshness: Run multi-terabyte workloads on clusters with far less RAM than previously required, with observed p99 end-to-end latency of less than 1 second.
  • Predictable performance: When memory fills, Materialize intelligently spills cold data to disk, avoiding crashes and out-of-memory errors.
  • Seamless rollout: Materialize Cloud customers can easily begin using these clusters today simply by altering their cluster types.

The Results

As discussed in our Scaling Beyond Memory blog post, Materialize can now spill most of a cluster’s state to disk before the cluster runs out of memory. After extensive testing, we can announce that our new M.1 clusters take advantage of a larger disk-to-memory ratio.

We were able to realize significant improvements in performance for Cloud customers’ existing workloads:

  • Larger workloads: Workloads up to 3x larger fit in the same amount of physical RAM.
  • Low latency: p99 end-to-end latency under 1 second.
  • High responsiveness: Queries still respond within single-digit milliseconds.

We’ve also observed that many customers can scale down their existing clusters by switching to M.1 clusters instead of our legacy sizes. Note that because M.1 clusters spill more to disk, hydration times can be longer than they were on legacy sizes. Users can mitigate this by using autoscaling during deployment.

How-To Guide

All Cloud customers now have access to these new clusters. Simply specify the new cluster size when creating or altering a cluster.
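As a quick sketch, the statements below show both paths. The cluster name analytics and the size string 'M.1-medium' are placeholders; check our docs for the exact M.1 size names available in your environment.

    -- Create a new cluster on an M.1 size (size name is illustrative).
    CREATE CLUSTER analytics (SIZE = 'M.1-medium');

    -- Or move an existing cluster to an M.1 size.
    ALTER CLUSTER analytics SET (SIZE = 'M.1-medium');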

Troubleshooting

Now that clusters are backed by swap, we no longer differentiate between memory and disk: both are simply places to put bytes. Going forward, users should consider Memory Utilization as a whole. We’ve updated both our Console UI and the underlying system catalog to account for this change.
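As a sketch of how to read this from SQL, the query below reports per-replica utilization from the system catalog. It assumes the existing mz_internal.mz_cluster_replica_utilization view is where the combined figure surfaces, via its memory_percent column; the exact view and column names may differ, so check the system catalog docs for your version.

    -- Inspect combined memory utilization per replica (view and column
    -- names are assumptions based on the existing system catalog).
    SELECT
        c.name AS cluster,
        r.name AS replica,
        u.memory_percent
    FROM mz_internal.mz_cluster_replica_utilization AS u
    JOIN mz_catalog.mz_cluster_replicas AS r ON r.id = u.replica_id
    JOIN mz_catalog.mz_clusters AS c ON c.id = r.cluster_id
    ORDER BY u.memory_percent DESC;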

Users should update their downstream alerting to ensure they’re notified when clusters are nearing full Memory Utilization, rather than alerting on individual memory or disk metrics.
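For instance, an alerting check along these lines (again a sketch that reuses the assumed view above, with an 85% threshold chosen purely for illustration) flags replicas nearing full Memory Utilization:

    -- Flag replicas nearing full Memory Utilization (threshold is illustrative).
    SELECT r.name AS replica, u.memory_percent
    FROM mz_internal.mz_cluster_replica_utilization AS u
    JOIN mz_catalog.mz_cluster_replicas AS r ON r.id = u.replica_id
    WHERE u.memory_percent > 85;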

Pricing

To account for the additional capacity, credit prices for these new clusters have been adjusted. Please review our updated Pricing page for the new cluster sizes and credits-per-hour pricing.

You can also review our docs for more details on the resources behind each cluster size.

We intend to sunset our legacy cluster types in the future. Contact support or your Account Executive for more information.

Conclusion

This change doesn’t just make Materialize more resilient; it expands the universe of workloads we can power. Whether you’re maintaining state across billions of events, running complex joins on massive tables, or standing up new operational applications that demand both scale and freshness, Materialize now adapts to your needs more flexibly than ever. We’re excited to see what use cases our customers will support with these new, more cost-efficient clusters.

If you have any questions about how this impacts your environment, please ask Matty (via the chatbot in the right-hand corner of our website), contact support, or reach out to your Account Executive to be connected with our team.

For new customers, don’t hesitate to contact our team to schedule a demo, or start a free Cloud trial to test them out yourself.

Get Started with Materialize