In our recent workload performance analysis we examined how Materialize achieves 100x greater throughput and 1000x lower latency than Aurora PostgreSQL, at a comparable price, when used as a read replica for workloads with non-trivial business logic. In this post, we expand on that analysis with a cost-savings comparison between Materialize and Aurora PostgreSQL at a comparable level of performance.

Here’s the big headline: Materialize costs 1/20th what Aurora PostgreSQL read replicas cost when you have non-trivial business logic. We’ll spend the rest of the post unpacking this conclusion and what needs to be true to get there.

The tl;dr is that if your queries are any more complicated than SELECT * FROM table WHERE key = <literal>, you may want to re-evaluate using an OLTP read replica like Aurora.

Starting Context: Your OLTP Data & How to Use It

OLTP databases excel at keeping your data up to date and maintaining transactional consistency as updates happen. They are also great at maintaining indexes over this data, from which they can answer direct read queries and efficiently tackle some types of joins.

OLTP databases are less well equipped to compute and maintain complex business logic over your data. It’s one thing to want to read out a customer’s address, or their three most recent purchases.

It’s another thing to want to know as soon as ten customers share the same set of three most recent purchases, an indication of an emergent commercial trend, or potentially of fraud. OLTP databases can execute complex business logic expressed as SQL, but it’s far from clear that they are well equipped to keep the results fresh as the data changes.
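To make that concrete, here is a rough sketch of what such a query might look like, assuming a hypothetical purchases table with customer_id, item_id, and purchased_at columns (your schema and exact logic will differ):

    -- Hypothetical schema: purchases(customer_id, item_id, purchased_at)
    WITH recent AS (
        SELECT customer_id, item_id,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY purchased_at DESC) AS rn
        FROM purchases
    ),
    last_three AS (
        -- Each customer's set of three most recent purchases, as a sorted array
        SELECT customer_id, ARRAY_AGG(item_id ORDER BY item_id) AS basket
        FROM recent
        WHERE rn <= 3
        GROUP BY customer_id
        HAVING COUNT(*) = 3
    )
    SELECT basket, COUNT(*) AS num_customers
    FROM last_three
    GROUP BY basket
    HAVING COUNT(*) >= 10;  -- ten or more customers share the same basket

Answering this on demand means re-scanning and re-aggregating the purchase history on every read, which is exactly the kind of work an OLTP engine is not built to repeat continuously.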

One common approach is to spin up read replicas: mirrored versions of the OLTP database that scale the amount of read work you can do without interfering with the primary database. Read replicas sound like a great fix: you can spin up as many as you need, and it’s almost as if you were querying the primary itself. But just because they are readily available doesn’t mean they are economical. Each replica is still an OLTP engine doing the work for you, and it’s not clear that this is the best tool for the job.

Let’s dive in and see how Aurora read replicas handle complex business logic, and compare it to Materialize.

Workload Performance Analysis: Materialize Delivers Significant Cost Reductions

Our previous workload performance analysis blog focused on dynamic pricing for an online retailer. We compared a db.r6gd.xlarge Aurora PostgreSQL read replica against a 50cc Materialize cluster.

For this post, we tested a brokerage scenario, a more complex and higher-volume workload. We compared Materialize (on a modest 50cc cluster) to Aurora PostgreSQL on both a price-comparable db.r6gd.xlarge instance and a much larger db.r6id.24xlarge instance (which, as we’ll see, still isn’t performance-comparable).

For our performance analysis, we ran identical workloads against all configurations under controlled conditions. Each test drove 10 concurrent connections, each executing a continual stream of inserts, updates, and deletes against the primary database node, which was Aurora in all cases.

Meanwhile, we measured the execution time of data-intensive read-only queries on the Aurora read replicas and on Materialize. These queries included identifying top-performing securities, providing customer portfolio overviews, summarizing pending orders, and calculating order fulfillment rates. These are representative key operations for a brokerage managing high volumes of live data.
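To give a flavor of the read side, here is a simplified sketch of what an order-fulfillment-rate query might look like, assuming a hypothetical orders table with account_id and status columns (the actual benchmark queries are more involved):

    -- Hypothetical schema: orders(order_id, account_id, status)
    SELECT account_id,
           (COUNT(*) FILTER (WHERE status = 'filled'))::numeric
             / NULLIF(COUNT(*), 0)                    AS fulfillment_rate,
           COUNT(*) FILTER (WHERE status = 'pending') AS pending_orders
    FROM orders
    GROUP BY account_id;

Each such query aggregates over a large and constantly changing table, so a read replica has to redo that aggregation from scratch every time it is asked.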

Let’s walk through how we arrived at the headline numbers, along with some other insights. Our three configurations have different costs at the time of writing:

  • A 50cc Materialize cluster in us-east-1 costs $0.75 per hour.
  • A db.r6gd.xlarge Aurora instance in us-east-1 costs $0.514 per hour.
  • The db.r6id.24xlarge costs $14.314 per hour — 28x the cost of the smaller Aurora instance, and almost 20x the cost of the Materialize cluster.

These prices aren’t exactly lined up, but the performance differences are stark enough to render the fine detail irrelevant:

  • Materialize achieves 65.53 read transactions per second and a p99 latency of 0.45 seconds.
  • Aurora on the smaller db.r6gd.xlarge instance serves only 0.63 transactions per second, with a p99 latency of 60 seconds.
  • Aurora on the larger db.r6id.24xlarge instance serves only 10 transactions per second, with a p99 latency of 7.4 seconds.

As you can see, even spending nearly 20x as much, we could not find an Aurora instance size that keeps up with Materialize.

Conclusions

An OLTP database like Aurora excels at maintaining your data, and indexes over your data. It does not excel at maintaining non-trivial business logic over your data.

Materialize is designed from the ground up to both compute and maintain the results of complex SQL. Rather than performing from-scratch work on every read query, Materialize incrementally updates results as the underlying data changes. This approach gives it, and you, a dramatic advantage in throughput, latency, and cost.
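Concretely, you express the business logic once as a view and ask Materialize to maintain it. Here is a minimal sketch, reusing the hypothetical orders schema from the example above (the view and index names are ours):

    -- Define the business logic once as a view.
    CREATE VIEW order_fulfillment AS
    SELECT account_id,
           (COUNT(*) FILTER (WHERE status = 'filled'))::numeric
             / NULLIF(COUNT(*), 0) AS fulfillment_rate
    FROM orders
    GROUP BY account_id;

    -- Indexing the view tells Materialize to keep its results
    -- incrementally up to date as orders change.
    CREATE INDEX order_fulfillment_idx ON order_fulfillment (account_id);

    -- Reads hit the maintained results instead of recomputing them.
    SELECT * FROM order_fulfillment WHERE account_id = 42;

The read query returns current results without rescanning the orders table, which is where the throughput and latency advantages come from.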

If you’d like to try out Materialize, you can sign up for a free trial or request a demo.
