In this post, we’ll talk about how a new feature in Materialize makes data as fresh as if you were using your primary database. And Materialize’s reaction time, the delay from an input data change to the corresponding query output update, can, perhaps counter-intuitively, be even shorter than when you query your primary directly. You can hear about results from Materialize before your primary can figure out what changed.

If that sounds surprising, it’s meant to be. But it’s also true, so let’s get to explaining things.

Preliminary terms and concepts

Let’s start with unpacking freshness, responsiveness, and reaction time. These aren’t ironclad terms, but we’ll define them here in terms of three consequential real-world moments in your interaction with a database.

  1. C: The moment you issue the command,
  2. R: The moment you receive a response,
  3. V: The moment reflected by the response.

You probably feel the C and R moments most viscerally. The C moment is when you press return on your console, and the R moment is when the answer shows up in front of you. The V moment is also critical for understanding freshness and reaction time, though, and it’s not always related to C and R.

There are a few intervals of time I think about:

  1. Response time (from C to R). How long does it take to answer your question?

    A primary database, replica, or cloud data warehouse likely starts work only when the query arrives, and it may have a lot of work to do. On the other hand, a cache may be able to return almost immediately, without any query processing.

  2. Replication lag (from V to C). How long does it take new data to reach your system?

    At the moment you press enter, how stale are the data at the queried system? The lag may be zero if you are asking your primary database, or it may be tremendous if you are replicating to a cloud data warehouse. If you are using a cache, this might be a configuration parameter you can set (though likely not one you can drive to zero).

  3. Reaction time (from V to R). How long does it take for new data to influence an answer?

    This often looks like a sum of the previous two intervals. If replication and command processing are both ongoing, and commands are executed against the currently replicated data, then both staleness and latency contribute to the reaction time. Curiously, in streaming systems this can actually be smaller than either of the other intervals, because data updates are autonomously converted to responses without awaiting a corresponding command, but we won’t go into that here.

It’s often easy to bring either response time or replication lag close to zero, when done in isolation. What’s fundamentally hard to do, and often more important, is to reduce the critical path of reaction time: the delay from newly arrived data to the correspondingly updated query outputs.
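To make the arithmetic concrete, here is a minimal Python sketch (all timestamps are invented for illustration) that computes the three intervals from the moments V, C, and R:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    v: float  # V: the moment reflected by the response
    c: float  # C: the moment the command is issued
    r: float  # R: the moment the response arrives

    @property
    def response_time(self) -> float:    # C to R
        return self.r - self.c

    @property
    def replication_lag(self) -> float:  # V to C
        return self.c - self.v

    @property
    def reaction_time(self) -> float:    # V to R
        return self.r - self.v

# A replica that is five seconds stale and answers in 200ms:
# response time 0.2s, replication lag 5.0s, reaction time 5.2s.
q = Interaction(v=100.0, c=105.0, r=105.2)
print(q.response_time, q.replication_lag, q.reaction_time)
```

Note that reaction time is the sum of the other two intervals only when V precedes C and the work happens in sequence; we’ll see below how to do better.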

Conventional systems

When you use a strict serializable system, V must lie between C and R: each response is required to reflect a state of the database between the command and the response. This means that if the response time (C to R) is small, your reaction time (V to R) is small too. If you aren’t using your primary database, you probably aren’t getting strict serializability.

When you use a (non-strict) serializable system, V may come before C and R. In this case, a fast response time may not indicate a fast reaction time. You may get results quickly, but if they don’t reflect reality, you’ll need to ask again. And of course, by the time you get those answers, they are already out of date. The time you need in order to react to new data can be large, even in a responsive system.

When you use a system with weaker consistency guarantees, the whole framing of C, V, and R may not even apply. Eventually consistent systems may provide an answer that is incorrect for all times. Some caches and streaming systems may not even be able to speak in terms of commands, validity, or responses. This doesn’t mean they are inherently bad, but you’ll need to find another framework for reasoning about the confidence with which you should act on their responses.

When you use your primary database, you may have the option of strict serializability, serializability, or even weaker isolation levels. Most other solutions provide non-strict serializability. The classic example is a read replica, which uses the replication log of the primary to populate and maintain a secondary, with some amount of replication lag. Farther out there, you could replicate data out to a data warehouse, which usually introduces enough replication lag that the concept of “reaction time” shows up mostly in post-mortems: times are in hours, or days.

Introducing Materialize

Like many other systems, Materialize replicates your data off of the primary’s replication log. However, it has a few tricks up its sleeves that change the reaction time equations.

Materialize’s first trick is the subject of this post. Materialize is able to get the replication lag (from V to C) down to zero. It does this by ensuring that V comes after C. When you issue a command at C, Materialize can transact against the upstream primary to learn the current state of the replication log V, and then ensure that its response at R reflects at least everything through V.

It’s a surprisingly simple strategy to remove replication lag: just…wait out the lag.
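As a sketch of the general idea (not Materialize’s actual implementation; the three callables are hypothetical stand-ins), the pattern is: pin V to the primary’s current log position at the moment the command arrives, and hold the response until replication has caught up:

```python
import time
from typing import Callable

def fresh_query(
    primary_log_position: Callable[[], int],  # ask the primary: where does the log end right now?
    ingested_past: Callable[[int], bool],     # has replication applied the log at least this far?
    run_query: Callable[[], object],          # actually compute the answer
):
    # C: the command arrives. Pin V to the primary's log position as of now...
    target = primary_log_position()
    # ...and wait out the lag, so that V comes after C rather than before it.
    while not ingested_past(target):
        time.sleep(0.01)
    # R: the response reflects everything through V; the replication lag is gone.
    return run_query()
```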

It’s not as popular a strategy as you might think. In most systems, you first wait out the replication lag (V to C) and then wait out the response time (C to R), meaning you end up with a reaction time that’s no better and a worse response time to boot. You can see a form of this approach with “read your writes” and “causal” consistency levels: you can use a moment in the replication log you have heard of to insist that your reads reflect at least that moment. But performance-wise, waiting out the lag is usually painful for interactive uses on existing technologies.
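You can see the mechanics against a stock PostgreSQL read replica. The sketch below (the connection strings and the `orders` table are placeholders) captures the primary’s current log position with `pg_current_wal_lsn()`, then polls the replica’s `pg_last_wal_replay_lsn()` until it has replayed past that position before reading. This is exactly the sequential wait-then-query cost described above.

```python
import time
import psycopg2  # assumes the psycopg2 PostgreSQL driver is installed

primary = psycopg2.connect("host=primary.example dbname=app")  # placeholder DSNs
replica = psycopg2.connect("host=replica.example dbname=app")
primary.autocommit = True
replica.autocommit = True

# On the primary: where does the replication log end right now?
with primary.cursor() as cur:
    cur.execute("SELECT pg_current_wal_lsn()")
    target_lsn = cur.fetchone()[0]

with replica.cursor() as cur:
    # Wait out the lag (V to C): poll until the replica has replayed past target_lsn.
    while True:
        cur.execute(
            "SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), %s) >= 0",
            (target_lsn,),
        )
        if cur.fetchone()[0]:
            break
        time.sleep(0.05)
    # Only now pay the response time (C to R); the read reflects everything through target_lsn.
    cur.execute("SELECT count(*) FROM orders")  # placeholder query
    print(cur.fetchone()[0])
```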

Materialize’s second trick is what turns this reasoning on its head.

Materialize both computes and incrementally maintains query results. It does not have to first wait out the replication lag, and then start query processing. Materialize can start the query immediately with what data it has, and update the results as the necessary bits of the replication log stream in. The time taken for a new query is roughly the maximum of the time from V to C and the time from C to R, rather than their sum.
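A toy simulation shows the difference. Both timelines below use invented numbers, 400ms of replication lag and 300ms of query processing; run in sequence they cost their sum (~0.7s), overlapped they cost roughly their maximum (~0.4s):

```python
import asyncio
import time

REPLICATION_LAG = 0.4  # V to C: invented, 400ms of log still to ingest
QUERY_WORK = 0.3       # C to R: invented, 300ms of query processing

async def wait_out_lag():
    await asyncio.sleep(REPLICATION_LAG)  # stand-in for ingesting the log

async def process_query():
    await asyncio.sleep(QUERY_WORK)       # stand-in for computing the answer

async def sequential():
    await wait_out_lag()    # first catch up...
    await process_query()   # ...then start the query: lag + work

async def overlapped():
    # Start the query on the data at hand while the log streams in,
    # updating results as it arrives: max(lag, work).
    await asyncio.gather(wait_out_lag(), process_query())

for label, job in [("sequential", sequential), ("overlapped", overlapped)]:
    start = time.perf_counter()
    asyncio.run(job())
    print(f"{label}: {time.perf_counter() - start:.2f}s")
```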

This change becomes more dramatic the more of your business logic you move into SQL views. As with queries, Materialize can compute and incrementally maintain views. But unlike ad hoc queries, views are maintained even when no one is reading from them. This removes query processing from the critical path and replaces it with view maintenance, which can be much cheaper. When a query arrives that reads from these views, so much of the work is already done that often it’s just a matter of waiting out the replication lag. The reaction time can be as little as the time to confer with the primary and confirm the maintained result is current.
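Concretely, because Materialize speaks the PostgreSQL wire protocol, this looks like ordinary SQL issued from any Postgres client. The view and table names below are hypothetical, and the DSN is a placeholder:

```python
import psycopg2  # Materialize is PostgreSQL wire-compatible, so standard drivers work

conn = psycopg2.connect("host=materialize.example port=6875 dbname=materialize")  # placeholder DSN
conn.autocommit = True

with conn.cursor() as cur:
    # Move the business logic into a view once; Materialize computes it and then
    # incrementally maintains the result as the replication log streams in.
    cur.execute("""
        CREATE MATERIALIZED VIEW revenue_by_region AS
        SELECT region, sum(amount) AS revenue
        FROM orders          -- hypothetical table replicated from the primary
        GROUP BY region
    """)

    # Reads against the maintained view skip the heavy query processing; what
    # remains on the critical path is (at most) waiting out the replication lag.
    cur.execute("SELECT revenue FROM revenue_by_region WHERE region = %s", ("EMEA",))
    print(cur.fetchone())
```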

This brings us to a perhaps surprising conclusion: Materialize can provide both zero replication lag and a faster reaction time than the primary itself. While the primary has zero replication lag, the response times of OLTP databases are not always great, especially for complex queries. Although Materialize does lag behind the primary, it can wait this lag out and then update its results all while the primary is still working on the query. You get the freshness of working on the primary, combined with the response time of a cache.

Takeaways

Materialize’s mission is to empower organizations to act confidently on fast-changing data. Our goal is to ruthlessly cut the time between an event in the real world and your understanding of its consequences. The reaction time of a system measures how long new data takes to be reflected in query outputs, and it is how we judge ourselves.

The framing of “time to confident action” calls out limitations in using either response time or replication lag alone. Systems that optimize one or the other may not combine to provide information that is both prompt and actionable. Additionally, when we think hard about how to reduce the reaction time, we come up with new architectures like Materialize. Do as much work as you can ahead of time, to clear the decks for newly arrived data and the work needed to update results.

Materialize has native connectors for OLTP databases like PostgreSQL and MySQL. You can sign up for a free trial and see for yourself whether Materialize gives you the freshness of exclusive access to your primary database, with an even faster reaction time.

Try Materialize Free