Artificial intelligence is seeping into every customer interaction, but the experiences still feel canned. A model can write a paragraph that sounds human, yet it usually speaks from a frozen snapshot of reality. The shoes you wanted are already out of stock, the flight you booked moved gates ten minutes ago, and the discount code the bot cheerfully recommends expired yesterday.

The missing ingredient is fresh, structured context: operational data that changes moment to moment. Without it, Large Language Models (LLMs) hallucinate, waffle, or simply offer the wrong answer. And while Retrieval‑Augmented Generation (RAG) has become the default way to bolt knowledge onto a model, RAG assumes you can pre‑index the world. You cannot pre‑index a warehouse inventory that drains and refills every hour, a user’s evolving loyalty status, or the price of a volatile stock.

To serve dynamic customer experiences such as “Yes, you can still get same‑day delivery if you order in the next seventeen minutes,” we need a new foundation: Operational Data Products. Materialize turns the views you already write into those products, exposing them as callable, versionable tools that models can trust. This post explains why that shift matters and how we’re making it automatic.

Operational Data Products, Not Queries

Most applications do not need creative query generation and unlimited access to a database. They need authoritative answers to canonical questions. How many loyalty points do I have right now? Which conference rooms are free at 2 PM? Is order #8126 still scheduled for tonight’s truck run? These questions map to core business concepts that rarely change, even as underlying schemas evolve and the data beneath them churns constantly. That stability is what lets teams build software they can reason about.

Operational Data Products capture that stability. They are named, versioned, incrementally maintained views that surface the truth of a single business concept. Each one behaves like a composable API contract: continuously up‑to‑date, provably correct, and stable even as the underlying tables add or modify columns. Because they are views, not ad‑hoc queries, they can be documented, tested, and observed like any other piece of software.
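
For instance, here’s a minimal sketch of one such product, assuming a hypothetical loyalty_events table with member_id and points_delta columns:

sql
-- Hypothetical data product: a member's live loyalty balance.
-- Assumes a loyalty_events table (member_id, points_delta).
CREATE VIEW loyalty_balance AS
SELECT member_id, SUM(points_delta) AS points
FROM loyalty_events
GROUP BY member_id;

-- Indexing the view tells Materialize to maintain it incrementally
-- and serve member_id lookups from memory.
CREATE INDEX ON loyalty_balance (member_id);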

Why RAG Falls Short for Dynamic Context

RAG is great for unstructured text: product descriptions, airline reviews, policy manuals. It retrieves documents and lets the model summarize them. But dynamic decisions depend on structured facts that not only mutate faster than any crawler can keep up with, but often must be computed at the exact moment of the request.

You could, in theory, rebuild the vector index and rerun every aggregation pipeline every time an input changes. In practice, the I/O and compute bills explode, and you still risk the model serving a ten‑second‑old lie. Freshness is binary: either you guarantee it or you don’t. RAG can’t.

execute_sql Is Not an Interface

Recognizing the gap between static knowledge and live context, many databases have shipped some version of the same Model Context Protocol (MCP) server: a single tool called execute_sql that lets the model generate raw SQL and run it against the database to fetch fresh data at the source. It feels powerful; nothing is off limits, and that makes it easy to get started. For ad hoc analysis, debugging, and early prototyping, it works well. But then you go to production, every prompt becomes a query hitting the OLTP database, and everything slows to a crawl. Worse, every reformulation of a prompt can change the query plan, the row count, the bill, and sometimes the correctness.

You can check in the prompt, sure, but that doesn’t mean you’ve versioned the behavior. You can’t guarantee determinism. You can’t diff changes in a pull request and say “this is what will run now.” Two nearly identical inputs can yield totally different queries.
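
As a hypothetical illustration, assuming an orders table and a shipments table, here is how two phrasings of the same question might compile to different SQL:

sql
-- "How many orders shipped today?" might become:
SELECT count(*)
FROM orders
WHERE shipped_at::date = current_date;

-- ...while "How many orders went out today?" might become:
SELECT count(*)
FROM orders o
JOIN shipments s ON s.order_id = o.id
WHERE s.departed_at >= date_trunc('day', now());
-- Same intent; different plan, different row count, possibly a different answer.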

In other words, free‑form SQL violates every principle of a sane software development lifecycle. Interfaces and contracts matter. If your production stack is an unversioned string of ad-libbed SQL, you’re not shipping AI, you’re shipping vibes.

The OLTP Bottleneck and the Warehouse Dead End

The fresh data your model needs lives in the OLTP system that processes writes. Those systems are tuned for single‑row lookups, not analytic joins. Push them too hard and you hurt checkout flows. The classic workaround is to ship data to a warehouse, where analytic queries fly but the ETL process introduces minutes of lag. For dynamic experiences, minutes are an eternity.

Materialize is the engine behind Operational Data Products. It runs heavy logic continually and incrementally, maintains strong consistency, and serves results with millisecond‑level latency. You get warehouse‑style performance with OLTP‑grade freshness. This is how your LLMs can stay within their latency budget while relying on complex business logic, without shortcuts or guesswork.

Materialize Turns Views Into Tools

Today we are closing the final mile. Every indexed view your role can access under its existing RBAC privileges automatically appears as a typed, callable tool over the Model Context Protocol (MCP), a standard way for models to discover and fetch relevant context. If you have written the view and created an index, you have already built the interface: fully governed, secure, and ready for production.

How a View Becomes a Tool

A client asks “What’s my portfolio worth right now?” You can’t answer that from a vector store, or from a prompt and a hope: you need positions, prices, and the firm’s valuation logic, joined together, kept fresh to the second, and correct every time. Portfolio value drives decision making, creates trust, and carries compliance obligations. If it’s wrong, the system isn’t glitchy, it’s broken.

With Materialize, you express the logic once as a SQL view. The database keeps it incrementally up to date and indexed for fast access. With the new Materialize MCP Server, your LLM now has automatic access to this view as a callable tool.

sql
CREATE VIEW portfolio_context AS
WITH live_prices AS (
  -- Temporal filter: keep only prices observed within the last minute.
  SELECT symbol, price_usd, as_of
  FROM market_prices
  WHERE as_of + INTERVAL '1 minute' >= mz_now()
),
portfolio_stats AS (
  -- Value each position at the latest market price.
  SELECT p.client_id, p.symbol, p.shares, lp.price_usd,
         lp.price_usd * p.shares AS position_value
  FROM positions p
  JOIN live_prices lp
    ON p.symbol = lp.symbol
),
portfolio_agg AS (
  -- Roll positions up into a single row per client.
  SELECT
    client_id,
    SUM(position_value) AS total_value,
    JSONB_AGG(
      JSONB_BUILD_OBJECT(
        'symbol', symbol,
        'shares', shares,
        'value', position_value,
        'price', price_usd
      )
    ) AS holdings
  FROM portfolio_stats
  GROUP BY client_id
)
SELECT client_id, total_value, holdings
FROM portfolio_agg;

CREATE INDEX ON portfolio_context (client_id);

COMMENT ON VIEW portfolio_context IS
  'Given a client ID, return their live portfolio value, with real-time valuations for each holding based on the latest market prices.
   Use this tool to generate up-to-date investment summaries or personalized trading recommendations.';

Materialize maintains the view continuously and uses the index to guarantee single-row lookup performance. Because the view is indexed and access-controlled, the Materialize MCP Server automatically advertises it as a typed tool, scoped precisely to the indexed column, shaped by the schema, and described by the comment.

json
{
  "name": "portfolio_context",
  "description": "Given a client ID, return their live portfolio value, with real-time valuations for each holding based on the latest market prices. Use this tool to generate up-to-date investment summaries or personalized trading recommendations.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "client_id": { "type": "string" }
    },
    "required": ["client_id"]
  }
}

The model calls the tool, passes in a client ID, and receives a stable, low-latency payload. The business logic lives in the view, where it can be versioned, tested, observed, and trusted.
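
Concretely, over MCP’s JSON‑RPC transport the call might look like the sketch below, with a made‑up client ID; the exact response shape depends on the server:

json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "portfolio_context",
    "arguments": { "client_id": "c-1042" }
  }
}

The server responds with the client’s current total_value and holdings, read straight from the always‑fresh indexed view.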

Why This Model Works

Indexed views behave like code. You can write unit tests that exercise edge cases. You can version them in Git, roll them back, diff them, annotate them. You can observe them through metrics such as queries per second or p99 latency because they are first‑class citizens, not ephemeral strings in a prompt.
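
As a hypothetical example of such a test, a CI job might assert that an invariant query over the view returns no rows:

sql
-- Hypothetical smoke test: no portfolio should ever report a negative value.
-- In CI, fail the build if this query returns any rows.
SELECT client_id, total_value
FROM portfolio_context
WHERE total_value < 0;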

Security improves because the surface area shrinks to the tools you expose. If the view omits sensitive columns, the tool cannot leak them. Consistency improves because every call executes the same, pre‑computed plan. Performance improves because you pay the cost of the join or aggregation once, not on every request.
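
For example, here is a minimal sketch, assuming a hypothetical orders table that also stores customer PII: the generated tool can only ever return what the view selects.

sql
-- Hypothetical: orders also holds customer_email and payment details,
-- but this view never selects them, so the tool cannot leak them.
CREATE VIEW order_status AS
SELECT order_id, status, promised_delivery
FROM orders;

CREATE INDEX ON order_status (order_id);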

The result is a better user experience. Answers arrive faster, with fewer failures. You don’t need to guess what the model will do, and your users don’t need to wait, retry, or wonder if something broke. Every interface is fast, stable, and explainable, because the logic behind it behaves like real software.

What This Means for Teams

Teams ship faster, and the dance of schema‑API‑prompt triangulation disappears. You define a view, you index it, and the interface exists. There is no middleware to write, no business logic to duplicate, and no glue code to maintain. Front‑end developers can mock it. QA can test it. LLM engineers can reference it without learning your table layout. When the schema evolves, you update the view in one place instead of chasing down three codebases.

In customer terms, this collapses friction. A retail assistant can answer inventory questions down to the fulfillment‑center shelf. A banking chatbot can access real‑time portfolio exposure. A logistics agent can route a truck based on live traffic, not the snapshot from last night’s ETL job.

The Road Ahead

Operational Data Products shift the mental model from “run this query” to “use this capability.” They let AI reason over business concepts instead of raw tables, and they guarantee the freshness and efficiency that data-intensive applications demand.

Materialize automates the heavy lifting: incremental view maintenance, index management, and now interface generation. The result is a data layer that behaves like a well‑factored codebase: composable, testable, observable, and fast.

The ability to pull together fresh context for models no matter where it resides will define the winners of the AI era. Your models need to know what is true right now, and your engineers need a sane way to serve that truth. Operational Data Products are that way, and Materialize is the platform that makes them effortless.

Get Started with Materialize