Enterprise Context Engineering

Sid Sawhney
March 26, 2026

Today’s Context Engineering

Context engineering is top of mind for every enterprise building agentic applications. Anyone working with AI today is wondering: how can I provide an LLM with all the data it needs to give the most accurate response?

Right now, context engineering is in its infancy. It’s done ad hoc by the teams building the initial demos and proofs-of-concept of agentic systems. There are no established patterns for structuring context data, and as a result we’re seeing a lot of focus on application-level strategies like file naming conventions, LLM note-taking strategies, and so on. These are relatively easy-to-implement, low-hanging-fruit approaches to context engineering challenges, but we’re only now starting to explore context engineering design patterns for enterprise-scale, production applications.

Context Engineering of the Future

Context engineering is a challenge that’s going to grow into an established domain in the AI application stack. Two constraints today make context engineering make-or-break for agentic application performance.

The first comes from a fundamental limitation of today’s LLMs: the limited context window. This is noticeable to anyone who prompts an LLM in a long-running chat. Once you give an LLM too much information, it forgets details and gives inaccurate results. This is a known problem and a focus of AI research today, but there’s no good solution yet. Anthropic’s Effective Context Engineering for AI Agents blog post describes it well:

This attention scarcity stems from architectural constraints of LLMs. LLMs are based on the transformer architecture, which enables every token to attend to every other token across the entire context. This results in n² pairwise relationships for n tokens.

As its context length increases, a model's ability to capture these pairwise relationships gets stretched thin, creating a natural tension between context size and attention focus.
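The quadratic growth described above is easy to quantify. With every token attending to every other token, the number of pairwise relationships is

```latex
\text{pairs}(n) = n^2, \qquad
\frac{\text{pairs}(2n)}{\text{pairs}(n)} = \frac{(2n)^2}{n^2} = 4
```

so doubling the context length quadruples the attention work: growing from 100,000 to 200,000 tokens takes the pairwise count from $10^{10}$ to $4 \times 10^{10}$, while the model's attention capacity stays fixed.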

With agentic applications, it’s not possible to control and provide all the context that each agent needs. The promise of agentic applications is bigger: to solve larger problems that are more loosely defined. Capturing the context required for larger agentic applications makes the architectural constraints of LLM context even more pronounced. Before, we wrote paragraphs of background into LLM prompts as context; now agents need knowledge of data systems, outputs of previous agent calls, and more. LLM context windows haven’t grown at the same rate as our expectations of the problems agents can solve.

The second challenge comes from a technology-industry-wide shortage of memory chips. Cloud providers are buying out memory chip production years in advance in response to rising AI demand. This is shifting memory chip production away from general-purpose DRAM, used by servers, and toward high-bandwidth memory (HBM), specialized chips packaged with GPUs. This will drive up the cost of servers, databases, and infrastructure in general for data platform teams.

IEEE Spectrum reports a 90% increase in memory costs over the past year, with a further 70% increase expected this year. With AI demand projected to keep rising and significant supply expansion only arriving in a couple of years when new fab plants come online, memory cost for cloud infrastructure will be a significant line item for technology enterprises for years to come.

A Data Platform Team Problem

These architectural and cost constraints are going to push context engineering challenges onto data platform teams. Enterprises have to begin capitalizing on the promise of agentic AI by augmenting the essential functions of their business, and these systems will need the operational data of the business for context.

The significant increases in memory costs will force cost-conscious decisions about how data is stored and transformed, broadly favoring centralized data platform teams and making larger enterprise patterns like the data mesh more attractive.

Core Tenets of a Context System

As Data Platform teams solve context engineering and build out production-grade systems to deliver context to AI applications, four core tenets will emerge to maximize the performance of agentic systems and solve the architectural and cost constraints of context.

  1. Context systems will transform existing business data, distilling it into core semantic definitions that fit the limited context windows of agents.
  2. Context systems will have to be cost effective, re-using existing data products and leveraging incremental computation where possible.
  3. Context systems have to be low-latency, serving data in under a second to support agent-scale applications.
  4. Context systems have to serve correct data at all times; otherwise misinformation propagates, and the resulting bugs are hard to trace across large-scale multi-agent applications.

How Materialize powers Enterprise Context Systems

Materialize powers the context engineering systems of the future. Materialize connects directly to your data sources, like OLTP databases, Kafka, and more, to pull and transform the operational data that your agentic applications need as context. Built on Timely and Differential Dataflow technologies, Materialize uses incremental computation to build a live data layer for apps and agents.

Enterprises use Materialize to build an operational data mesh. Materialize helps data platform teams create core semantic objects of the business which are up-to-date and represent the live state of the business. Applications built on top of the Materialize data mesh have access to these shared, re-usable, and live data objects.

The Materialize operational data mesh addresses the core tenets outlined above for the context systems of the future that will serve agentic applications.

  1. Materialize enables teams to transform operational data into the distilled and essential context agentic applications need, using familiar SQL.
  2. Materialize is cost effective. Materialize uses incremental computation to keep the core semantic objects of the business up-to-date to the second. These data products can then be shared across all applications needing this data, promoting re-use and cost savings.
  3. Materialize data products are created as live materialized views that serve data in milliseconds, meeting the sub-second performance requirements of real-time agentic systems at scale.
  4. Materialize provides strictly serializable consistency guarantees, ensuring agents always receive accurate, up-to-date context. Materialize respects the upstream transaction boundaries of OLTP source databases like Postgres and MySQL, so your agents never read inconsistent data.
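As a sketch of what this looks like in practice, the following defines a live context object in Materialize using standard SQL. The `customers` and `orders` tables and the view name are hypothetical, invented for illustration; the point is that a plain `CREATE MATERIALIZED VIEW` yields an incrementally maintained, sub-second-readable data product.

```sql
-- Hypothetical example: a live "customer context" object, kept
-- incrementally up to date from upstream operational tables.
CREATE MATERIALIZED VIEW customer_context AS
SELECT
    c.customer_id,
    c.name,
    count(o.order_id) AS open_orders,
    max(o.updated_at) AS last_activity
FROM customers c
LEFT JOIN orders o
  ON o.customer_id = c.customer_id
 AND o.status = 'open'
GROUP BY c.customer_id, c.name;

-- Agents read the distilled context with a point lookup,
-- rather than re-running the join and aggregation each time:
SELECT * FROM customer_context WHERE customer_id = 42;
```

Because the view is maintained incrementally, each new order updates only the affected rows, rather than recomputing the aggregation from scratch on every read.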

Production Context Engineering Case Study

Day AI, an AI-native CRM startup, uses Materialize today as the live context layer serving CRM data to their application and to the agentic workflows their customers use. As agents collect data to record in the CRM, Materialize transforms the raw data into clean, correct properties of CRM objects. The transformed data keeps their search index up to date, which their agents query for context.

The live context layer addresses the two largest problems with context engineering today. The up-to-date search index lets agents query as needed for correct, fresh data, doing more with the limited context window LLMs have. And the context layer is cost effective: maintaining fresh, correct data is much cheaper in Materialize than in the source databases, where costly transformations would have to run, recomputing results frequently.

Materialize enabled a small team to build what would traditionally require dozens of engineers. As Day AI's Founding Engineer Erik Munson put it: "AI has put massive amounts of raw truth in play that we couldn't work with before. Materialize gives us a flexible platform for turning that into live context, in a way that matches how an agent would want to read it."

Let’s Get Started Together

Read more about the Day AI case study here

Sid

Sid Sawhney

Field Engineer, Materialize

Sid has been in the database and cloud infrastructure space for the past six years. He was previously at StarTree and Amazon Web Services. Sid holds a B.S.E. in Computer Science from the University of Michigan.