Neo Financial Cuts Costs by 80% with a Materialize-powered Online Feature Store
"Our fraud losses have substantially decreased, and the infrastructure spend for this system has gone down by about 80 percent."

At a Glance
Neo Financial, a digital-first bank, needed a low-latency, low-ops way to serve real-time features for decisioning workflows—starting with fraud. By streaming MongoDB data into Materialize and defining aggregates in SQL, Neo stood up an online feature store that returns sub-second lookups, costs 80% less than the previous stack, and allows their data team to roll out new real-time models in hours instead of days.
Challenge
Build an online feature store that could meet Mastercard’s 7-second authorization window (target <1 s), avoid heavy DevOps burden, and scale beyond fraud to any real-time model—without waking engineers for maintenance or inflating infrastructure spend.
Results
- 80% lower fraud-stack cost vs. prior system
- <1 s P99 feature latency, keeping POS approvals instant
- 20x+ faster feature delivery via SQL + dbt workflow
How the popular digital bank laid the groundwork for low-latency decisioning.
Overview: A Unified Feature Layer for Every Decision
Neo Financial re-architected its data stack around an online feature store—a continuously updated set of feature tables serving any application that requires millisecond-fresh context.
A feature store is a specialized data system that transforms raw operational data into model-ready “features”—such as a user’s spend-in-last-5-minutes or signup-country—and serves them via a low-latency API at inference time. By fetching the freshest feature values in real time, models can combine them with precomputed weights to produce accurate predictions.
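As an illustration of what such a feature can look like in SQL (this is a hypothetical sketch, not Neo's actual schema — the `transactions` table and its columns are invented for the example), a spend-in-last-5-minutes feature could be expressed as an incrementally maintained view over a transaction stream:

```sql
-- Hypothetical feature definition; table and column names are illustrative.
CREATE MATERIALIZED VIEW spend_last_5_min AS
SELECT
    user_id,
    SUM(amount) AS spend_last_5_min
FROM transactions
-- Temporal filter: keep only transactions from the last 5 minutes,
-- so the aggregate continuously "forgets" older events.
WHERE mz_now() <= event_ts + INTERVAL '5 minutes'
GROUP BY user_id;
```

Running this aggregation against the operational database on every authorization request is exactly the load problem described above; maintaining it incrementally in a separate system sidesteps it.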
The main challenge with feature stores is that the freshest data lives in the operational database, but the queries to compute these aggregates can be taxing on production systems.
Neo’s first workload on Materialize powers its fraud engine, but the vision is much broader: credit adjudication, personalized offers, and any future ML model that must act on operational data in real time.
Initial Roadblocks: Fast Aggregations Without Heavy Ops
Neo’s team established clear non-negotiables for the online store:
- Sub-second feature lookups: Their fraud detection model must return a decision within their 7-second authorization window. However, just staying within that budget wasn’t good enough to meet modern customer expectations; Neo targets sub-1-second response times to keep point-of-sale interactions seamless.
- Developer velocity: New or modified features should go live in hours—not days of bespoke code and infrastructure work.
Before Materialize, all available options had serious drawbacks:
- Do-it-yourself in MongoDB: Flexible, but high-maintenance. Difficult to guarantee low latency under load.
- ClickHouse or Flink: Powerful, but carried a significant DevOps burden that Neo’s team wanted to avoid.
- Warehouse-only (Databricks/Snowflake): Ideal for batch ML, but couldn’t meet the sub-second SLA.
Why Materialize
Materialize delivered on Neo’s priorities:
- Incremental view maintenance - Materialize proactively and incrementally maintains views that represent features in real time, so when requests come in, the up-to-date answer is returned in milliseconds.
- Familiar SQL interface for developer productivity - Teams define complex aggregations using best practices for software development via dbt.
- Fully managed service - No cluster babysitting—engineers can focus on building product features.
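To make these points concrete, here is a hedged sketch (names are hypothetical) of how a feature view like the one above might be served: an index keeps the view's results in memory, so point lookups at inference time return in milliseconds rather than recomputing the aggregate.

```sql
-- Hypothetical: index the feature view so lookups by user_id are served
-- directly from maintained results.
CREATE INDEX spend_last_5_min_idx ON spend_last_5_min (user_id);

-- At inference time, the fraud engine fetches the freshest value:
SELECT spend_last_5_min
FROM spend_last_5_min
WHERE user_id = 'user-123';
```

Because the view is ordinary SQL, it can also live as a dbt model, which is how a team gets version control, testing, and review on feature definitions.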
Architecture Evolution
Neo uses a lambda architecture, with a batch layer for offline feature work and a speed layer for online inference. They transitioned from an inflexible, vendor-managed system to one powered by Materialize, where they can quickly create and modify features using only SQL. The new stack is simpler, cheaper, and fast enough to power all decisioning workloads to come. The results:
- The decision engine fetches features that reflect real-world customer transactions within ~1 s at P99.
"It’s not unreasonable to make two purchases at Amazon in a minute. We need to make sure that we can do that aggregation as quickly as possible."
- Engineers ship new real-time features more than 20x faster—hours instead of days—by editing SQL/dbt models rather than redeploying Spark jobs or bespoke TypeScript services.
- 80% cost reduction across the online feature store stack
"Our fraud losses have substantially decreased and the infrastructure spend for this system has gone down by about 80 percent."
Neo is now extending the same pattern to other parts of its architecture—such as consolidating ad hoc transformation microservices into incrementally maintained views in Materialize.
The Road Ahead
With the online feature store live and fraud use cases in production, Neo is expanding into new workloads including credit decisioning & underwriting and personalized engagement.
"For any real-time models the plan is to go through Materialize as our online feature store."
Because each feature is a SQL-defined data product that composes into other objects, value compounds as higher-level objects are built, and the marginal cost of launching a new model approaches zero as more use cases draw on shared aggregates.
"Materialize is one of the most pleasant vendors that we work with. Fast, thorough support responses make a huge difference."