**How the popular digital bank laid the groundwork for low-latency decisioning.**
Overview: A Unified Feature Layer for Every Decision
Neo Financial re-architected its data stack around an online feature store—a continuously updated set of feature tables serving any application that requires millisecond-fresh context.
A feature store is a specialized data system that transforms raw operational data into model-ready “features”—such as a user’s spend-in-last-5-minutes or signup-country—and serves them via a low-latency API at inference time. By fetching the freshest feature values in real time, models can combine them with precomputed weights to produce accurate predictions.
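To make this concrete, here is a minimal sketch of what such a feature could look like as an indexed view in Materialize. The `transactions` relation, its columns, and the 5-minute window are illustrative, not Neo's actual schema; the sketch assumes the relation has already been ingested into Materialize (one way to do that is sketched below).

```sql
-- Hypothetical feature: a user's spend in the last 5 minutes,
-- kept up to date by Materialize as new transactions arrive.
-- All table and column names here are illustrative.
CREATE VIEW spend_last_5m AS
SELECT user_id, sum(amount) AS spend_5m
FROM transactions
WHERE mz_now() <= created_at + INTERVAL '5 minutes'
GROUP BY user_id;

-- Indexing the view keeps results in memory for millisecond lookups.
CREATE INDEX spend_last_5m_idx ON spend_last_5m (user_id);

-- At inference time, the model fetches the freshest value:
SELECT spend_5m FROM spend_last_5m WHERE user_id = 'u_123';
```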
The main challenge with feature stores is that the freshest data lives in the operational database, but the queries to compute these aggregates can be taxing on production systems.
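The usual way out of this tension is to move the computation off the primary: stream the operational database's changes into the feature store and run the aggregates there. A hedged sketch, assuming change data reaches Materialize through Kafka (for example via Debezium); the connection objects, topic name, and format are invented for illustration:

```sql
-- Ingest the operational change stream so aggregates run in
-- Materialize instead of against the production database.
-- kafka_conn and csr_conn are assumed to exist already.
CREATE SOURCE transactions
  FROM KAFKA CONNECTION kafka_conn (TOPIC 'neo.transactions')
  FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION csr_conn
  ENVELOPE DEBEZIUM;
```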
Neo’s first workload on Materialize powers its fraud engine, but the vision is much broader: credit adjudication, personalized offers, and any future ML model that must act on operational data in real time.
Initial Roadblocks: Fast Aggregations Without Heavy Ops
Neo’s team established clear non-negotiables for the online store:
- Sub-second feature lookups: The fraud detection model must return a decision within Neo’s 7-second authorization window. But merely staying within that budget isn’t enough to meet modern customer expectations; Neo targets sub-1-second response times to keep point-of-sale interactions seamless.
- Developer velocity: New or modified features should go live in hours—not days of bespoke code and infrastructure work.
Before Materialize, all available options had serious drawbacks:
- Do-it-yourself in MongoDB: Flexible, but high-maintenance. Difficult to guarantee low latency under load.
- ClickHouse or Flink: Powerful, but carried a significant DevOps burden that Neo’s team wanted to avoid.
- Warehouse-only (Databricks/Snowflake): Ideal for batch ML, but couldn’t meet the sub-second SLA.
Why Materialize
Materialize delivered on Neo’s priorities:
- Incremental view maintenance: Materialize proactively and incrementally maintains views that represent features in real time, so when requests come in, the up-to-date answer is returned in milliseconds.
- Familiar SQL interface for developer productivity: Teams define complex aggregations using software development best practices via dbt (see the sketch after this list).
- Fully managed service: No cluster babysitting, so engineers can focus on building product features.
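As one illustration of that dbt workflow, a feature can live in version control as an ordinary dbt model. This sketch assumes the dbt-materialize adapter and a hypothetical upstream `stg_authorizations` model; none of it is Neo's actual project:

```sql
-- models/features/declines_last_1h.sql (hypothetical dbt model)
-- Compiles to a materialized view that Materialize maintains
-- incrementally as new authorizations stream in.
{{ config(materialized='materialized_view') }}

SELECT
    user_id,
    count(*) AS declines_1h
FROM {{ ref('stg_authorizations') }}
WHERE status = 'declined'
  AND mz_now() <= authorized_at + INTERVAL '1 hour'
GROUP BY user_id
```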
Architecture Evolution
Neo uses a lambda architecture for offline feature work (batch layer) and online inference (speed layer). They transitioned from an inflexible, vendor-managed system to one powered by Materialize, where they can create and modify features quickly using nothing but SQL. The new stack is simpler, cheaper, and fast enough to power the decisioning workloads ahead. The results:
- The decision engine fetches features that reflect real-world customer transactions within ~1 second at P99.
- Engineers deliver new real-time features more than 20x faster, in hours instead of days, by editing SQL/dbt pipelines rather than redeploying Spark jobs or bespoke TypeScript services.
- An 80% cost reduction across the online feature store stack.
Neo is now extending the same pattern to other parts of its architecture—such as consolidating ad hoc transformation microservices into incrementally maintained views in Materialize.
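A hedged sketch of that consolidation: denormalization logic that once ran as a standalone transformation service becomes a joined view that Materialize keeps current. All table names are illustrative:

```sql
-- Enrichment formerly handled by a microservice, expressed as a
-- view over the source tables; Materialize maintains the join
-- incrementally.
CREATE VIEW enriched_transactions AS
SELECT t.id, t.amount, t.created_at, a.user_id, u.signup_country
FROM transactions t
JOIN accounts a ON a.id = t.account_id
JOIN users u ON u.id = a.user_id;

-- Index for fast lookups by downstream consumers.
CREATE INDEX enriched_transactions_idx ON enriched_transactions (id);
```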
The Road Ahead
With the online feature store live and fraud use cases in production, Neo is expanding into new workloads, including credit decisioning and underwriting as well as personalized engagement.
Because each feature is a SQL-defined data product that composes into higher-level objects, the value compounds: the marginal cost of launching a new model approaches zero as more use cases build on shared aggregates.
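For instance, two shared feature views can compose into a higher-level feature with no new infrastructure. This sketch reuses the hypothetical views from the earlier examples:

```sql
-- Hypothetical composite feature built from shared aggregates;
-- Materialize maintains the join incrementally, so launching it
-- adds little marginal cost.
CREATE VIEW velocity_risk AS
SELECT s.user_id,
       s.spend_5m,
       coalesce(d.declines_1h, 0) AS declines_1h
FROM spend_last_5m s
LEFT JOIN declines_last_1h d USING (user_id);
```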