Beyond the numbers: The 'on-rails' approach: How to build regulated AI that actually gives financial advice

25.11.25
Words by Lee Brooks
Can AI give safe financial advice? Multiply CTO Mike Curtis explains our "on-rails" agentic architecture, which combines LLM empathy with deterministic compliance.

In our last post, we explored the "why" of human-centric AI. Now, let's get to the "how."

For a technical audience, this is the real challenge. It's easy to build a demo that looks impressive. It's incredibly hard to build a system that can give actual financial advice without a compliance officer having a panic attack.

Our co-founder and CTO, Mike Curtis, demonstrated this perfectly. He asked a generic LLM a simple question: "Can you recommend me the best investment account provider?"

The response was a compliance disaster:

  • It recommended a specific company (a regulated act of advice).
  • It made a factual claim of "lowest fees," which is almost certainly wrong and depends on individual circumstances.
  • It failed the first test of any good adviser: it never asked, "Should this person even be investing right now?" or "Do you have any credit card debt?"

This is the core problem: LLMs are probabilistic, non-deterministic, and "hallucinate." The financial industry demands auditable, deterministic, and provably correct advice.

You can't just plug one into the other.

The Multiply solution: Agents, not oracles

At Multiply, we've spent 10 years building a robust, deterministic advice engine. Our approach isn't to replace this with an LLM. It's to supercharge it with an LLM.

We don't let the LLM be the "adviser." We use it as a world-class, empathetic "front door" that guides a user to our deterministic core.

Mike calls this the "on-rails" or agentic approach. Here's how it works:

  1. LLM as interpreter (intent classification): A user doesn't say, "I'd like to initiate the 'Inheritance' advice journey." They say, "My nan passed away, and I have some questions about inheritance." The LLM is brilliant at understanding this natural language and mapping it to a specific, pre-defined, and compliant advice journey.
  2. LLM as fact-finder (gap analysis): Once the "Invest Money" journey is triggered, our system knows it needs two facts: an amount_to_invest and an attitude_to_risk. The system checks the user's profile and sees it's missing the amount. The LLM's next job is simply to ask, "How much are you looking to invest?"
  3. LLM as data entry (fact extraction): The user replies, "I don't know, about 10k I guess." The LLM's power is to parse this unstructured, human-like text and return a structured JSON object: {"amount": 10000}.
  4. Deterministic engine as adviser (the "brain"): This is the most important step. The LLM's job is done for a moment. The structured data ({"amount": 10000, "risk_profile": "medium"}) is sent to Multiply's core advice engine: the fully vetted, deterministic system we've spent 10 years building, which runs the calculations and generates the actual, auditable recommendation and suitability letter.
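The four steps above can be sketched as a short Python loop. Everything here is illustrative, not Multiply's actual API: the `JOURNEYS` table, the `classify_intent` and `extract_fact` helpers (which stand in for LLM calls), and the `advice_engine` stub are all hypothetical names for this sketch.

```python
import json
import re

# 1. Pre-defined, compliant advice journeys and the facts each requires.
JOURNEYS = {
    "invest_money": ["amount_to_invest", "attitude_to_risk"],
    "inheritance": ["estate_value", "relationship_to_deceased"],
}

def classify_intent(message: str) -> str:
    """Stand-in for the LLM intent classifier (step 1)."""
    if "invest" in message.lower():
        return "invest_money"
    if "inheritance" in message.lower():
        return "inheritance"
    raise ValueError("no matching journey")

def missing_facts(journey: str, profile: dict) -> list:
    """Gap analysis (step 2): which required facts are absent?"""
    return [f for f in JOURNEYS[journey] if f not in profile]

def extract_fact(fact: str, reply: str) -> dict:
    """Stand-in for LLM fact extraction (step 3): unstructured
    human text in, structured data out."""
    match = re.search(r"(\d+)\s*k", reply.lower())
    value = int(match.group(1)) * 1000 if match else None
    return {fact: value}

def advice_engine(profile: dict) -> dict:
    """Step 4: the deterministic engine. Only structured data gets
    here; the LLM never writes the recommendation itself."""
    return {"recommendation": "stocks_and_shares_isa",
            "basis": dict(profile)}

# Walk through the example from the post.
journey = classify_intent("I'd like to invest some savings")
profile = {"attitude_to_risk": "medium"}
gaps = missing_facts(journey, profile)   # the amount is missing
profile.update(extract_fact(gaps[0], "I don't know, about 10k I guess"))
advice = advice_engine(profile)
print(json.dumps(advice))
```

Note the hand-off: the LLM stand-ins only classify and extract; the recommendation is produced exclusively by the deterministic function at the end.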

The LLM never gives the advice. It just assembles the ingredients. Our advice engine does the cooking.

Best of both worlds

The final step? The structured, compliant advice is passed back to the LLM, whose final job is to present it in a natural, easy-to-understand way.
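One way to picture this hand-back step (with `render_prompt` as a purely hypothetical helper): the engine's structured output is the single source of truth, and the LLM is instructed only to rephrase it, never to add facts of its own.

```python
def render_prompt(advice: dict) -> str:
    """Build the instruction handed to the LLM for the final step:
    present this advice naturally, without changing its substance."""
    return (
        "Rewrite the following advice in plain, friendly English. "
        "Do not add, remove, or change any figures or recommendations.\n"
        f"ADVICE: {advice}"
    )

prompt = render_prompt({"recommendation": "stocks_and_shares_isa",
                        "amount": 10000})
print(prompt)
```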

This hybrid, agentic system gives you the best of both worlds:

  • The human-like experience of a natural, conversational LLM.
  • The safety and control of a deterministic, auditable advice engine.

For organisations with the highest compliance needs, we can even flip a switch. As Mike showed, we can turn on a "fully deterministic" toggle that forces the system to only use pre-approved, templated responses, removing the LLM from the user-facing text generation entirely.
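A sketch of what that toggle might look like. The `TEMPLATES` table, `present` function, and `llm_paraphrase` stub are all illustrative assumptions, not Multiply's implementation; the point is that with the switch on, no LLM-generated text ever reaches the user.

```python
# Pre-approved, compliance-signed-off wording, keyed by recommendation.
TEMPLATES = {
    "stocks_and_shares_isa": (
        "Based on the details you gave us, our recommendation is a "
        "stocks & shares ISA for your £{amount:,} investment."
    ),
}

def llm_paraphrase(advice: dict) -> str:
    """Stand-in for the LLM rendering call used in the default mode."""
    return f"We'd suggest a {advice['recommendation']} for your £{advice['amount']}."

def present(advice: dict, fully_deterministic: bool) -> str:
    if fully_deterministic:
        # Compliance mode: templated, pre-approved wording only.
        return TEMPLATES[advice["recommendation"]].format(**advice)
    # Default mode: an LLM paraphrases the same structured advice.
    # Either way, the advice content itself is unchanged.
    return llm_paraphrase(advice)

text = present({"recommendation": "stocks_and_shares_isa", "amount": 10000},
               fully_deterministic=True)
print(text)
```

The toggle only changes the surface text; both branches render the same structured advice from the deterministic engine.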

This is how you build AI for a regulated world. It's not about one magic model; it's about smart, controllable, agentic systems design.

In our final post, we'll share insights from our panel with Google, Monzo, and Lloyds on what this all means for the real world: from surprising customer reactions to the "trust iceberg" every developer must navigate.

Ready to deliver scalable advice with less admin?

Multiply is helping transform the way top firms give financial advice. To join them, visit our contact page to get in touch.