A New Era in Compliance – And a New Role for Regulators

Written by Shruti Batra and Amias Gerety

Shruti Batra is a Principal at QED Investors, where she focuses on early-stage fintech. Prior to venture, she helped build Robinhood’s strategic finance team and worked in private equity and investment banking.

Amias Moore Gerety is a Partner at QED Investors, a leading global fintech venture firm. He previously served as the Acting Assistant Secretary of the Treasury for Financial Institutions and was a key architect of the post-crisis financial reforms. 


AI-based automation has opened up a new vista for startups, which are now moving beyond software spend to target entire categories of operational labor. In financial services, the prize is the nearly $60 billion[1] spent annually on financial crimes compliance. But what’s often overlooked is the question regulators care most about: is the system actually identifying and mitigating risk?

Tooling Isn’t Trust

Since the ChatGPT moment in late 2022, we’ve been spending time deep in the world of AI compliance, meeting founders, reviewing products, and speaking with regulators. While the bar for AI is high, even well-staffed human teams often miss risks – not from a lack of data, but from a lack of timely insight – which makes AI well-suited to surface complex issues earlier and more consistently.

AI is already changing how compliance teams operate, but many startups are still focused on the surface layer of automation. Regulators have made it clear: they’re open to automation, even in high-stakes domains like transaction monitoring and customer onboarding. But what they’re looking for isn’t faster paperwork – it’s sound judgment. Today’s AI tools are excellent at generating artifacts: SARs, audit logs, policy mappings, and the like. What they can’t always do is defend why a decision was made, or prove that risk was considered the way a regulator would expect a human to.[2] And that’s the bar.

The next generation of winners in AI compliance won’t just triage alerts more efficiently. They will build systems that credibly replicate decision-making, not just automate documentation. Some early signs of this shift are emerging – explainable architectures, transparent escalation logic, and policy-linked outputs. But few have met the full bar regulators expect. Bridging the last mile between assistive tooling and trusted infrastructure is where the next wave of innovation must go.

Regulators are Open to AI, but with Guardrails

Regulators in the U.S. and abroad have signaled openness to AI in areas like customer onboarding, transaction monitoring, and KYC, as long as key expectations are met:

  • The system must be explainable

  • The decision path must be auditable

  • A human must retain accountability, especially in high-risk areas

The EU AI Act, for example, categorizes AML transaction monitoring as “high risk,” requiring transparency and oversight. U.S. agencies have similarly indicated that AI is acceptable if it augments, not replaces, human decision-making.

Some AI compliance platforms are beginning to prioritize traceability and decision rationale, but most still fall short of the rigor regulators expect in high-risk domains. For one, reasoning is often not reproducible. Many systems built on large language models (LLMs) generate different outputs for the same inputs, even with tuning, making consistent decision-making and auditability difficult to guarantee.
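One way to see what reproducibility would require in practice is to pin and record every input that shapes the output. The sketch below is illustrative only – all field names and values are hypothetical, and it assumes a model that supports deterministic decoding settings such as a fixed seed and zero temperature – but it shows the basic idea: fingerprint the full decision context so a re-run of the same case can be checked against the original.

```python
# Minimal sketch: make an AI-assisted decision reproducible by pinning every input
# that influences the output (data, model version, prompt version, decoding settings),
# hashing the bundle, and storing it with the result. Names are illustrative, not a
# reference to any specific vendor's system.
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class DecisionContext:
    case_id: str
    input_payload: dict      # the transaction / customer data under review
    model_version: str       # a pinned model identifier
    prompt_version: str      # e.g. a version tag for the prompt template
    temperature: float       # 0.0 for deterministic decoding where supported
    seed: int

    def fingerprint(self) -> str:
        """Stable hash of everything that should determine the output."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()


def record_decision(ctx: DecisionContext, outcome: str) -> dict:
    """Store the fingerprint alongside the outcome so a later re-run can be verified."""
    return {"case_id": ctx.case_id, "fingerprint": ctx.fingerprint(), "outcome": outcome}


ctx = DecisionContext(
    case_id="case-001",
    input_payload={"amount": 9800, "country": "XY", "structuring_pattern": True},
    model_version="model-2024-06",
    prompt_version="prompt-v3",
    temperature=0.0,
    seed=42,
)
print(record_decision(ctx, outcome="escalate"))
```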

Decision logic also remains opaque. What’s often labeled ‘explainability’ tends to be a surface-level rationale (‘we flagged this because X occurred’) rather than a structured, traceable path grounded in documented internal policies. Regulators aren’t looking for probabilistic justifications – they expect a clear link to governance frameworks.
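To make the contrast concrete, here is a minimal sketch of what a policy-linked rationale could look like as structured data rather than free text. The policy identifiers, sections, and evidence strings are invented for illustration; the point is only that each conclusion cites the internal control it rests on.

```python
# Sketch: a decision record that carries explicit citations to internal policy
# sections and the evidence satisfying each one, instead of a one-line rationale.
# Field names and policy IDs are hypothetical.
from dataclasses import dataclass, field


@dataclass
class PolicyCitation:
    policy_id: str        # e.g. an identifier in the internal policy manual
    section: str
    evidence: str         # the observed fact that satisfies the policy condition


@dataclass
class DecisionRationale:
    case_id: str
    outcome: str
    citations: list[PolicyCitation] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Case {self.case_id}: {self.outcome}"]
        for c in self.citations:
            lines.append(f"  - {c.policy_id} §{c.section}: {c.evidence}")
        return "\n".join(lines)


rationale = DecisionRationale(
    case_id="case-001",
    outcome="file SAR",
    citations=[
        PolicyCitation("AML-TM-004", "2.1", "three cash deposits just under $10,000 within 48 hours"),
        PolicyCitation("KYC-EDD-011", "4.3", "customer profile inconsistent with declared income"),
    ],
)
print(rationale.render())
```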

Model governance itself is still shallow. Versioning, policy-change tracking, and lineage documentation are rare, which makes it hard for compliance teams to prove how models evolve or whether they continue to align with current regulatory expectations. The fact that “prompts” are actually core to the value of these systems is a blessing and a curse. Prompts are easier to review than traditional models, but they’re also easier to change – and changes can bypass the traditional change-management and model-validation teams that have built up the capacity to document every iteration.
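A hedged sketch of what bringing prompts under change management might look like: each revision is hashed, attributed, timestamped, and tied to a sign-off, so lineage can be shown in an exam. The registry and field names below are illustrative, not a description of any existing platform.

```python
# Sketch: a simple prompt registry that records who changed what, when, and under
# whose approval, so prompt evolution is documented the way model changes would be.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PromptRevision:
    prompt_name: str
    text: str
    author: str
    approved_by: str | None     # model-risk / compliance sign-off, if any
    created_at: datetime

    @property
    def content_hash(self) -> str:
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]


class PromptRegistry:
    def __init__(self) -> None:
        self._history: dict[str, list[PromptRevision]] = {}

    def commit(self, rev: PromptRevision) -> None:
        self._history.setdefault(rev.prompt_name, []).append(rev)

    def lineage(self, prompt_name: str) -> list[str]:
        return [
            f"{r.created_at.isoformat()} {r.content_hash} by {r.author} "
            f"(approved: {r.approved_by or 'pending'})"
            for r in self._history.get(prompt_name, [])
        ]


registry = PromptRegistry()
registry.commit(PromptRevision(
    prompt_name="sar-narrative",
    text="Summarize the alert and cite the policy sections that apply...",
    author="analyst-a",
    approved_by="model-risk-officer",
    created_at=datetime.now(timezone.utc),
))
print("\n".join(registry.lineage("sar-narrative")))
```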

Moreover, AI-assisted judgments risk automation-induced complacency: the human relies on the AI, while the AI-system designer relies on the human to stay vigilant. In most cases, systems flag anomalies or suggest actions, but human reviewers are still responsible for interpreting risk, applying precedent, and escalating decisions. AI may be assisting judgment, but it isn’t yet encoding it. It’s like watching someone drive a Tesla on the highway: the car assumes the driver is paying attention, but the driver assistance is capable enough that the human feels comfortable staring at their phone.

These aren’t technical oversights – they reflect deeper gaps in system design. Regulators don’t just want to see the outcome of a decision. They want to understand how it was made, whether it was consistent, and how it would stand up to scrutiny. Until AI systems can answer those questions by design, they’ll remain supplementary tools, not trusted infrastructure.

What Would Regulator-Ready AI Systems Look Like?

If we want AI compliance tools to be trusted by regulators and ultimately used by them, we need to build for that trust from the ground up. That means designing systems that don’t just produce outputs, but expose their inner workings. A regulator-ready platform would offer transparent, auditable decision trees, role-based accountability trails, and policy-based logic that maps directly to internal controls – not just black-box inference from an LLM. That includes codifying regulatory frameworks into structured, machine-readable rules, so systems aren’t just learning patterns but applying policies with traceable logic. The result should be consistent outcomes under identical inputs, with structured human override and escalation pathways when needed.

This is less about UI polish and more about foundational architecture. The goal is to replicate how compliance officers think, not just how they document, and to give regulators visibility into that process.
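As a toy illustration of codifying policies into structured, machine-readable rules, the sketch below treats each rule as data – a threshold, the internal control it implements, and the action it triggers – and evaluates it deterministically, emitting a full trace rather than just an outcome. Rule names, thresholds, and policy references are made up for the example.

```python
# Sketch: policy-as-data with deterministic evaluation and a complete decision trace
# that a human reviewer can inspect or override. All identifiers are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    rule_id: str
    policy_ref: str              # link back to the internal control it implements
    description: str
    threshold: float
    action: str                  # e.g. "escalate_to_human", "auto_clear"


@dataclass
class Trace:
    rule_id: str
    policy_ref: str
    fired: bool
    observed_value: float
    action_taken: str


def evaluate(rules: list[Rule], observed: dict[str, float]) -> list[Trace]:
    """Deterministically evaluate every rule and return the full trace, not just the outcome."""
    traces = []
    for r in rules:
        value = observed.get(r.rule_id, 0.0)
        fired = value >= r.threshold
        traces.append(Trace(r.rule_id, r.policy_ref, fired, value,
                            r.action if fired else "no_action"))
    return traces


rules = [
    Rule("cash_velocity", "AML-TM-004 §2.1", "Rapid sub-threshold cash deposits", 3.0, "escalate_to_human"),
    Rule("geo_risk", "AML-CDD-007 §1.4", "Exposure to high-risk jurisdictions", 0.8, "escalate_to_human"),
]
for t in evaluate(rules, {"cash_velocity": 4.0, "geo_risk": 0.2}):
    print(t)
```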

What Should Regulators Do?

Jo Ann Barefoot put it best: “Regulators need more than new tools. They need a fundamental redesign of how they work, to keep pace with technology change in the industry they oversee.”[3] Regulators should adopt digital, AI-driven tools to supervise in real time and at scale. But today, most supervisory frameworks are still built for a paper-based past. To close the gap between innovation and oversight, regulators can’t simply react to AI – they need to shape how it’s built and deployed from the start.

That means going beyond issuing principles and enforcement memos. It means engaging directly with the technology – testing it, using it, and defining what trustworthy AI looks like in practice.

First, regulators should launch AI model validation pilots: secure environments where compliance vendors can test their systems on anonymized supervisory data. These pilots shouldn’t just evaluate accuracy; they should also test explainability, reproducibility, and alignment with policy intent.

Second, regulators should set standards for and accept AI-generated materials – SARs, onboarding decisions, audit trails – in exams. This will need to be iterative, but the benefits to both sides will only be achieved if we target full automation rather than insisting a human is in the loop on every decision. Treating these materials as exam-grade submissions will give vendors a clear bar to meet and signal that automation is permissible, not prohibited, if it’s done right. For example, clarifying that automated results are permitted should be paired with clear disclosures of whether those results were AI-generated, AI-assisted, or prepared by humans. This visibility will enable learning, but only if the data is clear.
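The disclosure piece could be as simple as a provenance label attached to every exam artifact. A minimal sketch, with hypothetical field names:

```python
# Sketch: tag each exam artifact with its provenance (AI-generated, AI-assisted, or
# human-prepared) so supervisors can compare outcomes across the three categories.
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    AI_GENERATED = "ai_generated"
    AI_ASSISTED = "ai_assisted"
    HUMAN_PREPARED = "human_prepared"


@dataclass(frozen=True)
class ExamArtifact:
    artifact_id: str
    artifact_type: str            # e.g. "SAR", "onboarding_decision", "audit_trail"
    provenance: Provenance
    reviewer: str | None = None   # populated when a human signed off


sar = ExamArtifact("sar-2025-0001", "SAR", Provenance.AI_GENERATED)
print(sar.artifact_id, sar.provenance.value)
```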

Third, regulators should adopt these tools internally. Letting examiners pilot regtech platforms for tasks like parsing license filings or monitoring third-party risk isn’t just about productivity – it’s about building firsthand understanding of how these systems work and where they fall short. To be clear, the goal should eventually be that in some cases there isn’t a human in the loop at the business or at the regulator overseeing that business – that is where the true efficiency win happens.

Finally, regulators should publish clear standards for “regulator-trustworthy” AI – defining expectations for governance, transparency, human override, and ongoing validation in high-risk workflows.

As FinRegLab has emphasized, “AI systems can be opaque and their decisions difficult to understand. How can we build transparency into the decisions that AI makes?”[4] Explainability and reproducibility remain critical barriers to supervisory trust. But regulators are in a unique position to lower those barriers – not by lowering the bar, but by helping the industry rise to meet it.

This isn’t just about improving oversight. It’s about co-designing the compliance infrastructure of the future – one that is built for speed, scale, and scrutiny.   

In sum, regulators should:

1. Launch AI model validation pilots

2. Set standards for and accept AI-generated materials

3. Adopt these tools internally

4. Publish clear standards for “regulator-trustworthy” AI

AI Can’t Replace Judgment, but It Can Encode It

Compliance has never just been about documentation – but for a variety of reasons, regulators have been sliding towards the proceduralization of supervision. It has become easier for both sides to assume that poor process equals high risk – and vice versa – with enforcement almost always focused on “procedural violations.”[5]

But in the age of AI, we can move from sampling to universe testing, from dog-and-pony show exams to streaming data and quantitative assessments of risk. The policy intent of compliance is to demonstrate sound judgment, applied consistently and aligned with regulatory intent. AI can’t replace that judgment. But it can encode it.

The real promise of AI in compliance isn’t just speed. It’s structure.

We now have the opportunity to turn policies into code, to design systems that reason as compliance officers do – not just generate outputs, but justify them.

That’s what regulators are asking for: not tools that guess well, but systems that can explain how and why a decision was made.

Getting there will require more than better models. It will require deeper collaboration between builders and regulators – to define what good looks like, to validate it in the field, and to raise the standard together. If we do, we won’t just modernize compliance. We’ll build the kind of infrastructure that earns and sustains trust.

The opinions shared in this article are the authors’ own and do not reflect the views of any organization they are affiliated with.

[2] Of course, emerging research on how often humans make instinctive judgments and then construct reasons for those judgments after the fact – known as post hoc rationalization – should probably give regulators pause about whether a human’s explanation for why a decision was made is accurate and reliable data.

[5] Amias Moore Gerety and Lev Menand, “10: The Rise of RegTech and the Divergence of Compliance and Risk,” in Global Fintech: Financial Innovation in the Connected World, edited by David Shrier and Alexander Pentland, MIT Press, 2022. https://doi.org/10.7551/mitpress/13673.003.0015

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

If an idea matters, you’ll find it here. If you find an idea here, it matters. 

Interested in contributing to Open Banker? Send us an email at [email protected].