• Open Banker
  • Posts
  • Agentic Commerce: Is This Really the Best We Can Do?

Agentic Commerce: Is This Really the Best We Can Do?

Written by Delicia Hand

Delicia Reynolds Hand is the Senior Director of Digital Marketplace at Consumer Reports. With 20+ years at the intersection of technology, policy, and social impact, she leads initiatives evaluating fintech apps and AI's impact on financial services while developing frameworks for responsible implementation.

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

“Agentic AI” and “agentic commerce” dominated 2024 and 2025, the defining buzzwords, at every fintech and tech forum. What will 2026 be? Now that the hype cycle has peaked, perhaps we can think more clearly about what we’re actually building.

AI is indeed life-altering technology — a genuine inflection point in automation and consumer interaction. The question is: to what ends are we directing this innovation? 

Is shopping really the best we can do?

Because while the industry spent last year racing to develop AI agents that will automatically reorder detergent and find the perfect gift for Mom, financial services faces its most profound set of consumer protection challenges in a generation. We have a narrow window to redirect this trajectory before commercial optimization patterns become permanent infrastructure. 

The Parallel Universe Problem

We’re headed down a path where success means optimized shopping carts. Consumers save three minutes on routine purchases. Commerce friction drops another percentage point. Merchant revenue increases marginally.

At the same time, older adults lose life savings to AI-cloned voices, with global deepfake fraud losses exceeding $200 million in Q1 2025 alone.1 Financial decisions are increasingly driven by AI trained on incomplete or inaccurate data, leading to poor outcomes for creditworthy consumers and missed opportunities for institutions. Consumers lose control over their financial data as AI systems make decisions without meaningful transparency or consent. The gap between financial institutions’ technological capabilities and regulatory oversight grows exponentially wider.

We should be deploying AI to solve these problems.

Not because shopping optimization is inherently wrong, but because the choices we make about AI deployment in the next 12-18 months will determine whether this technology accelerates inequality or becomes the equalizer we desperately need.

First-mover advantage in AI isn't just about market share — it’s about establishing norms. The industry patterns we establish now, the investment priorities we set, and the technical architectures we deploy will be impossible to redirect once they're entrenched. We're choosing our default future.

The Scale of What's at Stake

Recent data highlight the growing scale of fraud harms. The FBI reports that Americans aged 60 and older lost more than $3.4 billion to scams in 2023.2 Industry analyses also indicate that deepfake-enabled fraud attempts increased by roughly 3,000 percent between 2022 and 2023 as generative AI tools became more widely available.3 Globally, the Global Anti-Scam Alliance estimates that scam losses exceeded $1 trillion over a recent 12-month period.4

Financial losses are only part of the cost. As fraudsters increasingly use AI-driven tools while institutions focus on commerce optimization, consumers' faith in digital financial systems — the foundation of fintech growth, and a source of genuine security improvements — faces systematic assault.

At the same time, consumer protection remains stuck in a familiar failure pattern: technological capability advancing faster than oversight. With Agentic AI, optimizing commerce based on flawed data while fraud persists — we risk building a future where consumers struggle rather than thrive. We'd be automating errors at machine speed. Capital doesn't reach creditworthy borrowers. Good customers are turned away. Economic opportunity is misallocated because systems can't accurately assess capability.

The challenge is that regulators only act after consumers are harmed, often at scale. By the time protections arrive, the damage is done. Preventing predictable consumer harm must be built into system design from the outset — not addressed retroactively once damage is widespread.

The Matrix We’re Building

The nightmare scenario has us building a matrix where we are passengers in our own financial lives — and cement every existing dysfunction into permanent infrastructure.

AI agents “optimize” every transaction, but you’ve lost the ability to understand — let alone challenge — that you are being steered in one direction versus another. Credit decisions happen through intermediaries that have encoded every historical bias. The systems are so complex, so black-boxed, that even if you suspect discrimination, you can't prove it. Opting out means being locked out.

We’ll do it in the name of convenience. But step by reasonable-seeming step, we build a system where human judgment, human agency, and human choice becomes vestigial. When AI agents make decisions for millions simultaneously, flaws propagate instantly. By the time we notice, the damage is done and the systems are too entrenched to change.

That’s the nightmare: Not that AI fails catastrophically, but that it succeeds in building a world where we’ve optimized away our own agency and permanently encoded our worst systemic failures into how financial life works.

An Ecology of Agentic Systems

In a healthy ecology, resources flow through multiple pathways, creating balance and resilience. In an “ecology of commerce”, the same principle applies. Complex, interdependent relationships between consumers, merchants, financial institutions, and regulators protect against fragility. An ecology of agentic systems could include:

Protective agents that monitor transactions for fraud patterns and verify recipient authenticity before funds transfer. We have these capabilities at the enterprise level — financial institutions deploy sophisticated fraud detection systems internally. Could agents be developed to deliver these capabilities locally, on device, for consumers?

Accuracy agents that identify when financial decisions are based on incomplete or outdated data, explain why applications were denied or approved, and connect consumers to redress mechanisms when errors occur.

Sovereignty agents that enforce consumer control over data collection and sharing, provide transparency about how data is used in AI decision-making, and actively detect unauthorized access or misuse. 

Loyalty agents that serve consumer interests rather than advertiser interests, disclose conflicts transparently, and submit to meaningful consumer override. This design choice matters. Recent consumer research5 shows widespread skepticism toward agentic commerce: many consumers do not see a need for autonomous shopping agents, prefer access to human assistance, and worry that AI agents would prioritize selling over serving their actual needs. Loyalty agents would embody a true fiduciary obligation in an automated context — precisely the safeguard consumers say they want, but fear current systems will not provide. 

The technology for these tools exists. The question isn’t capability — it's priority. It’s values. It’s whose interests we're serving.

The Governance Challenge

The velocity of AI development has outpaced our traditional policy institutions’ capacity to respond. Congress hasn’t passed standalone consumer financial protection legislation in years and faces deep divisions on AI governance. 

Given this reality and the speed of AI development, how do we build approaches that produce the outcomes consumers need? Who is systematically considering these questions at the pace that matches AI evolution? What meaningful accountability mechanisms can we actually build?

How about the AI themselves? Rather than waiting for legislative perfection, regulators could establish the framework for what protective AI systems must demonstrate, with enforcement through automated testing and public transparency. This isn't industry self-regulation — it’s regulators setting clear expectations that can be verified in real-time.

This isn’t about abandoning regulation. It’s about recognizing that waiting for perfect governance means ceding the field to whoever moves fastest. We need continued advocacy for robust regulation alongside immediate development of protective standards and infrastructure — parallel tracks that can inform and strengthen each other.

What We Must Build Now

Here's what needs to happen in the next 12 months:

Financial institutions must reallocate AI investment — matching every dollar spent on commerce optimization with a dollar spent on consumer protection. Launch protective agent pilots within 90 days, focusing on high-risk populations first. Build collaborative defense networks that share fraud intelligence across institution types and sizes. Give consumers real control over their data and how AI uses it. Build the trust infrastructure that is foundational — not just for agentic commerce, but for consumers’ actual needs.

Technologists must build the protective stack: open-source deepfake detection tools, real-time voice biometric verification APIs, behavioral anomaly detection frameworks, consumer-controllable personal security agents, and data sovereignty infrastructure. Design for elderly users and vulnerable populations first — building accessibility as a core requirement creates more robust systems for everyone.

Advocates must build technical capacity to understand AI architectures and develop capability to test AI systems for, among other things, bias and safety. We can lead on defining what accuracy, transparency, and accountability mean in practice — and develop agents that can assess against these standards. We can deploy AI for protection — using it to detect patterns in consumer complaints, help consumers understand and exercise their rights, and test institutional compliance. The innovation space isn't just for engineers anymore; the frameworks we articulate in — natural language — can directly inform what gets built.

The Choice We Should Make Right Now

When we look back at 2026, will we see the year we optimized shopping while fraud losses tripled, algorithmic bias deepened, and consumers lost control of their financial lives?

Or will we see the year we redirected transformative technology toward protection over profit, accuracy over automation, agency over convenience?

The technology exists to build protective agents — fraud detection on device, accuracy verification, data sovereignty tools. We’re choosing not to prioritize them. 

That choice is happening now, in what gets funded, what gets built first, what gets called “innovation.” Once these patterns are entrenched, once the infrastructure is built, redirecting becomes nearly impossible. We don’t need to abandon agentic commerce, we should build the protective infrastructure alongside it.

We can and must do better, and build both, before we run out of time to change course.

The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.

[1] Resemble AI. 2025. Q1 2025 AI Deepfake Security Report. Global documented financial damages exceeding $200 million in Q1 2025.

[2] Federal Bureau of Investigation (FBI) Internet Crime Complaint Center (2024) 2023 Elder Fraud Report. Available at: https://www.ic3.gov/AnnualReport/Reports/2023_IC3ElderFraudReport.pdf

[3] Resemble AI (2025) Q1 2025 Deepfake Incident Report.

[4]  Global Anti-Scam Alliance and Feedzai (2024) Global State of Scams Report 2024. Available at: https://www.gasa.org/post/global-state-of-scams-report-2024-1-trillion-stolen-in-12-months-gasa-feedzai

[5] HUMAN (2025) Iris Report: Perspectives on Agentic Commerce. Available at: https://www.humansecurity.com/learn/resources/iris-report-perspectives-on-agentic-commerce/

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

If an idea matters, you’ll find it here. If you find an idea here, it matters. 

Interested in contributing to Open Banker? Send us an email at [email protected].