AI for Consumer Bankers: Concern, Hype Machine, or a Golden Opportunity? Yes
Written by Martin Kleinbard

Martin Kleinbard is the founder of Granular Fintech LLC, which advises fintech lenders and vendors on processes and strategies to grow and win in an ever-changing marketplace and regulatory landscape. He previously helped stand up new consumer credit products and teams at American Express and Bread Finance (acquired by Alliance Data Systems in 2020) before leading the CFPB's market research in fintech topics such as BNPL and solar financing.
Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.
It’s hard for even the most laser-focused consumer bankers to avoid the onslaught of AI-related headlines. It’s equally difficult to avoid letting those headlines cause large emotional and strategic swings. One day, it’s doomsday pessimism; the next, it’s blind optimism.
Should prudent bankers, averse to drastic extremes, maintain a bland “everything in moderation” stance? No, that also misses the mark. This is a time when negative, neutral, and positive viewpoints can all be strategically optimal. Rather than just picking a single AI take lane and sticking to it, it’s healthy to compartmentalize these sentiments by the specific use case.
When AI aggregates and manipulates consumer decision-making, it poses a serious threat to banking functions that had previously relied on a more heterogeneous and exogenous set of behaviors.
When its purveyors exaggerate the benefits of preexisting technology, the rational response is healthy skepticism toward the model hype but a double-click into the underlying data.
And in circumstances where it can transform the efficiency and effectiveness of high-cost operational processes, it should be welcomed as a powerful complement to the foundational human expertise.
Let’s take a quick look at each scenario.
Aggregated Consumer Decision-Making: Sound the Underwriting Alarm Bells
A few weeks ago, I published a paper with Fintech Takes and PrismData on the threat that AI poses to the predictiveness of legacy consumer credit scoring models such as FICO and Vantage. TL;DR:
Historically, the proliferation of score-maximizing “hacks” with minimal ties to intrinsic creditworthiness — requesting credit line increases, becoming an authorized user on someone else’s credit card, opening a “credit builder” loan designed specifically to increase one’s credit mix, and the like — was limited by human friction. A consumer who wanted to quickly hack their way from a 680 to a 720 had to take the time and energy to research and execute these methods.
AI chatbots and agents will dramatically reduce that friction by identifying and executing the steps to maximize a consumer’s score on their behalf, subject to their self-stated scoring goals and financial and temporal constraints. The models behind those agents will quickly learn that the vast majority of consumers will opt for the “lowest lift” solutions that yield the greatest possible scoring benefit with the lowest financial commitment. As those solutions become more popular, legacy scoring models will find it harder to separate true creditworthy wheat from manipulated chaff.
This herding to mass-adopted, automated, low-lift score hacks threatens the predictiveness of legacy credit scoring models, which in turn will make underwriting more expensive and loss projections less certain.
We don’t have to wait several more years to see how this may play out. A TikTok explaining how to upload one’s credit report to ChatGPT to populate and send out letters disputing all negative tradelines garnered over two million views in its first three weeks. Don’t want to go it alone? A whole host of influencer-hawked, AI-touting auto-dispute companies promise to do it for you — for a monthly fee.
The good news is that an antidote to less predictive legacy scores already exists. Cash flow underwriting, which already predicts defaults at rates comparable to FICO, will be far more resistant to agent-driven score gamification. Lenders and regulators must invest now in cash flow attributes and models that will add critical resilience to a legacy underwriting ecosystem that is getting wobblier every year.
However, credit score gamification will not be the only scenario in which AI alters consumer behavior. Just wait until cardholders deploy AI agents to maximize their signup bonuses and promo balance transfers across issuers, to say nothing of the more nefarious use cases that sophisticated fraud networks will inevitably draw up.
The bottom line: you can no longer rely on human friction to slow down unprofitable consumer activity.
No Exaggeration: Algorithms Are Gilded; New Data Is Gold
The biggest area for AI-based skepticism is in underwriting — not because AI in underwriting doesn’t exist, but because it’s old hat. Popular modeling techniques like Gradient Boosting Machines (GBM) and XGBoost, which have been around for several years, are a form of machine learning, which is itself a subset of artificial intelligence. These models are clearly a step up from their predecessors in predictiveness, but not necessarily in “AI-ness.” As the risk executive Kevin Moss noted in a recent podcast, the machine learning technique on which GBM and XGBoost are based (CART, short for Classification And Regression Trees) was developed in the mid-1980s.
I say that not to downplay the value that these techniques can add to your unit economics. When I was a risk manager at an early-stage lending startup, we implemented a GBM model and saw a pair of results that are usually impossible to get in tandem: a substantial increase in approval rates and a reduction in credit losses. But this was six years ago — three years prior to the public release of ChatGPT and long before AI became the Atlas that supported the entire stock market on its shoulders.
If you’re a banker who is not currently using a best-in-class machine learning model on your proprietary data, you’ll experience a step-function improvement once you implement one. However, if you’re already using one, don’t expect another step-function change simply because an underwriting product has “AI” in its name. As Fintech Takes founder Alex Johnson has noted on various occasions, we’ve come very close to squeezing all of the predictive value out of the legacy credit bureau fruit that dominates traditional credit underwriting. If it’s using basically the same dataset to which you already have access, it’s only going to give you a few more decimal point droplets of Area Under the Curve lift.
At this point, the only way to get a lot more credit performance juice is to find another kind of fruit: a new high-impact data source that yields orthogonal predictive value on top of legacy credit reports and scores. Fortunately, the cash flow data tree is ripe, ready to pick, and bearing exactly the kind of fruit that lenders crave. In addition to helping to mitigate the effects of AI-driven credit score manipulation, it can provide a tangible profitability boost without the silly hype machine.
The point here is not to understate the technical sophistication that goes into crafting highly predictive and compliant cash flow attributes and model scores. It is instead to acknowledge that cash flow’s “secret sauce” did not suddenly spawn from a fancy-sounding deep learning algorithm but rather was the result of years of painstaking work to acquire, clean, synthesize, and fine-tune the raw data elements.
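The marginal-lift argument can be made concrete with a toy calculation. The sketch below is purely illustrative — every score and outcome is invented — but it shows what “orthogonal predictive value” means in AUC terms: the bureau-only risk score ranks most defaulters above non-defaulters, and blending in a hypothetical cash flow signal fixes some of the misordered pairs that no amount of reweighting the bureau data alone could fix.

```python
def auc(scores, labels):
    """Area Under the ROC Curve via its rank interpretation: the probability
    that a randomly chosen defaulter outscores a randomly chosen non-defaulter."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy portfolio: 1 = defaulted, 0 = repaid (all values invented)
labels = [1, 1, 1, 0, 0, 0, 0, 0]
# Bureau-only risk scores (higher = riskier)
bureau = [0.90, 0.40, 0.30, 0.50, 0.20, 0.35, 0.10, 0.25]
# Same borrowers after blending in an orthogonal cash flow signal,
# which surfaces risk the bureau file missed on two of the defaulters
blended = [0.90, 0.60, 0.45, 0.50, 0.20, 0.35, 0.10, 0.25]

print(f"bureau-only AUC: {auc(bureau, labels):.3f}")   # 0.800
print(f"blended AUC:     {auc(blended, labels):.3f}")  # 0.933
```

The lift here comes entirely from the new signal reordering borrowers the bureau data ranked incorrectly — which is the kind of improvement a fancier algorithm on the same dataset cannot deliver.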
Transforming Credit Operations
Unlike underwriting, the functional areas known collectively as “operations” have seen a much slower uptake of cutting-edge tech. Change is coming. If mass computing and generalizable credit scores made the 1990s the quantum-leap decade of credit underwriting, generative AI will make the 2020s the quantum-leap decade of credit operations.
Consider customer complaints and disputes — process areas that have humbled even the richest, most sophisticated lenders. Their sheer volume and complexity push even the best systems to the brink. Most lenders don’t have enough human resources to devote to reviewing each inbound contact in a thorough and timely manner; many even rely on random sampling to get a gauge on which product features need fixing first. The logistical hurdles inherent in data that arrives in unstructured formats via several different channels (text, chatbot, email, live phone call, voicemail, third-party sites and portals) exacerbate the problem. It’s a recipe for inefficiencies at best, or compliance and reputational nightmares at worst.
Standard tabular infrastructure was always a poor fit for this kind of complex, text-based data, but that’s exactly where the newest crop of AI models excel. Sophisticated AI agents consisting of components such as large language models, long-term memory, and access to the lender’s internal tools and systems can help spin diffuse pieces of straw into gold. Use cases include:
Standardizing disparate data sources and formats into a single, digestible, computer- and human-readable database;
Labeling and categorizing contacts by type and severity; and
Determining and executing an appropriate form of response/recourse.
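The first two use cases above can be sketched as a simple pipeline. This is a minimal toy illustration, not a production design: the unified schema, channel names, and categories are all invented, and a few keyword rules stand in for the LLM that would do the actual classification.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Unified record for an inbound customer contact, whatever the channel."""
    channel: str      # e.g. "email", "chat", "phone_transcript"
    text: str
    category: str = "uncategorized"
    severity: str = "unknown"

def categorize(contact):
    """Label a contact by type and severity. In production this step would
    call an LLM; keyword rules stand in here so the sketch runs end to end."""
    text = contact.text.lower()
    if "dispute" in text or "unauthorized" in text:
        contact.category, contact.severity = "billing_dispute", "high"
    elif "fee" in text:
        contact.category, contact.severity = "fee_complaint", "medium"
    else:
        contact.category, contact.severity = "general_inquiry", "low"
    return contact

# Disparate channels funneled into one reviewable, queryable queue
raw = [
    ("email", "I want to dispute an unauthorized charge on my card."),
    ("chat", "Why was I charged a late fee this month?"),
    ("phone_transcript", "Calling to update my mailing address."),
]
queue = [categorize(Contact(channel=c, text=t)) for c, t in raw]
for item in queue:
    print(item.channel, item.category, item.severity)
```

Once every contact lands in one schema with a category and severity, the third use case — routing to an appropriate response — becomes a straightforward lookup rather than a manual triage exercise.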
As a former risk manager at both a large bank and an emerging fintech, I would’ve loved to have all our disparate complaints datasets organized and categorized into one single source, along with a prepopulated treatment recommendation. In addition to saving me and my colleagues a ton of time and energy, it would have granted us real-time visibility into every inbound customer contact — no more clunky random samples on a monthly or quarterly basis.
Another ops workstream that will benefit from the adoption of AI is regulatory compliance. Primary lending banks, sponsor banks, and fintechs all know the stress of adhering to differing state-by-state regulatory requirements. Think back to the mid-2010s fallout of the Madden v. Midland decision and all the resulting carveouts in terms and funding that lenders started applying based on each applicant’s billing address. Now add several additional layers of state-by-state complexity that will become the new norm in the aftermath of the federal government’s current rulemaking, supervisory, and enforcement pullback. Sprinkle on top the potential liability involved in enforcement “lookbacks” to prior years if political winds change in the future. This patchwork, constantly-in-flux compliance outlook will be a nightmare for lenders attempting to go it alone. Fortunately, AI agent reinforcements are on the way.
Conclusion
You can already find AI agents hard at work for many of the country’s leading banks and fintechs, systematically and continuously reviewing internal policies and external communications for federal and state-by-state compliance. Anything that presents a potential liability gets flagged for further review. Subject matter experts in the Ops, Risk, Engineering, and Legal/Compliance departments have the final say on whether and exactly how to adjust company policies and code based on the information provided.
That last point is not a minor footnote. AI credit ops management solutions don’t replace human decision-making; they help managers and their teams streamline the information-gathering processes so they can devote their time to refining and executing the best possible decisions. It’s a lesson we should all remember, whether we’re facing AI as a threat, an exaggeration, or an opportunity: we’re in control of how we respond to and benefit from it.
The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.
If an idea matters, you’ll find it here. If you find an idea here, it matters.
Interested in contributing to Open Banker? Send us an email at [email protected].