
Freeing Financial Advice from Financial Advisors

Written by Amias Gerety

Amias Moore Gerety is a partner at QED Investors, a leading global fintech venture firm. He previously served as the Acting Assistant Secretary of the Treasury for Financial Institutions and was a key architect of the post-crisis financial reforms. 

In 2018, my wife and I decided to keep our growing family in Washington, DC, which required adding space to our row house. Since we weren’t sitting on a pile of ready cash, we needed to find the best way to finance this expenditure. Luckily for us, we had good credit and good salaries, so we had options: a margin loan against our savings in the stock market, a cash-out refinance, or a home-equity line of credit (HELOC) against the equity in our home. What we didn’t have was someone to help us pick the right option. There were lots of shopping sites online but no one who could confidently help us compare the all-in costs and benefits of each option. Theoretically, no one should have been in a better position to make this decision – a former Treasury official and venture capital partner married to a former tax attorney. We dutifully started to build our own spreadsheets. Fixed vs. floating rate? Borrowing capacity? Closing costs? How much friction?
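
For the curious, the spreadsheet logic looked something like the minimal sketch below. Every rate, fee, and amount is a hypothetical placeholder rather than a real quote, and the interest-only math ignores amortization and rate resets – but even this crude version shows why the comparison resists back-of-the-envelope math.

```python
# A minimal sketch of the comparison we were building by hand. All rates,
# fees, and amounts are hypothetical placeholders, not market quotes.

def all_in_cost(amount, annual_rate, years, closing_costs=0.0):
    """Crude all-in cost: interest-only carry plus upfront fees.
    A real comparison would amortize and model floating-rate resets."""
    return amount * annual_rate * years + closing_costs

amount, years = 300_000, 5
options = {
    "margin loan (floating)": all_in_cost(amount, 0.065, years),
    "cash-out refi (fixed)": all_in_cost(amount, 0.045, years, closing_costs=9_000),
    "HELOC (floating)": all_in_cost(amount, 0.055, years, closing_costs=1_500),
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} ~${cost:,.0f} over {years} years")
```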

At this moment, a “financial advisor” from our life insurance company called to ask whether we wanted to do an annual “financial check-up.” Naturally, we scheduled the call – we actually needed financial advice! Fifteen minutes in, we had our answer. He had no idea; he could really only give us advice on how to buy more life insurance.

A Human Problem

Finance is complicated. The underlying problem is that financial advice requires time, engagement, and expertise, so as a society the best way we’ve found to get decent advice into the hands of financial consumers is to have another person deliver it. But that choice carries two consequences with it: conflicts of interest and scarcity.

Most financial “advice” is actually salesmanship and structurally conflicted; therefore, most regulation around financial advice is centered on managing those conflicts. For example, just look at how hard the investment industry fought the Department of Labor’s fiduciary rule, which would require investment advisors to recommend only investments that are in the best interest of retirees, not of the advisors themselves.

In fact, “financial advisor” isn’t a term with a single definition under U.S. law. That’s what allowed my insurance agent to call himself a financial advisor. The “gold standard” for financial advice comes from certified financial planners: individuals who are trained, tested, and certified by the Certified Financial Planner Board of Standards. They are often paid on an hourly basis, and even when they act as salespeople, they are required by their professional certification to act in the client’s best interest. Thinking about these costs as a percentage of assets helps frame the expense. A typical investment advisor, who is also required to act in their clients’ best interest, might get paid 1% of assets per year. For a million-dollar account, that would be $10k of fees; for a household with $50k of savings, it would be only $500 – effectively ‘buying’ just an hour or two of a professional’s time.

Scarcity is a more complex problem. In economic terms, financial advice is subject to Baumol’s cost disease[1] – it gets increasingly expensive in real terms because it is delivered one-on-one by human beings. The result is that most people don’t get quality financial advice – or professional financial advice of any kind. Only slightly more people say that they get advice from a financial advisor (19-28%) than say that they get no financial advice at all (14-21%). No wonder, when unbiased advice from a certified financial planner typically costs thousands of dollars per year. This problem is likely to get worse: despite projections of growth in the number of advisors, nearly 40% of financial advisors plan to retire in the next decade, and the needs of consumers are exploding in both intensity and complexity[2]. Of course, those with the most resources are able to pay for an unbiased financial planner, while everyone else is relegated to salespeople and TikTok ‘finfluencers.’

The lack of high-quality, unbiased, and accessible financial advice is a real problem. Scams are more likely to succeed against consumers with lower levels of financial literacy. Lower-income consumers spend nearly three times as much on interest and fees as higher-income households. And research on financial stress and mental health suggests that poor mental health outcomes can be alleviated by more accessible financial guidance.

The most important step on the path to a solution is to admit you have a problem.  As policy makers, we have to start by acknowledging that the system we have in place today makes financial advice too costly, too conflicted and too scarce. 

Tantalizing Power of AI Knowledge

The power of generative AI models offers a potential way out of this bind, but seizing it will require us to forge new policies and new policy frameworks – ones focused on objective standards, not simply process obligations or certifications.

It may go without saying, but the power of the current crop of generative AI models is spectacular. Their sheer facility with topics and dialogue is breathtaking. Even right out of the box, they offer cogent explanations of calculus (try it with your high schooler) and can suggest zip codes that have both affordable housing and high-quality school districts.

None of this is to wave away the problems TODAY with letting ChatGPT tell people what to do with their financial lives. LLMs remain rife with hallucinations and are hard to hold to constraints. If you’ve spent enough time with them, you can see a bundle of contradictions – near-infinite knowledge and almost no judgment (which is why they’ve had trouble counting the Rs in strawberry). The most apt description is that they are “bullshitters” whose processes simply prioritize what sounds right, rather than any sense of what might be true.

That said, one thing I feel when interacting with these systems is that they are infinitely patient, capable of generating answers at whatever level of sophistication I ask. Unlike an FAQ or a Wikipedia page, you never get to the end of a thread with one unanswered question in your mind.

We know that every person describes their own life in unique terms, and that people need the ability to ask and re-ask questions until they feel they understand. We also know that talking about money fills many people with shame and fear, leading to stressed responses or avoidance. This makes AI a compelling direction in which to seek solutions to the current problems in financial advice.

After all, a great financial advisor is patient, thoughtful, and a good explainer – someone who can articulate big tradeoffs in simple terms but also explain how a financial product could perform in edge cases. Moreover, a great financial advisor can discuss whatever financial question clients have while also viewing the broader picture of larger financial goals. They gather as much information as they can – basic financial facts, risk appetite, and plans – and assemble investment strategies or other financial instruments that will maximize outcomes for the client.

If you squint, that’s close to a working definition of an algorithm – which is why we have had partial solutions in the form of robo-advisors helping investors through automated systems at scale for a decade. And the fiduciary standard, while high in a moral sense, is not a complex standard for purposes of investing advice; robo-advisors demonstrate that it can be set as a rule for an algorithm.
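
To make that concrete, here is a deliberately stylized sketch of how a best-interest rule can be written down as code, in the spirit of what robo-advisors already do. The product data and the single “lowest cost wins” rule are illustrative assumptions; real suitability logic is far richer.

```python
# A stylized best-interest rule of the kind robo-advisors encode.
# Products and fields are hypothetical; real suitability rules are richer.
from dataclasses import dataclass

@dataclass
class Fund:
    name: str
    asset_class: str
    expense_ratio: float  # annual fee as a fraction of assets

def recommend(funds, target_asset_class):
    """Among funds matching the client's target allocation, pick the
    lowest-cost one -- a crude stand-in for 'the client's best interest,
    not the advisor's'."""
    suitable = [f for f in funds if f.asset_class == target_asset_class]
    return min(suitable, key=lambda f: f.expense_ratio) if suitable else None

funds = [
    Fund("Total Market Index", "us_equity", 0.0003),
    Fund("Active Growth Fund", "us_equity", 0.0095),  # pays a bigger commission
]
print(recommend(funds, "us_equity").name)  # -> Total Market Index
```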

AI could also enable an exceptionally powerful tool to answer the question most people have when they seek advice: what is the best financial choice for me? This “tell me what to do” model is actually one of the best ways to provide high-quality outcomes for individuals. Countless studies in behavioral economics demonstrate that automated actions and defaults enable better financial outcomes[3]. Moreover, there is ample literature in financial health showing inconsistent and small impact from financial literacy education, whereas one of the key best practices in the field is to provide advice “connected to an upcoming decision that matters to them, at the time when they can put it to use, with concrete steps they can follow.”


The Best We Could Do

Financial advice exists within the muddle of real-world outcomes. What if you saved diligently, but the stock market still crashed just before retirement? What if you bought term life insurance but died the day after the policy expired, or wanted to stop paying a decade before your death?

Largely because of the complexity of people’s financial lives and the inherent difficulty of predicting the future, all regulation of financial advice rests on the deep structure of the reasonable person theory.  

The reasonable person theory exists to transform the complicated nature of the real world into the procedural clarity of law, regulation, and court cases. It asks a simple question: when something goes wrong, is it “reasonable” to conclude that the actors should have foreseen the bad outcome or acted to prevent it?[4]

For a long time, humans were the only technology around capable of giving advice, and the reasonable person theory was the best foundation for certification and licensure regimes to support good outcomes and provide accountability in unfortunate situations. But that legal foundation always assumes a human at the center. It clearly isn’t equipped to certify an AI, which would ace any certification exam without understanding the associated moral code. Nor can existing regimes govern the use of AI or take action when things go wrong.

“Both” Isn’t the Answer

From a regulatory perspective, the most obvious answer to the problem of AI-driven financial advice is to follow the “co-pilot” model. As in other domains, we don’t really think that AI is capable of delivering the goods on its own, so the plan is to get efficiency by placing an LLM-driven co-pilot next to our financial services professionals. This is the model embedded in the usage policies of OpenAI, Anthropic, and Google, which highlight finance and health as high-risk use cases that should be implemented with human oversight.

These co-pilots are the easy answer because they evade the question of certification – leaving AI in the simple role of a ‘tool,’ just like the other software and planning tools professionals already use. The actual responsibility, both legally and morally, remains in the hands of the reasonable person providing the advice.

But humans are not scalable in this way, and the problems of both quality and availability of financial advice can’t be solved with these ‘bionic’ arrangements, which leave us in the same fundamental economic model we have today.

Nor can co-pilot models substitute for new policy frameworks. In the best case, co-pilots will only incrementally increase access and affordability. In the worst, assembly lines of licensed humans will rubber-stamp the outputs of an AI machine – obfuscating accountability without surmounting the inevitable problems with AI.

What Should We Do?

The current U.S. regulatory opening provides a window for AI-driven financial advice to exist, but only if it is not connected to the sale of financial products. The good news is that this means AI-based financial advice can and should be provided under new policy and ethical frameworks that could be better than the traditional certification and licensure regimes we have today.

The argument here is that we must understand both the possibilities of LLM-provided financial advice and the reality that current policy frameworks won’t enable the powerful expansion of access it could promise. While these LLMs are expensive in computational terms, they are extremely cheap compared to a human being charging hundreds of dollars an hour. And like me, people want integrated financial advice, because each financial decision is connected to a dozen others. Properly designed, an AI-driven financial advisor could be not just cheap but nearly infinitely broad.

To meet this opportunity, policy makers and designers of AI-based financial advice should adopt the same principle of fiduciary interest. While it might seem odd to apply one of our oldest standards to the ‘black box’ of an AI system, the principles used to train a model are often incredibly simple. For example, though the algorithms that drive social media recommendations are complex, it is well understood that they are optimized for the simple principle of capturing our attention. Even for OpenAI, the training principle can be stated as simply as predicting the next best word.

Moreover, disclosure and auditability of these training principles would be technologically simple to demonstrate, since they are created in human terms prior to training and apart from the workings of the model itself. The natural tendency for companies would be to tune this optimization on the principle of “sell more products” or “sell the products that make the most money for us.” Those types of training principles could simply be prohibited. Done properly, therefore, these systems could be less vulnerable to the thesis creep, motivated reasoning, and recommending of ‘favorites’ that plague almost all human financial advisors.
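
A schematic sketch of what that prohibition could look like follows. The scoring functions are hypothetical stubs, not any vendor’s actual training code; the point is only that the optimization principle itself is simple enough to state, disclose, and audit.

```python
# Schematic sketch of competing optimization principles. The scoring
# functions below are hypothetical stubs, not any vendor's training code.

def revenue_to_firm(rec):
    # Stub: what the firm earns when this product is sold.
    return rec["commission"]

def projected_client_outcome(rec):
    # Stub: the client's expected gain, net of every fee.
    return rec["expected_return"] - rec["all_in_fees"]

def conflicted_objective(rec):
    """The tendency a regulator could prohibit: optimize the firm's take."""
    return revenue_to_firm(rec)

def fiduciary_objective(rec):
    """The principle a regulator could require and audit instead."""
    return projected_client_outcome(rec)

recs = [
    {"name": "house fund", "commission": 500, "expected_return": 4_000, "all_in_fees": 1_200},
    {"name": "index fund", "commission": 10, "expected_return": 4_000, "all_in_fees": 150},
]
print(max(recs, key=conflicted_objective)["name"])  # -> house fund
print(max(recs, key=fiduciary_objective)["name"])   # -> index fund
```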

Another key principle for allowing AI to provide financial advice must take advantage of the fact that financial products have concrete mathematical dynamics. We could demand that engines of financial advice be structured to connect the underlying math with the language-generating capabilities of an LLM. In today’s world, the average salesperson of a financial product – “the advisor” – almost never has the capacity, or even the access, to work with the financial model of the product they’re offering or designing a plan around. For an AI system, the ability to calculate specific instances for the customer and explore an infinite number of personalized scenarios is a feature, not a bug. In fact, policy makers should explore ways to make this a requirement.
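
As a sketch of that architecture, under the assumption that the advice engine routes every quoted number through a deterministic calculator rather than letting the LLM generate it: the language model narrates and explains, while the figures come from exact formulas like the standard fixed-rate amortization below.

```python
# Sketch of connecting the math to the language: the LLM explains, but any
# quoted figure comes from a deterministic model it is required to call.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula -- computed, not generated."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

# A personalized scenario a customer might ask the advisor to explore:
for rate in (0.045, 0.055, 0.065):
    print(f"at {rate:.1%}: ${monthly_payment(300_000, rate, 30):,.2f}/month")
```

Regulators could then audit the calculator separately from the conversation, since a formula is inspectable in a way that free-form text is not.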

Finally, and most importantly, this future requires us to recognize that human fallibility and motivated reasoning are structurally accepted under our current regulatory approach. Certification, training, and licensure are imperfect crutches for governing a financial advice industry that is underperforming its potential today. The challenge before us is that machines are NOT human; they don’t have ethical boundaries or moral compasses that we can reasonably rely on. Therefore, in order to unlock the promise of AI-driven financial advice, we must move away from simple ethical commitments and towards more concrete definitions of the outcomes that we would like customers to achieve.

The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.

[1]  Baumol, W. J.; Bowen, W. G. (1965). "On the Performing Arts: The Anatomy of Their Economic Problems". The American Economic Review. 55 (1/2): 495–502.  https://www.jstor.org/stable/1816292  Using the example of live music, Baumol and Bowen observe that as the broader economy gets more productive any area that does not match those productivity gains will become relatively expensive.

[2] Consumers report feeling overwhelmed by their options while also saying that their financial situation worsened in 2023 and that they are worse off than ever before, based on reports from Credit Karma and Accenture.

[3] Table 1 in this TIAA research note provides a long list of behavioral findings about how our human biases prevent optimal actions.   

[4] Many sources (e.g., Wikipedia) cite an 1837 tort case, Vaughan v. Menlove, or a 1903 case referring to “the man on the Clapham omnibus” – a normal bystander on a common commuting line – but more recent scholarship pushes the original usage all the way back to 1703. In fact, its tentacles stretch deep into almost every common-sense legal standard, like a jury’s instructions around “reasonable doubt” or even the application of jury trials at all.

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

If an idea matters, you’ll find it here. If you find an idea here, it matters. 

Interested in contributing to Open Banker? Send us an email at [email protected].