The Coming Wave of State AI Regulations on Financial Services
Written by Pat Utz

Pat Utz is CEO & Co-Founder of Abstract. He founded Abstract alongside Matthew Chang (COO) & Mohammed Hayat (CTO) in 2020, and the company has since grown into a VC-backed AI firm with teams in New York and Los Angeles.
As a first-generation American, Pat is passionate about increasing equitability by making complex datasets accessible to the general public. Much of this fascination comes from his parents, who immigrated to the United States after having dabbled in Argentine politics.
Pat holds a B.S. degree in Engineering & Electrical Engineering from LMU. He was named to the 2023 Forbes 30 Under 30, LA for being at the forefront of AI-powered disruption in GovTech.
Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.
Right out of the gate, President Donald Trump upended Washington, D.C. with 142 executive orders in the first 100 days of his new administration. Two of those executive orders (EOs), "Removing Barriers to American Leadership in Artificial Intelligence" and "Strengthening American Leadership in Digital Financial Technology," aim to reframe the United States' stance on artificial intelligence and cryptocurrency, hotly contested issues for financial services stakeholders. Surprisingly, with the AI EO, the administration is broadcasting that it is not interested in supporting any federal AI regulations, even after the European Parliament passed the EU Artificial Intelligence Act (EU AI Act) in 2024.
While federal AI policymaking has paused, states are actively pursuing their own policies. According to the Business Software Alliance, 700 legislative proposals on AI were considered in 2024, compared to 191 proposals in 2023. Proposed and recently enacted AI-related legislation has come from 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C. In California alone, 284 bills that referenced both AI and finance were proposed between January 1, 2023, and October 23, 2024, according to Abstract, an AI platform for proactive regulatory risk analysis. (Full disclosure: I am a co-founder and the CEO of Abstract.) Given the pace of this legislative activity, and the lessons learned from the lack of early regulation around social media, it's clear that AI regulation isn't just on the horizon; it's a necessary and inevitable step to prevent a "wild west" scenario for states adopting AI or serving as a home to businesses looking to leverage it.
MORE DATA MEANS MORE REGULATIONS
The surge of new state AI legislative proposals coincides with the rapid adoption of AI. Two years after ChatGPT's debut, generative AI is already being used by 39.4% of the U.S. population, according to "The Rapid Adoption of Generative AI." This report notes that usage and adoption of generative AI has been "faster than adoption of the personal computer and the internet." Notably, the report found that Finance/Information/Real Estate outpaced all industry groups in AI usage at 51.2%, which makes sense given that the banking and financial sector generates an enormous volume of data daily, driven by the surge in digital transactions.
The growth in data is happening in parallel with the growth in government regulations. This is not only unsurprising but understandable. AI has proven it's here to stay, and it's far better to regulate alongside its growth than to impose sweeping changes later that could disrupt how banking and financial institutions have come to rely on it. Consider that in 1970 there were approximately 400,000 restrictions in the Code of Federal Regulations; by 2019, that number had swelled to one million, according to QuantGov. There are now over 145,000 governing entities in the U.S. (cities, counties, states, and federal entities) that pass 3,000 to 4,500 final rules each year, according to the Office of the Federal Register.
LEGISLATIVE TREND: CONSUMER PROTECTION FROM AI – COLORADO, UTAH, AND CALIFORNIA
Of the 450 AI bills proposed by state lawmakers in 2024 and tracked by the National Conference of State Legislatures (NCSL), consumer protection was one of the top three legislative trends of the year.
Colorado, Utah, and California were the first three states in 2024 to enact consumer protection regulations around AI and financial services.
COLORADO
In May 2024, Colorado became the first state to enact AI legislation governing the development and deployment of AI systems, similar to the EU AI Act, with the Colorado Artificial Intelligence Act. It takes effect in February 2026.
According to KPMG, the new law "is directed to persons conducting business in Colorado as 'developers' or 'deployers' of 'high-risk artificial intelligence systems' (all as defined in the law)." It requires both developers and deployers to "use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination stemming from the intended and contracted uses of high-risk AI systems."
UTAH
Enacted in March 2024, Utah's Artificial Intelligence Policy Act (SB 149) is the first comprehensive state AI legislation in the United States mandating that businesses maintain transparency and accountability in their use of generative AI technologies.
The article "Utah Enacts First AI Law – A Potential Blueprint for Other States, Significant Impact on Health Care" points out:
The Utah AI Policy Act covers any commercial communications using "generative" AI. More broadly, the law defines generative AI as any "artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight." This definition would cover any real-time automated responses to user questions or "prompts," ranging from marketing bots to appointment schedulers and customer service responses.
CALIFORNIA
Signed into law in 2024, California's new "Generative Artificial Intelligence: Training Data Transparency" law requires developers of generative artificial intelligence (GenAI) systems and services to publicly disclose what data is used to train their systems.
According to Langlois Lawyers LLP's summary of the new law, "AI back in the regulatory spotlight: California passes new legislation":
The rapid growth of AI-generated content poses numerous challenges, including the increasing difficulty for consumers to distinguish this type of content from human-created content. California aims to promote greater transparency from companies working in this field so that users can knowingly interact with AI-generated content and make informed choices.
California’s push to prioritize safe and transparent AI systems sets a strong precedent for other forward-thinking states considering similar safeguards. State-level transparency laws have often triggered a regulatory domino effect—eventually influencing federal action. While AI transparency remains primarily a state-driven effort for now, I believe it’s only a matter of time before it serves as a foundation for broader federal regulation.
While strict compliance is essential in the state where a firm is headquartered, institutions incorporating AI into their business will want to build an internal framework that makes it relatively seamless to operate in other jurisdictions. As of this writing, more than a dozen states have pursued similar measures, including Virginia's High-Risk AI Developer & Deployer Act (HB 2094).
CONCLUSION
In a year that has already upended many federal regulations, banks and other financial organizations do not have the luxury of waiting to see what impact AI regulation will have on their businesses at the state level. With proposed AI legislation more than tripling from 2023 to 2024, stakeholders need to be vigilant. They also need to recognize that the road ahead for AI regulations on their industry is going to be anything but straightforward. The early AI consumer protection regulations passed by Colorado, Utah, and California will inform other states in the months and years to come, and will likely influence future national policy decisions.
The most important steps stakeholders can take now are to create proactive strategies with policy teams (whether internal or hired as consultants) and to deploy monitoring processes for regulations, legislation, and media. Investing more deeply in shaping and preparing for public policy shifts will allow them to identify and analyze proposed regulatory changes that may become costly compliance requirements down the line. It's much easier to adjust internal processes while public policy is in draft form, before it becomes codified and leads to costly fines. While it may be easy for banks to view state-level AI regulations as a burden, forward-thinking institutions will recognize them as an opportunity to turn inevitable regulatory "chaos" into a competitive advantage, setting themselves apart by building evergreen internal compliance frameworks that address the major themes of AI regulation across all states. This approach will enable peak compliance and agility for financial institutions and banks operating in multiple states, each with slightly different regulatory standards.
The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.
If an idea matters, you’ll find it here. If you find an idea here, it matters.
Interested in contributing to Open Banker? Send us an email at [email protected].