
California, Not Washington, Will Shape AI Policy

Written by Pat Utz


Pat Utz is CEO & Co-Founder of Abstract. He founded Abstract alongside Matthew Chang (COO) in 2020; the company has since grown into a VC-backed firm with teams in New York and Los Angeles.


Since President Trump's inauguration exactly a year ago, a battle has raged between the administration and the states over who controls AI policy. The administration struck first with its July AI Action Plan, which declared the Federal government's "right" to own AI policy over the states. Two months later, California responded by enacting Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), one of the most consequential state-level AI laws ever enacted. The administration escalated in December with an executive order seeking to preempt state AI laws through litigation threats and funding restrictions. Days later, New York enacted the Responsible AI Safety and Education Act (RAISE Act).

It wasn't just California and New York seeking to set their own direction on AI policy in 2025. Legislators in all 50 states and the territories introduced over 1,200 AI bills and enacted 145 laws, nearly double the bills introduced and 46% more laws enacted than in 2024.

California’s Power Move

California’s eagerness to set AI policy shouldn’t be surprising: the state is the fourth-largest economy in the world, is home to 32 of the top 50 AI companies, and leads the U.S. in demand for AI talent.

California became the first U.S. state to regulate advanced AI systems with the Transparency in Frontier Artificial Intelligence Act (TFAIA), which, according to a press release from Governor Newsom, was designed to “enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while also continuing to spur innovation in these new technologies.”

Scott Singer and Alasdair Phillips-Robin, fellows in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, point to TFAIA as the first U.S. law that addresses “potentially catastrophic risks from advanced AI systems.” Specifically, Singer and Phillips-Robin explain in their article:

The law introduces protections for whistleblowers inside AI labs, mandatory reporting of certain safety incidents, and requirements that large developers publish so-called frontier AI frameworks to explain how they plan to mitigate catastrophic risks.

In addition, the law grants the California Attorney General enforcement authority, including the ability to impose civil penalties for noncompliance, signaling the state’s commitment to proactive regulatory oversight rather than reliance on voluntary adherence to guidelines.

Furthermore, Singer and Phillips-Robin write:

As the home of many of the world’s largest and most important AI companies, California has a unique role in AI policy. It is one of two jurisdictions with the greatest capacity to enact legally binding policies that affect frontier AI developers. The other, of course, is the U.S. federal government. But Washington, so far, has largely declined to act. That gives California huge influence over national, and even global, AI policy. SB-53 could provide a blueprint for other states and governments to follow including, perhaps, a future Congress. 

Leading on policy isn’t new for California. The state has repeatedly been ahead of Washington in setting major regulatory agendas, from consumer privacy to climate policy. The California Consumer Privacy Act (CCPA) became the model for other states’ data privacy laws, just as the state’s environmental and vehicle emissions standards have long driven federal action. This pattern, in which California creates and implements policy and other states then model their own policies on it, is known as the California Effect.

States Drive an Unprecedented Wave of AI Laws

While California is leading in setting the direction of AI policy, the rest of the states have not been quiet. During the 2025 legislative session, every state, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced or passed AI legislation, with bill volume roughly double that of 2024. According to the National Conference of State Legislatures, 38 states adopted or enacted approximately 118 AI measures.

The key AI concerns that legislators moved on in 2025 were:

  • Protecting against deepfakes and synthetic media; 

  • Requiring transparency and accountability in AI systems; 

  • Protecting employment and labor rights; 

  • Mitigating bias in AI systems and protecting civil rights; and

  • Safeguarding children and vulnerable populations.

Since the introduction of California’s SB 53, three states have taken approaches mirroring it:

  • New York enacted the Responsible AI Safety and Education Act (RAISE Act, S 6953B), becoming the second state to pass comprehensive frontier AI legislation. New York Governor Kathy Hochul stated, “By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols.” Hochul also commented that the new law “builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public.”

    According to the Transparency Coalition, the RAISE Act focuses “on ensuring the safety of AI models that cost more than $100 million to train or exceed a certain computational power. The legislation aims to prevent future AI models from unleashing ‘critical harm,’ defined as the serious injury or death of 100 or more people or at least $1 billion in damages.”

  • Michigan introduced the AI Safety and Security Transparency Act (House Bill 4668), which would require large developers to implement safety and security protocols to manage critical risks of foundation models, prescribe the duties of large developers, protect certain employees, define the powers and duties of certain state and local governmental officers and entities, and prescribe civil sanctions and remedies.

  • The Virginia legislature passed the High-Risk AI Developer & Deployer Act (HB 2094), which closely resembles the transparency and obligation approach of SB 53. Although the governor vetoed it, the election of a new Democratic governor increases the chances that it will be enacted in 2026.

Five states have enacted significant AI legislation since the rollout of Trump’s AI Action Plan.

Trump’s AI Action Plan Stalls 

Contrast that state-level action with the Federal side, where there has been very little movement since July. The AI Action Plan called for the Office of Management and Budget to issue detailed guidance for agencies within 120 days, by November 20, 2025, a deadline later pushed to December 11, 2025, due to the government shutdown, furloughs, and job reductions. Axios reported that the Commerce Department laid off up to 600 employees central to executing the AI Action Plan.

According to Bloomberg's Oma Seddiq, President Donald Trump's "AI Action Plan" calls for increasing federal AI adoption, promoting standards, and deregulation. But, Seddiq continues, "Agencies responsible for carrying out those objectives don't have the funding to continue operations, and Congress has yet to move forward policies in support."

However, no such guidance has been released as of this writing. Instead, the Trump administration's December 11 executive order focused on preempting state AI laws through litigation threats, funding restrictions, and federal agency action, not on implementing the Action Plan's federal AI adoption goals.

California's regulatory prowess and AI dominance, coupled with other states plowing ahead, give AI policy momentum to the states. This pattern mirrors a broader 2025 trend documented in “The Great Dismantling,” which found that states "rushed to fill the void" left by federal regulatory retreat. (Disclosure: The report was recently released by Abstract, the company I co-founded.) Unless the Federal government passes comprehensive legislation through Congress, the future of AI policy will continue being written in Sacramento, Albany, and Denver, not Washington.

The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

If an idea matters, you’ll find it here. If you find an idea here, it matters. 

Interested in contributing to Open Banker? Send us an email at [email protected].
