AI for Federal Rulemaking

Written by Mike G. Silver


Mike G. Silver is a partner at Spencer Fane in their Washington, DC, office, where he advises financial services and fintech companies on regulatory compliance, product design, and policy advocacy matters. Mike previously served more than 12 years at the Consumer Financial Protection Bureau (CFPB), working under six CFPB directors and three Presidents to build the agency and shape its policy and rulemaking initiatives.

Open Banker curates and shares policy perspectives in the evolving landscape of financial services for free.

In the 1980 all-time classic comedy Airplane!, Captain Ted Striker is reluctantly called into duty to land the plane when all three pilots fall ill from eating bad fish (including Showtime-era Lakers superstar Kareem Abdul-Jabbar). Striker, overcoming the trauma of crashing his aircraft during the war, manages to execute a messy but ultimately successful landing.

After all the passengers deplane, another character re-emerges: Otto, the inflatable autopilot. First seen mid-movie getting frisky with a flight attendant, Otto returns as the final credits roll, guiding the plane as it takes off into the night and winking at the camera. The scene, of course, is mainly there for comedic effect, but did it portend events 45 years down the line?

Nothing is funny about the profound disruption happening right now in Washington, DC. The federal government just endured a record-long shutdown. The acting director of the Consumer Financial Protection Bureau (CFPB), where I served for more than 12 years, has taken a series of rapid-fire steps to shutter the agency by year-end. These actions align with their newfound view, bolstered by a Department of Justice legal opinion, that the CFPB can no longer request funds from the Federal Reserve Board because the Fed lacks “combined earnings” from which the CFPB may draw funding under the Dodd-Frank Act. Across federal agencies, mass reductions in force, significant funding cuts, and wild policy fluctuations are prevalent. 

The AI Deregulation Decision Tool: Delete a Rule in 2.4 Hours!

At root, the Trump administration is undertaking an aggressive, and in some ways unprecedented, effort at deregulation. 

This past summer, the Washington Post published a story about the Department of Government Efficiency (DOGE) spearheading an effort to accelerate and streamline these deregulatory efforts. How? By adopting the “DOGE AI Deregulation Decision Tool.” The CFPB and the Department of Housing and Urban Development (HUD) were cited as pilot agencies for this initiative. According to the Post, “The tool has already been used to complete ‘decisions on 1,083 regulatory sections’” at HUD in under two weeks and “to write ‘100% of deregulations’” at the CFPB. The article included a link to a DOGE PowerPoint presentation that provided granular detail about the AI tools being deployed and the assumptions being made about the resulting efficiency gains.1

As a veteran of the CFPB’s Office of Regulations, the numbers in the DOGE deck startled me. According to DOGE, revoking a section of federal regulation requires 30 attorney hours and 6 policy analyst hours as a baseline. Those 36 hours included the entire rulemaking process (more on this below), not just the mechanical exercise of deleting the offending regulation. The AI tool would reduce those 36 hours to 2.4 hours — a 93% savings. 
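Taking the deck’s figures at face value, the headline percentage is simple arithmetic. A quick sketch (the variable names are mine, not DOGE’s) confirms the claimed reduction:

```python
# Sanity check of the DOGE deck's claimed efficiency gain:
# a 36-hour baseline (30 attorney hours + 6 policy analyst hours)
# reduced to 2.4 hours per regulatory section with the AI tool.
baseline_hours = 30 + 6      # attorney + policy analyst hours per section
ai_assisted_hours = 2.4      # claimed per-section effort with AI

savings = 1 - ai_assisted_hours / baseline_hours
print(f"Claimed savings: {savings:.0%}")  # → Claimed savings: 93%
```

The math is internally consistent; the question, as discussed below, is whether either the 36-hour baseline or the 2.4-hour target reflects what rulemaking actually involves.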

I attribute the AI tool’s savings estimate to overoptimistic assumptions, and perhaps puffery. But what really stood out was the baseline assumption.

To deconstruct this assumption, let’s take one example — Regulation Z, the implementing regulation for the Truth in Lending Act (TILA).2 One key provision, 12 CFR 1026.36(d)(1), bans companies from tying their loan officer compensation to interest rates and other loan terms.3 In its most recent Unified Rulemaking Agenda, the CFPB announced that it is considering a rescission of this provision, which dates to the agency’s 2013 loan originator compensation rule (I was one of the authors).4 This provision codified certain Dodd-Frank Act changes to TILA and built on the Federal Reserve Board’s existing rules.5 Industry certainly has valid concerns with certain aspects of the rule. But rescinding (rather than merely amending) this provision will raise a variety of challenging policy and drafting issues, including accounting for the fact that, were the provision revoked, the statute — which is far more onerous than the rule — would automatically kick in.

It took a team of seven attorneys and economists working nearly full-time for over a year to write the CFPB rule. A much leaner team can write a revocation rule. Nevertheless, to these eyes,6 assuming the entire endeavor takes only 36 hours total without AI is highly dubious.

When a Proposal Becomes a Rule

To step back, let me briefly pay homage to the famous Schoolhouse Rock video “I’m Just a Bill” and remind readers of how a proposal becomes a final federal regulation.

For an agency to write a legislative rule (the most formal type of agency action), it must do three things. First, publish a Notice of Proposed Rulemaking, or NPRM. (In some cases, the agency starts with an Advance Notice of Proposed Rulemaking (ANPR) or Request for Information (RFI)). Second, solicit and consider public comments. And third, publish a final rule. To revoke a rule, an agency must follow those same processes and also explain why it is pivoting.7  

These may sound like simple steps if you focus only on the technical drafting or deletion of the regulation. In other words, rulewriting can be straightforward, especially if the text of the provision in question is short. Even with lengthier regulations, an AI tool surely can be prompted to produce a redlined version of a regulation that achieves a deregulatory policy objective. A first cut, followed by an attorney review — just as envisioned by the DOGE deck.

Rulewriting, however, is just one component of the larger cross-disciplinary and multi-step process known as rulemaking. And the rulemaking process is governed legally by the Administrative Procedure Act (APA). The APA ensures that agency rulemaking is reasoned, adequately explained, and sufficiently responsive to public feedback. It provides a bulwark against arbitrary and capricious action and against an agency prejudging the outcome. As one seminal case put it, the agency cannot approach a rulemaking process with an “unalterably closed mind.”8

APA compliance, moreover, is just a minimum legal requirement. Agencies must also publish an economic analysis of the costs and benefits of the proposed regulation, and under the Regulatory Flexibility Act,9 they must consider the implications of rules on small entities. Some agencies have additional, discrete statutory requirements for rulemaking. For example, prior to issuing an NPRM for any rule with significant economic impacts on small entities, the CFPB, along with only two other agencies, must convene a “Small Business Review Panel” or “SBREFA” process, through which the CFPB gathers input from small business representatives in conjunction with the Small Business Administration (SBA) and the Office of Management and Budget (OMB).10 The agency must compile that input into a report published jointly with SBA and OMB.  

Another example is how the Federal Trade Commission (FTC) must, under Section 18(b) of the FTC Act, issue an ANPR before releasing a proposal, and for certain rules, follow other procedural requirements under the Magnuson-Moss Warranty Act.11 Failure to comply can be damaging, as seen earlier this year when the Fifth Circuit vacated the FTC’s Combating Auto Retail Scams Trade Regulation Rule (CARS Rule) because the agency hadn’t issued an ANPR.12

Due to these interlocking procedural mandates, agencies engaging in rulemaking must issue multiple deliverables over a period of many years. Depending on the rulemaking’s complexity and scale, each deliverable may be extremely lengthy and contain a multitude of legal, economic, and market analyses. These documents can be a slog to read — and as someone who helped author a dozen CFPB proposed or final rules, I can tell you they are an even bigger slog to draft. 

Deploying AI to Upend the Status Quo of Bureaucratic Quicksand

All this must make your head spin. Even now, almost two years removed from the CFPB and with some distance from my former role, my head spins as well.

The federal rulemaking process can be a laborious, turgid exercise. What’s even worse is that it is all by design. When a rule can take a decade or more to develop, write, and take effect — and that assumes it even passes a legal challenge in an increasingly unsettled administrative law environment — it validates stereotypes about the inefficiency, bureaucratization, and unresponsiveness of government. I am certainly not arguing for perpetuation of the status quo. 

Can deployment of large language models (LLMs) and other AI tools make this byzantine process more efficient? Absolutely! The federal government should do everything possible to streamline the components of the rulemaking process that best lend themselves to streamlining. As a side benefit, using AI will help regulators get more comfortable with the technology and, in turn, better positioned to address the policy challenges posed by AI itself.

Two excellent use cases for agency rulemaking AI deployment are processing NPRM public comments and producing the SBREFA report. AI tools can take the first cut at categorizing and summarizing what may be hundreds of thousands or even millions of public comments (especially as AI starts writing comments). Moreover, in a final rule’s preamble, the agency describes each proposal component and summarizes the comments before offering substantive responses. I worked on the CFPB’s 2017 payday lending rule, which generated over 1.4 million public comments.13 The volume was so overwhelming that an army of contractors was needed to help process the comments. AI tools can relieve the rule team of these rote exercises and allow them to focus on synthesizing the comments and integrating them into the final policy.
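To make the “first cut” idea concrete, here is a deliberately simplified sketch of a comment-triage pipeline. The category names and keyword matching are hypothetical stand-ins of my own invention for the LLM classification an agency would actually deploy; the point is the shape of the task, not the method.

```python
# Toy first-cut triage of public comments. In practice an LLM would do the
# categorization; keyword matching stands in here so the sketch runs on its
# own. Category names and keywords are illustrative, not real agency taxonomy.
from collections import Counter

CATEGORIES = {
    "compliance_cost": ["cost", "burden", "expense"],
    "consumer_protection": ["harm", "protect", "abuse"],
    "small_business": ["small business", "community bank"],
}

def categorize(comment: str) -> list[str]:
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    matched = [cat for cat, kws in CATEGORIES.items()
               if any(kw in text for kw in kws)]
    return matched or ["uncategorized"]

def triage(comments: list[str]) -> Counter:
    """Tally categories across a comment docket for human review."""
    tally = Counter()
    for c in comments:
        tally.update(categorize(c))
    return tally

tally = triage([
    "This rule imposes an unreasonable compliance cost on lenders.",
    "The provision protects consumers from harm.",
    "As a community bank, we lack the staff to comply.",
])
print(tally.most_common())
```

In a real deployment, the categorization step would call an LLM, and the rule team would review the resulting tallies and summaries rather than adopting them wholesale — the first cut, not the final word.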

The SBREFA report is another example. The report is largely non-analytical and follows a standard template. One chapter describes the small business feedback. Another section outlines the SBA industry codes identifying the sectors that may be affected. The report also appends written comments from small entities. AI can handle the bulk of these tasks, with staff freed up to concentrate on the NPRM.

Remember, Though, That We’re Only Human 

Here is what LLMs and other AI tools cannot do at present (and, I would argue, will not be able to do even as they grow more sophisticated): They are not capable of supplanting the internal policy development process — the give-and-take between agency staff and leadership about options, and between the drafting team and the other offices to ensure that the rule adequately incorporates perspectives outside of the office producing the rule. This includes ensuring that institutional knowledge that is never memorialized (and therefore cannot be captured by an LLM), along with insights about potential unintended consequences, is integrated into the process.

Nor can AI replicate the important and dynamic dialogue between agency officials and external stakeholders that takes place before a proposal release and during the ex parte period after the release. Those competing interests need to weigh in and represent themselves directly. While AI tools increasingly can exercise judgment, at this stage they remain prone to hallucinations. Their outputs also reflect the subjective framing of the prompts, which could lead to rules that are based on false premises, incomplete problem statements, or incorrect citations.  

As Kafkaesque as it can be, the federal agency rulemaking process requires fundamentally human interactions and exercises — relationship building, politics, consensus seeking, negotiation, and exercise of policy judgment. Perhaps I underestimate the capabilities of the AI Deregulation Decision Tool or lack sufficient imagination about how emerging AI tools can replicate these human elements. I accept that this article might age like curdled milk and be obsolete in six months. But based on my rulemaking experience, I’m willing to make that bet.

There’s a final, larger context here of which we should not lose sight. The current administration is aggressively pushing these efforts. But going forward, agencies under any administration will be tempted to deploy AI tools to accelerate the rulemaking process. Given our hyperpolarized political environment, and with the Supreme Court having just heard oral argument in a case that may result in overturning the key precedent that has underpinned agency independence for almost 100 years,14 the vicissitudes and leisurely nature of the rulemaking process simply will not be tolerated. Agency leaders who no longer need to seek bipartisan consensus and who operate on politically driven timeframes will demand results quickly. We saw this at the CFPB under the prior administration after the Supreme Court’s 2020 Seila Law decision allowed the President to fire the CFPB director at will. The pendulum has swung hard, and it will swing back hard again. 

Conclusion

Agencies should heed the unintended lessons of Airplane!: Even though Otto could temporarily carry out the pilot duties, midflight and at the end, it was Striker’s very flawed but very human experience that allowed everyone to avert disaster.

With federal rulemaking, AI can help make the process go faster and smoother. It can add tremendous value when deployed smartly. But AI is not an elixir. It shouldn’t be overhyped as the cure-all for everything that ails the government. And it can’t replace the many elements of the process that require fundamentally human interaction and judgment. Assuming that AI can do that will result in agencies producing deliverables that are sloppy, thinly reasoned, and lacking in rigor and appreciation of the nuances of public input. Garbage in, and garbage out. And even if agencies in a hurry to enact their policy agendas are willing to sacrifice quality for expediency, judges evaluating inevitable APA claims will not be so generous.15 This may be especially true at the CFPB, where AI may be viewed as the way to resolve the paradox of powering up the agency’s policy agenda while simultaneously depriving it of funding and staff.

In the end, good government is not achieved by compromising the integrity of the rulemaking process. It happens by figuring out how and when to deploy AI responsibly and thoughtfully, while continuing to recognize the value of humans to the process.

The opinions shared in this article are the author’s own and do not reflect the views of any organization they are affiliated with.

[1] Wired also reported in April about another tool called SweetREX being deployed for similar purposes.

[2] 15 U.S.C. 1601 et seq.  

[3] 12 CFR 1026.36(d)(1).

[4]  CFPB, Agency Rule List - Spring 2025, “Loan Originator Compensation Requirements under the Truth in Lending Act: Rescission,” available at this link.

[5] 78 FR 11280 (Feb. 15, 2013) (final CFPB rule); 15 U.S.C. 1639b (section 1403 of the Dodd-Frank Act); 75 FR 58509 (Sept. 24, 2010) (final Federal Reserve Board rule).

[6] As hilarious as nearly every scene in Airplane! is, this scene from Superbad is even funnier.

[7] This article assumes the APA applies to federal agency actions as it traditionally has done for decades. The Trump Administration has asserted authority, through a September 2025 Executive Office of the President Memo implementing Executive Order 14192 (“Unleashing Prosperity Through Deregulation”) and an April 2025 Presidential Memorandum (“Directing the Repeal of Unlawful Regulations”), that agencies are to “aggressively and quickly withdraw regulations that are facially unlawful in light of recent Supreme Court precedent” and has provided guidance to agencies to use the APA’s “good cause” exception to bypass notice-and-comment processes when such situations occur. Of course, the decision as to whether individual regulations are, in fact, unlawful under Supreme Court precedent involves subjective legal judgment which, in my view, should be tested under the APA’s traditional processes. Some commentators have argued that this step by the administration will “end[] public input into the most disputed, controversial and consequential federal rules.”

[8] See Ass’n of Nat. Advertisers, Inc. v. FTC, 627 F.2d 1151, 1171 (D.C. Cir. 1979).

[9] 5 U.S.C. 601 et seq.

[10] See generally Small Business Regulatory Enforcement Fairness Act of 1996, or SBREFA, Pub. L. 104-121, codified at 5 U.S.C. 601-612.

[11] Magnuson-Moss Warranty—Federal Trade Commission Improvement Act, 15 U.S.C. §§ 2301–2312.

[12] Nat’l Auto. Dealers Ass’n v. FTC, 127 F.4th 549 (5th Cir. 2025).

[13] 81 FR 47864 (July 22, 2016), Docket No. CFPB-2016-0025-0001.  

[14] The case before the Supreme Court involves President Trump’s firing of FTC Commissioner Rebecca Slaughter. In addition to Slaughter, President Trump has fired without cause Democratic commissioners or board members on the National Credit Union Administration, the National Labor Relations Board, and other agencies even though by statute they can be fired only for cause. The administration has been transparent that these actions reflect their position that Humphrey’s Executor, the seminal 1935 Supreme Court opinion protecting agency independence, should be overruled. The Supreme Court has signaled receptiveness to the administration’s arguments by rejecting, via the “shadow docket,” the terminated members’ efforts to stay in their positions while the appeals are decided.

[15] Of course, even judges are not immune from the temptations of taking shortcuts utilizing AI.

