One Rulebook or Legal Chaos: Trump's AI Executive Order Promises Clarity but Delivers Confusion

  • Writer: Yoshi Soornack
  • 8 min read

The attempt to override state regulations might backfire spectacularly for project delivery leaders navigating AI governance. On 11 December 2025, President Trump signed an executive order that Silicon Valley had been lobbying for and state regulators had been dreading. Flanked by AI advisor David Sacks, Commerce Secretary Howard Lutnick, and Senator Ted Cruz in the Oval Office, Trump declared there would be only "one rulebook" for AI regulation in America. No more navigating 50 different state regimes. No more patchwork of conflicting requirements. Just one clear federal standard.

Except there isn't one. And there might not be for years.


What Trump actually delivered is a declaration of intent to create a federal framework, eventually, through Congress, maybe. In the meantime, the executive order arms the administration with tools to challenge existing state laws through litigation, threaten federal funding cuts, and pressure states into compliance. Welcome to regulatory limbo, where the old rules might not apply but the new ones don't exist yet. For project delivery leaders implementing AI systems across multiple states, clarity would be welcome. What they got instead is chaos with a presidential seal.





What the Order Actually Does

Let's cut through the rhetoric and examine the mechanics. The executive order directs three immediate actions: creating an AI Litigation Task Force within 30 days to sue states over their AI laws, tasking the Commerce Secretary with identifying state laws that conflict with federal AI policy, and conditioning federal broadband funding on states either not enacting conflicting AI laws or agreeing not to enforce existing ones.


The legal theory rests on the Commerce Clause, the constitutional provision giving Congress authority to regulate interstate commerce. As David Sacks argued, AI is "the type of economic activity that the Framers of the Constitution intended to reserve for the federal government to regulate." By this logic, because AI companies operate nationally, individual states shouldn't be able to impose their own rules.


The problem? That same logic would invalidate most state product safety laws, consumer protection regulations, and professional licensing requirements. As Mackenzie Arnold from the Institute for Law and AI pointed out, almost all state regulations affect companies that sell goods nationally. If the Commerce Clause prevents states from regulating AI, it would prevent them from regulating practically anything.


More importantly, the executive order lacks the force of law. As multiple legal experts told NPR, the administration cannot restrict state regulation without Congress passing legislation. The order can direct federal agencies to file lawsuits and threaten funding cuts, but it can't actually preempt state authority. That would require an act of Congress.

Which brings us to the uncomfortable truth: Congress has already rejected this approach. Twice.


Congress Already Said No

The Trump administration's first attempt to block state AI regulations came in July 2025, when Republicans tried to insert a 10-year moratorium on state AI law enforcement into the domestic policy reconciliation bill. The Senate voted nearly unanimously to remove it before the bill passed. Strike one.


The second attempt came just weeks before this executive order, when Republicans pushed to include AI preemption language in the National Defense Authorization Act. Despite intense pressure from the White House and the tech industry, congressional leaders declined to add the provision. Strike two.


The executive order represents strike three, attempting through executive action what Congress has twice refused to authorise through legislation. As Brad Carson from Americans for Responsible Innovation bluntly stated, the order will "hit a brick wall in the courts." The prediction isn't controversial. Constitutional law professors across the political spectrum agree the administration is on shaky legal ground.


The Global Context Matters More

At least Trump's order brings clarity to America's position, even if that position is "we don't want states regulating AI." In the global race for AI dominance, having any coherent national framework matters more than having none. But the executive order's actual effect is to freeze American policy in a holding pattern while litigation plays out and, maybe, eventually Congress acts.


What project delivery actually needs is global AI regulation. Wishful thinking? Absolutely. But it's the only way organisations can build AI systems sensibly and responsibly across borders. Instead, we're getting fragmentation in every direction. The EU has GDPR and the AI Act. China has its own regulatory regime. The US can't even agree between Washington and Sacramento, let alone with Brussels or Beijing.


The result is that major programmes implementing AI face an impossible matrix of compliance requirements that vary by jurisdiction, change constantly, and often conflict directly. The executive order doesn't solve this. It just adds another layer of uncertainty while the administration figures out whether it can actually do what it's threatening to do.



California's Governor Gavin Newsom, a Democrat and vocal critic of the president, accused Trump of bowing to the interests of tech allies with the executive order. "Today, President Trump continued his ongoing grift in the White House, attempting to enrich himself and his associates, with a new executive order seeking to preempt state laws protecting Americans from unregulated AI technology," he said.

What States Are Actually Regulating

To understand why the Trump administration is so keen to override state authority, look at what states have actually passed. California's law banning "algorithmic discrimination" requires AI models to avoid producing differential treatment or impact on protected groups. Colorado has enacted transparency requirements for automated decision systems.


Several states have passed legislation protecting children from AI harms. Others are working on deepfake regulations and employment discrimination protections.

The executive order explicitly exempts specific categories from federal override: child safety protections, data centre infrastructure regulation, state procurement of AI services, and a few other areas. But the Commerce Secretary is tasked with identifying laws that "require AI models to alter their truthful outputs" or mandate disclosure in ways that "would violate the First Amendment or any other provision of the Constitution."


Here's where it gets interesting. Take California's algorithmic discrimination law. If an AI model produces outputs that disproportionately affect protected groups, does requiring the model to adjust those outputs constitute forcing it to "alter truthful outputs"? The administration appears to think yes. States argue that preventing discrimination isn't the same as mandating falsehood. Courts will spend years sorting this out.


Meanwhile, project delivery teams implementing AI for hiring decisions, credit scoring, healthcare triage, or any other high-stakes application face a choice: comply with state laws that might be challenged, or ignore them and risk enforcement. Neither option is good.


The Child Safety Exception Nobody Trusts

David Sacks emphasised during the signing ceremony that the administration will not push back on state-level regulation around child safety and AI. This exemption was politically necessary; even Trump's strongest supporters baulk at appearing anti-child protection. But it's also nearly meaningless in practice.


Michael Toscano from the Family First Technology Initiative, a conservative think tank, called the executive order "a huge lost opportunity" for the Trump administration to lead on protecting children from AI harms. Organisations working on bipartisan child safety legislation watched the executive order process unfold with growing alarm, excluded from consultations that centred almost entirely on industry concerns.


The problem with the child safety exemption is definitional. What counts as child safety regulation versus general AI safety regulation? If a state requires AI systems to detect and prevent the distribution of child sexual abuse material, that's clearly covered. But what about requirements that AI tutoring systems be designed with child development principles? Restrictions on AI-powered advertising targeting minors? Or mandatory parental controls on AI assistants?


The executive order doesn't define the boundaries, which means more litigation, more uncertainty, and more risk for anyone trying to build compliant systems.


Federal Funding as Leverage

Perhaps the most immediate impact of the executive order involves federal funding conditions. The Commerce Secretary must issue a policy notice within 90 days outlining the eligibility conditions for states to receive the remaining Broadband Equity, Access and Deployment (BEAD) funding to expand internet access. The notice must describe how "a fragmented State regulatory landscape for AI threatens to undermine BEAD-funded deployments."


This creates a direct financial incentive for states to either not enact AI laws or not enforce them. The executive order also directs federal agencies to assess discretionary grant programmes and determine whether they can condition grants on states either not enacting conflicting AI laws or entering binding agreements not to enforce existing laws during the grant period.


States face a choice: maintain AI regulations and lose federal funding, or abandon them and keep the money. It's classic coercive federalism, using the power of the purse to achieve policy outcomes that can't be achieved through legislation. The Supreme Court has set some limits on this practice, ruling that conditions on federal funding can't be so coercive that they effectively commandeer state policy. Whether AI regulation crosses that line will be litigated for years.


In the meantime, state budget offices will be calculating whether holding firm on AI regulation is worth losing hundreds of millions in broadband funding. The answer will vary by state, creating even more fragmentation as some states cave and others hold out.


What Happens Next

The executive order kicks off a multi-year process that will create far more uncertainty than it resolves. The AI Litigation Task Force will identify state laws to challenge, file suits, and begin working through district courts, appeals courts, and potentially the Supreme Court. Some cases will take five years or more to resolve. Others might never reach a final judgment.


The Commerce Secretary will identify and evaluate existing state laws, a process requiring analysis of legislation across all 50 states and hundreds of localities. Some laws will be obvious targets; California's algorithmic discrimination requirement tops that list. Others will fall into grey areas requiring legal judgment calls. The resulting report will be challenged by states, creating another layer of litigation.


Congress might eventually pass legislation creating the federal framework the executive order references, but lawmakers have shown little appetite for moving quickly on AI policy. The executive order directs the White House to prepare legislative recommendations, but recommendations aren't votes. With Congress divided and AI policy cutting across traditional partisan lines, comprehensive federal legislation could be years away.


Most importantly, states have already signalled they'll keep passing AI laws regardless of the executive order. As Stateline reported, state lawmakers of both parties plan to continue developing AI regulations. They argue that waiting for federal action means waiting indefinitely, while deployed AI systems are already causing real harms that need to be addressed now.


The Project Delivery Nightmare

For project delivery leaders, this executive order creates a compliance nightmare. Existing state regulations might be challenged, but they remain on the books until courts rule. New federal standards don't exist yet and might not for years. In the interim, you're navigating:

  • State laws that might be unconstitutional but are currently enforceable.
  • Federal threats to sue states, with unpredictable outcomes.
  • Funding conditions that might force some states to abandon their regulations.
  • Ongoing state legislative activity creating new requirements.
  • No clear federal framework to provide certainty.


The result is regulatory chaos where the only certainty is uncertainty. Major programmes implementing AI across multiple states face impossible choices about which jurisdictions to prioritise, which requirements to follow, and how much risk to accept.


The rational response might be to pause AI deployment until the dust settles. But that could mean waiting half a decade while courts work through these questions. Meanwhile, organisations that move forward accept significant compliance risk. Those who hold back sacrifice competitive advantage. There's no good option.


The Global Regulatory Void

Stepping back, the executive order highlights how far we are from coherent global AI governance. The technology is borderless. Regulations are fragmented. Companies operate across jurisdictions. Projects span continents. But there's no international framework, no global standards, and no prospect of achieving them.


The EU's AI Act creates requirements for companies operating in European markets. China's regulations govern AI deployed in its territory. The US can't agree on whether states or the federal government should regulate AI, let alone what those regulations should say. Meanwhile, AI development accelerates, capabilities expand, and deployment outpaces governance everywhere.


For project delivery, this means building compliance frameworks that can adapt to multiple jurisdictions, change constantly, and accommodate conflicting requirements. It means accepting that perfect compliance might be impossible and that risk management becomes paramount. It means that every major AI implementation requires legal teams, compliance specialists, and regulatory experts embedded in project governance from the start.
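To make that concrete, here is a minimal sketch of what a jurisdiction-aware compliance register might look like in practice. Everything in it is hypothetical: the jurisdictions, rule names, statuses, and use cases are illustrative placeholders rather than statements of any state's actual law, and a real framework would sit on top of proper legal review, not a Python list.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    ENFORCEABLE = "enforceable"          # on the books and currently enforced
    UNDER_CHALLENGE = "under challenge"  # still binding, but subject to litigation
    EXEMPT = "exempt"                    # carved out of federal override

@dataclass
class Requirement:
    jurisdiction: str
    name: str
    status: Status
    applies_to: set = field(default_factory=set)  # use cases, e.g. {"hiring"}

# Hypothetical register -- entries are illustrative only.
REGISTER = [
    Requirement("CA", "algorithmic discrimination rules", Status.UNDER_CHALLENGE, {"hiring", "credit"}),
    Requirement("CO", "automated decision transparency", Status.UNDER_CHALLENGE, {"hiring"}),
    Requirement("XX", "child safety protections", Status.EXEMPT, {"minors"}),
]

def requirements_for(deployment_states, use_case):
    """Return the requirements a deployment must track, flagging litigation risk."""
    hits = [r for r in REGISTER
            if r.jurisdiction in deployment_states and use_case in r.applies_to]
    for r in hits:
        flag = " (litigation risk)" if r.status is Status.UNDER_CHALLENGE else ""
        print(f"{r.jurisdiction}: {r.name}{flag}")
    return hits

requirements_for({"CA", "CO"}, "hiring")
```

The one design choice worth noting: requirements marked "under challenge" are still returned as binding, mirroring the point above that challenged state laws remain enforceable until courts rule.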


The executive order promised clarity. What it delivered is confusion with extra steps. The only certainty is that AI regulation in America will be tied up in litigation for years while Congress debates legislation that might never pass. States will keep passing laws. The administration will keep challenging them. Courts will issue conflicting rulings. And project delivery teams will be left trying to navigate a regulatory landscape that makes sense to nobody. At least that brings one kind of clarity: knowing we're on our own.


Navigating regulatory uncertainty requires staying informed. Subscribe to Project Flux for analysis that cuts through policy chaos and focuses on what project delivery leaders actually need to know.