We’ve seen this before: fear, overregulation, and political hubris at the dawn of an economic breakthrough. Will we learn from history — or repeat its mistakes?
In every era of transformation, one thing never changes: politicians panic.
When the agricultural revolution spread through Europe and into the early American colonies, it upended social orders. Elites feared farmers with new tools would abandon the land. During the Industrial Revolution, steam power and factory machines were labeled threats to jobs and morality. And in the digital age, the rise of the internet was met with a flurry of federal blue-ribbon panels warning about everything from pornography to “cyberspace addiction.”
Now comes artificial intelligence — the next great leap in human productivity — and the cycle is repeating itself. Only this time, the stakes are global, and the timeline is faster.
Last week, the Trump administration released its AI Action Plan — a significant course correction from the Biden years that scraps top-down federal control and embraces innovation through deregulation, investment, and infrastructure. It’s one of the most pro-market approaches to AI policy we’ve seen from any Western government.
However, while the plan’s vision is mostly correct, it lacks a crucial protection: a federal moratorium on state-level AI regulation. That omission leaves the door wide open for 50 different states to suffocate innovation under 50 different bureaucratic regimes.
This Moment Is America’s to Lose
Artificial intelligence isn’t a robot uprising — it’s advanced computing, a continuation of decades of machine learning, data science, and automation. Like past revolutions, it is a general-purpose technology with a massive upside: from medical diagnostics to precision agriculture, logistics, and personalized education.
This is the beginning of a transformation on par with the printing press, the steam engine, and the internet. Yet like every leap forward, it’s greeted with a mix of excitement and fear. And fear tends to invite government overreach.
Entrepreneurs don’t know exactly what AI will look like ten years from now — but they have vastly more insight than politicians ever will, because they live in the feedback loops of real-time trial and error. Markets discover. Governments delay.
Just look at where capital is going: AI startups raised $104 billion in the first half of 2025 alone — more than in any full year prior. Guggenheim analysts expect even larger gains ahead as enterprise adoption continues to soar.
This may not be a bubble but a boom. And America is positioned to lead — if we don’t regulate ourselves into stagnation.
Can Washington Rein In the States?
Some may question whether the federal government has the constitutional authority to stop states from regulating AI. The answer is yes — when interstate commerce is clearly involved, as it is with nearly every AI tool, system, and application. From cloud infrastructure to multi-state model deployment to international data flows, AI is not a local matter. It’s a global one.
The US Constitution empowers Congress to “regulate Commerce… among the several States,” and the courts have long upheld federal preemption in nationally integrated markets. In Gibbons v. Ogden (1824), the Supreme Court made it clear: when a state law interferes with the free flow of interstate commerce, the federal government has both the right and duty to act.
As James Madison wrote in Federalist No. 42, the Commerce Clause was essential to “guard against the many practices… which have hitherto embarrassed the intercourse of the States.” That applies perfectly to today’s AI patchwork.
While states have roles to play, they should not be left to erect legal walls around innovation or undercut national policy through fear and overreach. A temporary moratorium, while aggressive, would be both constitutional and necessary to ensure America doesn’t fumble the biggest economic opportunity in a generation.
The Trump Plan Is Pro-Growth, but the States Are a Risk
Fortunately, Trump’s new Executive Order 14179 repealed Biden’s restrictive EO 14110, which had empowered bureaucrats to embed fairness checks and ideological audits in AI tools. That approach mirrored the EU’s bloated AI Act — and would have ensured US developers got bogged down in red tape while China continued to advance.
The new federal plan rejects that path. It commits to:
- Cutting permitting delays for AI infrastructure and semiconductor fabs
- Encouraging open-weight AI models to foster competition
- Expanding workforce education and employer training with fewer tax penalties
- Prioritizing innovation over precaution
This is the right vision. But without a preemptive strike against state overreach, it may be impossible to implement in practice.
The House-passed version of Trump’s One Big Beautiful Bill (OBBB) included a federal moratorium on new state AI regulations. That language was dropped by the Senate before final passage. The expected result is chaos.
State Capitols Are Legislating Blind
In 2025 alone, more than 1,000 AI-related bills were introduced across all 50 states — up from nearly 700 in 2024, according to the James Madison Institute. The National Conference of State Legislatures confirms that dozens of those have already become law.
Here’s the problem: these aren’t coherent guardrails — they’re preemptive policy panic.
- California wants every AI output evaluated for “equity harms,” whatever that means.
- New York is pushing for AI licensing boards and pre-release approvals.
- Even Texas, which often leads in free-market policy, passed HB 149 (TRAIGA), creating a new bureaucracy to oversee so-called “high-risk” AI applications. It’s better than where it started, but still opens the door to creeping state control.
This is not just regulatory noise — it’s a threat to the scalability of American innovation. It fragments compliance and deters investment, particularly in open-source or startup environments.
Don’t Let History Repeat
Every time a new tool comes along that threatens old structures, lawmakers feel the need to “do something.” But as Milton Friedman taught us, the government solution to a problem is usually as bad as the problem, and frequently makes it worse.
In truth, the best response to AI fear may be no response at all — at least not yet. We already have laws on the books to address fraud, discrimination, theft, and safety. We don’t need to build new bureaucracies to police speculative harms that may never materialize.
The biggest risk is not that AI goes rogue. It’s that our political class chokes off its development with regulatory hubris.
The EU is already moving in that direction. And while China may look fast on the surface, it’s doing so through central planning and repression, which ultimately stifles the kind of open innovation that gave the world the microchip and the internet.
America can still lead. But it must do so by trusting markets, not mandates.
Let Parents and Entrepreneurs Lead
Rather than preemptively outlawing AI tools in classrooms or forcing private businesses to submit models for approval, we should let parents, workers, students, and entrepreneurs decide what works best for them.
We don’t need governors and attorneys general positioning themselves as AI overlords to score political points. We need an environment where knowledge creation is decentralized, experimentation is encouraged, and failure is an integral part of the process.
Just as in the past, those who fear the new are demanding power over it. But history tells us: the real danger is not the technology — it’s the legislation that follows fear.
Give Innovation a Fighting Chance
The Trump administration’s AI Action Plan is a welcome course correction. It prioritizes innovation over regulation, removes ideological roadblocks, and trusts the market to do what it does best — discover, adapt, and grow.
But unless Congress follows through with a moratorium on new state AI laws, this moment of opportunity will collapse under a pile of conflicting mandates and political micromanagement. We can’t lead the world while tripping over our own red tape.
We’ve seen this before. Every great economic revolution — agriculture, industry, technology — was nearly smothered by fear and top-down control. We can’t afford to make the same mistake with artificial intelligence.
Let’s stop pretending politicians know what’s coming next. They don’t. Entrepreneurs have a better shot — not because they’re perfect, but because they’re accountable to reality, not to reelection.
Congress should act now. Delay the deluge of state AI regulation. Let existing laws do their job. And give this generation’s innovators the space to build the future. That’s how America wins the AI race — not with more government, but with more freedom.