Trump’s Call for Federal AI Standards: How State-by-State Regulation Threatens U.S. Innovation and Global Competitiveness
President Donald Trump has reignited a critical debate over artificial intelligence regulation in America, warning that the current patchwork of state-level AI laws threatens to undermine U.S. innovation and hand China a competitive advantage in the global AI race. In a forceful Truth Social post this week, Trump urged a single federal standard for AI oversight that blocks states from implementing their own divergent regulatory frameworks.
“Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World — But overregulation by the States is threatening to undermine this Major Growth Engine,” Trump declared, advocating for federal preemption instead of what he characterized as a patchwork of 50 different state regulatory regimes. The president’s warning was stark: “If we don’t, then China will easily catch us in the AI race.”
Update – November 20, 2025: The Trump administration’s push for federal AI preemption has escalated dramatically with a draft executive order that would deploy the full weight of the federal government against state AI regulations (see here for a copy of the draft). The leaked six-page draft order, titled “Eliminating State Law Obstruction of National AI Policy,” directs Attorney General Pam Bondi to establish an AI Litigation Task Force at the Department of Justice with the sole mission of challenging state AI laws on constitutional grounds, including violations of the Commerce Clause and federal preemption doctrines. The order would also authorize federal agencies to withhold funding from states maintaining AI regulations deemed inconsistent with federal policy, and it directs the Federal Trade Commission to issue guidance on how its unfair and deceptive practices authority could preempt state measures.

Simultaneously, House Republican leadership is reportedly preparing to insert a five-year moratorium on state AI laws into the must-pass National Defense Authorization Act, reviving a proposal that the Senate rejected 99-1 earlier this year. The dual-track approach has triggered immediate opposition from California’s Privacy Protection Agency, which issued a forceful statement warning that the proposals “would rob millions of Californians of rights they already enjoy, leaving consumers vulnerable during a time of rapid technological change.” CPPA Executive Director Tom Kemp argued that California has proven “it is possible to support innovation while providing consumers with critical privacy protections,” emphasizing that states must retain authority to address evolving privacy threats.

The constitutional viability of an executive order preempting state law remains uncertain, as such preemption typically requires congressional action rather than executive fiat, setting up what could become a major federalism battle in the federal courts.
David Sacks: Leading the AI Policy Portfolio
The administration’s AI agenda is being spearheaded by David Sacks, whom Trump appointed as the nation’s first “AI & Crypto Czar” in December 2024. A venture capitalist, former PayPal executive, and member of the so-called “PayPal Mafia,” Sacks is guiding federal policy on artificial intelligence and cryptocurrency, working to make America the clear global leader in both emerging technologies. His appointment signaled a decisive shift toward a pro-innovation, light-touch regulatory approach that prioritizes industry collaboration over government mandates.
Sacks brings both Silicon Valley credentials and hands-on AI experience to the role, having launched Glue, an AI-powered workplace communication app, earlier in 2024. His philosophy emphasizes open ecosystems, minimal regulatory barriers for startups, and ensuring that most internet content remains available for AI training purposes under fair use principles.
The Push for Federal Preemption: A Six-Month Saga
Trump’s call represents the latest chapter in an intense legislative battle that has consumed Washington for months. In June, House Republicans passed a bill that would have barred states from passing or maintaining any new or existing AI laws for a full decade. Senate leaders subsequently watered down the proposal to merely block states from receiving federal tech funding if they regulated AI. As negotiations continued, the moratorium was reduced from 10 years to five. Finally, in July, the Senate voted 99-1 to strip the moratorium entirely, with members concluding the issue needed further deliberation.
Now, House Majority Leader Steve Scalise has announced that Republican leadership is actively exploring adding AI preemption language to the National Defense Authorization Act, a common legislative vehicle for controversial policy riders. One version of the bill under discussion would make AI companies exempt from state laws for up to five years if they agree to federal standards around transparency and child safety.
The Case for Federal Standards: Innovation vs. Fragmentation
The administration’s argument centers on economic competitiveness and regulatory efficiency. Leading tech companies, including OpenAI and Anthropic, have endorsed federal AI frameworks over state-level regulation, citing the impossibility of complying with 50 different regulatory regimes simultaneously. Nvidia CEO Jensen Huang has publicly argued that China’s streamlined regulation gives Beijing an advantage over the U.S. in the global AI race.
For businesses operating across multiple states, the current landscape has become increasingly untenable. Companies already face:
- Divergent definitions of “automated decision tools” and AI systems
- Conflicting notice and transparency requirements
- Different approaches to AI-assisted hiring and employee monitoring
- Expanding obligations around bias testing and documentation
- Inconsistent timelines for compliance implementation
The fragmentation creates particularly acute challenges for startups and mid-sized companies that lack the legal resources to navigate multiple overlapping compliance frameworks simultaneously.
State Regulation: A Rapidly Expanding Patchwork
Despite federal preemption efforts, state and local AI regulation continues to proliferate at an accelerating pace. Recent developments include:
- Colorado implemented comprehensive AI legislation requiring algorithmic impact assessments and disclosure mandates for high-risk AI systems
- California has enacted security disclosure requirements for large AI developers and transparency obligations for AI-generated content
- New York City pioneered bias audit rules for automated employment decision tools
- Illinois mandates notice requirements for AI use in workplace contexts
- Virginia, having recently shifted to Democratic control, is poised to advance new AI regulation in its next legislative session
Dozens of additional “Automated Decision-Making Technology” (ADMT) bills remain pending in state legislatures nationwide, each with unique definitions, requirements, and enforcement mechanisms.
The Privacy Law Parallel: Lessons from CCPA, GDPR, and State Fragmentation
The debate over AI regulation mirrors an ongoing struggle over privacy law that offers instructive lessons. The United States has developed a similarly fractured privacy regulatory landscape, with California’s CCPA leading the way, followed by state comprehensive privacy laws in Virginia, Colorado, Connecticut, Utah, and numerous other states, each with distinct definitions, consumer rights, business obligations, and enforcement provisions.
This state-by-state patchwork has created significant compliance burdens for businesses operating nationally, requiring companies to:
- Maintain separate consent mechanisms for different jurisdictions
- Navigate conflicting definitions of “personal information” and “sensitive data”
- Comply with varying data subject access rights and deletion requirements
- Implement different opt-out mechanisms and privacy notice formats
- Track constantly evolving enforcement guidance from multiple state attorneys general
While federal privacy legislation has repeatedly stalled in Congress, many businesses have advocated for a single national privacy standard that would preempt state laws, similar to what the Trump administration is now proposing for AI regulation.
The interplay between privacy and AI regulation is particularly significant because AI systems fundamentally depend on data processing. Many state privacy laws already impose restrictions on automated decision-making, algorithmic profiling, and sensitive data usage that directly impact AI development and deployment. As AI regulation debates intensify, privacy considerations are likely to become increasingly central to the discussion, potentially creating momentum for comprehensive federal action on both fronts simultaneously.
Europe’s Regulatory Retreat: The EU’s Competitiveness Reckoning
Intriguingly, even as the Trump administration pushes for lighter-touch federal AI standards, the European Union, long considered the global standard-bearer for stringent tech regulation, is now reconsidering its approach amid growing concerns that excessive regulation is stifling innovation and economic growth.
In November 2025, the European Commission unveiled the “Digital Omnibus,” a package of reforms that could significantly reshape the General Data Protection Regulation (GDPR), the AI Act, and ePrivacy rules. The plan is presented as a way to simplify compliance and reduce bureaucracy for small and medium-sized companies, following a report by former Italian Prime Minister Mario Draghi warning that Europe’s complex laws are stifling innovation and holding the region back in global competition with the US and China.
The proposed changes are substantial and controversial. The Digital Omnibus introduces clarifications on the lawful use of personal data for AI training under legitimate safeguards and streamlined obligations for low-risk data processing, responding to long-standing industry concerns about legal uncertainty around AI training datasets.
European Commission Executive Vice President Henna Virkkunen met with top U.S. tech executives in May to pitch a more business-friendly Europe and highlight plans to simplify digital rules. The rollback has drawn sharp criticism from privacy advocates, with concerns that Brussels is prioritizing competitiveness over citizens’ fundamental rights.
Yet the EU’s shift reveals a broader recognition that overly burdensome regulation can hobble technological advancement and economic dynamism. The very region that exported comprehensive privacy regulation globally through the GDPR is now grappling with whether its approach has become counterproductive, a cautionary tale that supports the Trump administration’s argument for avoiding state-by-state AI regulatory fragmentation in the United States.
Opposition and Political Divisions
Not all policymakers support federal preemption of state AI laws. Florida Governor Ron DeSantis warned that overriding state authority would serve as a “subsidy to Big Tech” and “prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights and data center intrusions on power/water resources.”
The National Conference of State Legislatures has vocally opposed federal preemption, arguing that states must retain their traditional authority to protect residents through targeted legislation on discrimination, consumer protection, and public safety. The Senate’s July vote blocking AI preemption cited concerns that the measure could thwart attempts to implement child safety and copyright controls on emerging technology.
Progressive lawmakers like Senator Elizabeth Warren have raised concerns about potential conflicts of interest, questioning whether federal AI policy may ultimately benefit large tech companies at the expense of competition and consumer protection.
What This Means for Businesses and Legal Compliance
For companies deploying AI systems, the current regulatory uncertainty demands proactive governance regardless of how federal preemption efforts resolve:
1. Comprehensive AI Inventory and Risk Assessment
Organizations must catalog all AI tools currently in use, with particular attention to high-risk applications, including:
- Hiring, promotion, and termination systems
- Employee performance monitoring and productivity scoring
- Predictive scheduling algorithms
- Customer-facing chatbots and recommendation engines
- Fraud detection, housing, and credit decisioning tools
2. Multi-Jurisdictional Compliance Strategy
Until federal standards emerge, businesses should prioritize compliance with the most stringent state requirements, particularly in California, Colorado, Illinois, New York, and Virginia. A “comply with the highest standard” approach reduces the risk of parallel state enforcement actions.
3. Bias Testing and Algorithmic Auditing
Even if federal preemption passes, workplace discrimination and civil rights protections are likely to remain enforceable at the state level. Companies should proactively engage in AI governance to establish defensible testing protocols, documented decision-making rationales, and data retention policies.
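As a first-pass screen in such a testing protocol, practitioners often compare selection rates across groups under the EEOC’s “four-fifths” guideline, flagging any group whose rate falls below 80% of the highest group’s rate. The sketch below illustrates the arithmetic only; the group labels and counts are hypothetical, and real audits involve far more rigorous statistical analysis.

```python
# Illustrative four-fifths ("80%") rule screen. Group names and counts
# are hypothetical; this is a first-pass check, not a full bias audit.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def adverse_impact_ratio(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are conventionally flagged for closer review."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes: {group: (selected, total applicants)}
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
ratios = adverse_impact_ratio(rates)

for group, ratio in ratios.items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} {status}")
```

Retaining the inputs and outputs of screens like this, alongside the documented rationale for any follow-up, is the kind of record-keeping that supports a defensible governance posture.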
4. Vendor Contract Review
AI service agreements, including data processing agreements (DPAs), should explicitly address:
- Training data provenance and intellectual property rights
- Audit rights and ongoing monitoring obligations
- Liability allocation for discriminatory or erroneous outputs
- Adherence to recognized frameworks like NIST’s AI Risk Management Framework
- Compliance with evolving state and federal requirements
5. Cross-Functional AI Governance
Effective AI compliance requires coordination across legal, human resources, information technology, security, and business operations teams. Organizations demonstrating intentional, documented governance will be better positioned to defend against enforcement actions and demonstrate good-faith compliance efforts.
Looking Ahead: Three Possible Scenarios
As the legislative battle unfolds, three primary outcomes appear most likely:
Scenario 1: NDAA Attachment: The AI preemption language could be added to the must-pass National Defense Authorization Act, resulting in a shorter moratorium (likely five years rather than ten) with possible carve-outs for discrimination, child safety, and copyright protections. The focus would emphasize national competitiveness and security considerations.
Scenario 2: Standalone Federal AI Legislation: A narrower preemption bill could advance separately, potentially incorporating outcome-focused regulation, regulating AI impacts rather than AI tools themselves, and requiring compromise on labor market protections and civil rights enforcement.
Scenario 3: Continued State-Level Expansion: If Congress fails to act despite Trump administration pressure, expect a 2026 surge in state AI legislation modeled on California and Colorado’s frameworks, with expanding requirements across hiring, healthcare, insurance, and financial services sectors.
The Role of Legal Counsel
Navigating this evolving landscape requires sophisticated legal guidance that bridges technology law, privacy compliance, employment regulation, and administrative procedure. As both federal and state AI frameworks continue developing, businesses need counsel who can:
- Interpret emerging regulatory requirements across multiple jurisdictions, supported by RICHTPOLICY
- Conduct AI compliance assessments and risk analyses
- Draft and negotiate vendor agreements with appropriate protections
- Develop defensible AI governance programs and documentation
- Represent clients in regulatory investigations and enforcement actions
- Advise on strategic AI deployment decisions with compliance implications
At RICHT, our AI and technology practice provides comprehensive guidance on artificial intelligence compliance, algorithmic accountability, automated decision-making regulation, and the intersection of AI with privacy, marketing, and telecommunications law. Whether you’re deploying AI systems, developing AI products, or facing regulatory scrutiny, we offer the experience needed to navigate this rapidly evolving landscape.
Conclusion: The Stakes for American Innovation
President Trump’s call for federal AI standards reflects a fundamental tension in American governance: balancing innovation and competitiveness against consumer protection and civil rights enforcement. The outcome of this debate will profoundly shape not only the artificial intelligence industry but the broader digital economy for years to come.
For businesses, the message is clear: regardless of whether federal preemption succeeds, AI governance and compliance cannot wait. The companies that invest now in comprehensive AI risk management, transparent algorithmic decision-making, and proactive legal compliance will be best positioned to thrive in whatever regulatory framework ultimately emerges.
The AI race isn’t just about technology; it’s about regulatory architecture, institutional capacity, and the ability to foster innovation while maintaining public trust. As President Trump put it starkly, America’s continued leadership may depend on getting this balance right.