Federal and State Tensions Over AI Regulation Intensify
Dec 12, 2025
In the absence of federal guidelines, several states, including Texas and West Virginia, have begun implementing their own AI regulations. A notable development occurred in California, where Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (S.B. 53), aimed at mitigating catastrophic risks associated with AI technologies. New York has since enacted similar legislation, which likewise seeks to address concerns about AI systems potentially evading control and causing significant harm.
The push for state-level regulation has faced resistance from congressional Republicans and the AI lobby, who are advocating for a federal override of state laws. State Senator Scott Wiener, a proponent of California's S.B. 53, criticized these preemption efforts as persistent and unwarranted. Previous attempts to include such preemption in federal legislation have failed, with a recent proposal being overwhelmingly rejected in the Senate.
The tragic case of a teenager's suicide linked to interactions with an AI chatbot has intensified scrutiny of AI's societal impacts. The family of the deceased is suing OpenAI, the company behind the chatbot; OpenAI maintains that the technology was misused and was not designed to engage in harmful conversations.
In light of these developments, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which aims to create a federal regulatory framework while restricting states from implementing their own regulations. The order establishes an AI litigation task force within the U.S. Department of Justice to challenge state laws deemed incompatible with federal policies. Additionally, it instructs the Department of Commerce to develop guidelines that could disqualify states from receiving future broadband funding if they enact what are described as "onerous" AI regulations.
The initiative for federal preemption of state AI laws has been largely driven by AI investors, conservative think tanks, and technology industry associations, who argue that a fragmented regulatory landscape could hinder the growth of AI in Silicon Valley and diminish the United States' competitive edge globally. David Sacks, an advisor on AI and cryptocurrency, has been a prominent advocate for a less stringent regulatory approach.
During the signing ceremony, Sacks indicated that the executive order provides the administration with tools to counter excessive state regulations while acknowledging the need to protect certain areas, such as child safety. The order also includes a provision encouraging Congress to refrain from overriding state laws that focus on child protection, data center infrastructure, and state procurement of AI technologies.
State attorneys general have expressed concerns regarding the Trump administration's efforts to limit state authority over AI regulation. New York Attorney General Letitia James emphasized the importance of state-level oversight in managing the rapid development of AI technologies, advocating for collaboration between state legislatures and Congress.
While the executive order may influence the national approach to AI regulation, it does not grant the federal government the power to prevent states from enacting their own laws. Civil rights organizations, including the American Civil Liberties Union, have labeled the order as potentially unconstitutional, suggesting it may face legal challenges. As states continue to develop their own regulations, experts predict an increase in similar laws across the country, particularly those addressing high-risk applications of AI in sectors such as healthcare and employment. The ongoing discourse highlights the tension between fostering innovation and ensuring public safety, with many advocating for a balanced approach that prioritizes ethical considerations in the deployment of AI technologies.