AI Law & Policy Update: AI policy after the U.S. election
Also, OpenAI wins a motion to dismiss in one copyright lawsuit, major AI companies ramp up sales to the military, and public-sector uses of AI expand.
AI Policy After the U.S. Election
There’s no shortage of commentary on what Trump 2.0 means for AI. One of the clearest signals comes from the official Trump platform, which pledges to revoke Biden’s Executive Order on AI, a centerpiece of his AI policy. But while Trump may want to revoke the order, it’s not fully clear what would take its place. After all, the E.O. is in step with Trump’s first-term policies making AI a national priority: the Trump Administration set up AI research institutes, proposed spending on AI-related grants, and issued an Executive Order calling on agencies to invest in AI R&D and training. As Politico reported last week, the Trump transition team tapped one of the authors of Trump’s E.O. to handle technology policy. The gap between the Trump campaign’s rhetoric and its first-term record clouds the forecast for the next four years: will the second term be a return to form or a departure?
What we expect is that agencies will keep exploring AI as a tool for improving government efficiency and for gaining the upper hand in geopolitical competition. We also expect some agencies, like the FTC, to slow down on regulation. But federalism may fill the gap: this year alone, 45 states introduced AI-related bills, and major proposals in states like California have kicked off heated debates. These state-level proposals are coming from all sides of the political spectrum, so it seems likely that, in one form or another, they will keep coming.
Commentary Corner: AI Law & Policy News
Copyright. Penguin Random House added a line to the front matter of its published books, warning AI companies not to train on them. Whether signals like these, including robots.txt files and canary strings, actually deter large language model providers from using content remains uncertain (a sketch of how a robots.txt opt-out works appears below). Meanwhile, X modified its privacy policy to allow it to provide user data for training AI models. The AI startup Perplexity faces a copyright suit for using news content in its AI search engine responses (and for attributing some “hallucinated” outputs to news providers). And OpenAI won a motion to dismiss in Raw Story Media’s copyright lawsuit against it.
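For readers curious what such a crawler opt-out looks like in practice, here is a minimal, illustrative sketch: a robots.txt that disallows two real AI-training crawlers (OpenAI’s GPTBot and Common Crawl’s CCBot), checked with Python’s standard-library robots.txt parser. The directives only express a preference; nothing in the protocol forces a crawler to obey them.

```python
# Illustrative only: a robots.txt opt-out for two real AI-training crawlers,
# checked with Python's standard-library parser. Compliance is voluntary.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT)

for agent in ["GPTBot", "CCBot", "SomeOtherBot"]:
    ok = parser.can_fetch(agent, "https://example.com/books/chapter-1.html")
    print(f"{agent}: {'allowed' if ok else 'disallowed'}")
```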
Quick Take
OpenAI’s win against Raw Story Media is, on one hand, not surprising: the only claim in the case was a DMCA § 1202(b) claim, and such claims have been dismissed in most AI copyright cases so far. The standard that has developed requires exact reproductions of the material, something very unlikely to happen at scale in the AI setting (or, at least, something that has not been shown in any of the dismissed complaints). What makes the Raw Story Media case unique is that the judge dismissed it on constitutional standing grounds, not on the § 1202(b) identicality requirement. The court held that Raw Story Media didn’t allege a concrete harm, which some legal scholars have noted could have far-reaching implications for other copyright cases. There is, however, reason not to over-extrapolate: the court’s standing analysis was wrapped up in the specifics of DMCA § 1202(b) and may not generalize to cases where actual infringement is alleged.
State-level Efforts. California’s Civil Rights Department released newly proposed rules on when AI vendors can be liable for their hiring tools. Public comments on the draft are open until November 18. In Texas, lawmakers proposed the “Texas Responsible AI Governance Act,” which sets rules for AI developers and distributors, including requiring developers to submit reports on an AI system’s limitations and requiring deployers to conduct impact assessments.
Microsoft Bing Chat Defamation Case Goes to Arbitration. In the District Court of Maryland, a judge granted Microsoft a stay in a case over whether Bing’s AI-generated responses defamed and harmed a plaintiff. The court granted a motion to compel arbitration based on the terms of service users enter into when using Bing. Binding arbitration has expanded in recent years, with companies extending its reach well beyond the products at issue. Disney, for example, landed in hot water (and eventually reversed course) when it tried to compel arbitration in a wrongful death lawsuit because the plaintiff had once signed up for Disney+, whose terms compel arbitration to settle disputes. Given successes like Microsoft’s, arbitration may become a key tool for AI companies facing tort claims.
Generative AI in Police Reports. EFF weighed in on a Washington State prosecutor’s office’s statement that police should write reports without AI assistance. Generative AI reports, drawn from audio transcripts of body-worn microphone recordings by tools like Axon’s Draft One, have the potential to save police time in drafting reports, but they have raised concerns over reliability and efficacy. As Emma Lurie writes, “Draft One is unlikely to be a revolutionary tool. AI interventions in criminal justice — including Draft One — often fail to remedy the problems they seek to address. The first two do not preclude the introduction of these often-limited tools from shaping the way that police function on the ground.”
AI in the Military. Reporting shows that Anthropic, OpenAI, and Meta have all begun providing their AI systems for U.S. military uses in the last few months. This shift sharply contrasts with the companies’ previous terms of use, which prohibited such use cases. Given reported plans for national security uses of AI by the incoming Trump administration, it seems likely that these military uses of AI will expand in the coming years.
Quick Take
This move isn’t the military’s first embrace of language models. Microsoft, Palantir, and ScaleAI have penned deals for the military to use their AI systems. But AI advances and Trump’s reelection point to a bigger role for LLMs in the military. For military leaders, the upcoming challenge will be recognizing when an AI tool is ready for use and when it is so unreliable as to be dangerous. Even for lower-stakes, back-office tasks, picking the right AI tool will be a case-by-case call. The Department of Defense, for example, successfully incorporated language models into a tool for searching policy documents, but another recent DoD effort to declassify documents with AI explicitly chose older, more explainable models over LLMs. And in scenarios with risk to human life, AI can make matters worse: Max Lamparth and Jacquelyn Schneider warn that LLMs can be unpredictable and fail to reflect complex human decision-making. Yet, as Marietje Schaake points out, there are few strong, binding regulations for AI in military settings, though there are some guidelines, like DoD Directive 3000.09.
AI for public sector and public good. Anthropic rolled out Archibot, an effort to improve search over the European Parliament's legislative documents. Meanwhile, Princeton’s AI, Law, & Society Lab, in partnership with Stanford’s RegLab, built an AI system on top of Mistral’s 7B open-weight model that identified over 7,500 racially restrictive covenants in Santa Clara County records, helping the county remove them from land deeds. A rough sketch of the general approach appears below.
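To make the idea concrete, here is a minimal, hypothetical sketch, not the teams’ actual pipeline: prompting an open-weight instruction-tuned model (here, a Mistral 7B Instruct checkpoint via the Hugging Face transformers library) to flag deed passages that contain racially restrictive covenant language. The model ID, prompt wording, and helper function are illustrative assumptions.

```python
# Illustrative sketch only -- not the Princeton/Stanford pipeline. It prompts an
# open-weight instruct model to answer YES/NO on whether a deed passage contains
# racially restrictive covenant language. Model ID, prompt, and decoding settings
# are assumptions for demonstration.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed checkpoint; any instruct model could be swapped in
    device_map="auto",
)

PROMPT = (
    "You are reviewing historical property deeds. Answer YES or NO: does the "
    "following passage contain a racially restrictive covenant?\n\n"
    "Passage: {passage}\n\nAnswer:"
)

def flags_covenant(passage: str) -> bool:
    """Return True if the model labels the passage as containing a restrictive covenant."""
    out = generator(
        PROMPT.format(passage=passage),
        max_new_tokens=3,
        do_sample=False,          # deterministic decoding for a simple yes/no label
        return_full_text=False,   # keep only the model's answer, not the prompt
    )
    return out[0]["generated_text"].strip().upper().startswith("YES")

# Example with paraphrased covenant-style wording (illustrative, not from a real deed):
print(flags_covenant("Said premises shall not be sold to or occupied by any person not of the Caucasian race."))
```

In practice, a system like this would also need human review of flagged passages before anything is removed from a deed, since false positives and negatives are both likely at scale.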
Tracking AI law & policy. A team of scholars from Georgetown CSET and Purdue's Governance and Responsible AI Lab published AGORA, an archive of AI-focused laws and policies.
Who are we? Peter Henderson is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public & International Affairs, where he runs the AI, Law, & Society Lab. Previously, Peter received a JD-PhD from Stanford University. Dan Bateyko researches artificial intelligence and law at Cornell University in the Department of Information Science. Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also… just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.