AI Terms of Service: Tricks of The Light?
Plus AI in the patent office, open source AI supply chain attacks, and ChatGPT in the courts
New in! Peter and co-author Mark Lemley have a new piece, The Mirage of Artificial Intelligence Terms of Use Restrictions, about how enforcing Terms of Use on model outputs and weights is an uphill battle for AI companies. As Peter and Mark argue, “AI terms of service are built on a house of sand.” Companies have little room to make copyright infringement claims over genAI outputs, or even over the models themselves. Enforcing contract claims is challenging too. Yet from a policy perspective, we may want some narrowly worded restrictions to have teeth, particularly those around responsible use. So what then? Peter and Mark argue that legislation, rather than Terms of Service, is the better avenue for achieving policy goals, not least for reasons of political process and public oversight.
AI Law & Policy News
Copyright litigation progress: OpenAI lost a motion to dismiss the DMCA § 1202(b)(1) claim brought by Intercept Media, though it successfully dismissed the DMCA § 1202(b)(3) claim. This follows OpenAI’s win against Raw Story Media, which we previously covered, on similar DMCA § 1202(b) claims.
Quick Take: Peter provided some input for a Bloomberg Law piece on why this could be one to watch. Plaintiffs dug deep into the research behind GPT-3 to identify the tool (Dragnet) used to download web data. The tool leaves out CMI during the download, which was plausibly enough to get to discovery on a DMCA § 1202(b)(1) claim. Plaintiffs will still need to get through a double scienter requirement to win, which seems unlikely, but this may be a formula for getting to discovery on these claims going forward. OpenAI is now battling discovery in other cases. In the NYTimes case, for example, the NYTimes accused OpenAI of deleting its findings from a secure computer during the discovery process. OpenAI shot back, requesting information on the NYTimes’ own use of data for training AI and framing the paper as hypocritical in its litigation; a judge denied that discovery request.
Character AI sued (again): Character AI has been sued again (along with Google and C.AI’s founder). This time, parents sued C.AI for offering an unsafe product to minors. In one case, parents argue that the bot told their child that “killing his parents might be a reasonable response [to them limiting his screentime].” Another parent stated that a C.AI bot engaged in hypersexualized dialogs with a minor. Another said the bot provided their child with detailed descriptions of how to commit self-harm. You’ll recall that Character AI was previously sued for wrongful death.
Quick Take: This is important litigation to follow. Whether or not it succeeds, protecting minors from harmful technology has increasing bipartisan support and may result in a bill targeting these risks. In technical research, these sorts of problems likely need far more attention than they receive relative to other types of safety risks. We’ve regularly pointed out how customized models easily lose their guardrails, and our policy brief even discussed a hypothetical scenario where a customized K-12 education chatbot loses its safety guardrails after customization. Character AI’s system allows users to create customized chatbots, potentially implicating this exact risk. And in general, protecting vulnerable users requires care and continued investment in alignment. This is especially challenging for long dialogs, where even models with significant alignment investment falter. Google’s Gemini, for example, recently told a user to “please die” after a long dialog in which a student was using Gemini to help with their homework.
Reliance on Support Chatbots: A federal court dismissed a promissory estoppel claim against Substack where a user sued after the company's support chatbot promised to respond to all complaints but Substack never did. The court found the chatbot's responses weren’t specific enough about how or when Substack would respond, and the plaintiff couldn't show substantial detrimental reliance on the chatbot's assurances.
Quick Take: While this plaintiff lost, a future claim with a stronger fact pattern showing detrimental reliance might succeed in other contexts. Air Canada, for example, was on the hook for a discount promised by its chatbot.
GenAI in the Patent Office. The U.S. Patent and Trademark Office (USPTO) banned the use of external GenAI tools last year, according to a FOIA’d memo obtained by Wired.
Quick Take: Banning external tools is the right move, but there’s an opportunity to scale up the use of internal AI. AI search is not new to the USPTO. The office’s Patent End-to-End search tool includes AI functionality and, as Wired reports, the Patent Office just inked a deal with Accenture to build more tools for examiners to search databases faster and more accurately. But text search is not the only use case: drafting office actions, breaking down diagrams and images, and running basic formatting checks are potential uses as well. A human-centered design approach, in which the agency surveys and collaborates with patent examiners to identify cumbersome processes, could be particularly effective.
Other AI Use Cases in Courts and Governments. Buenos Aires courts have begun using ChatGPT to draft legal rulings, reports Rest of World. Following through on the 2022 CHIPS Act, the NSF’s National Secure Data Service Demonstration proposed an AI chatbot to answer questions about public agency data. According to the contract award, the chatbot aims to be an improvement over citizens searching Google or emailing federal staff for answers.
The Data Labeling Market Grows. Uber is in the data labeling business now: Bloomberg reports that the ridesharing company has expanded into data-labeling offerings for companies training models. Uber’s offering appears to cover a wide variety of labeling tasks across text, audio, video, and maps.
Security for Open Source Models. BleepingComputer reported on a supply chain attack on Ultralytics’ YOLO11, an open source computer vision and AI tool, which caused users who installed compromised versions to run cryptomining software. The attacker took advantage of an automated build and release workflow, a cautionary tale of the many ways malicious code can be introduced. The problem has since been fixed.
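For downstream users of packages like this, one common mitigation is to pin dependencies to exact, verified releases with hash checking, so a tampered release fails to install rather than running silently. A minimal sketch using pip’s hash-checking mode; the version number and digest below are placeholders for illustration, not the actual known-good values:

```
# requirements.txt -- pin the exact release and its published SHA-256 digest.
# Placeholder values shown; substitute a version and digest you have verified
# against a trusted source.
ultralytics==8.3.40 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# Install with hash checking enforced; pip aborts if any pinned package's
# digest does not match or if any dependency is left unpinned:
#
#   pip install --require-hashes -r requirements.txt
```

This does not stop a maintainer’s build pipeline from being compromised in the first place, but it keeps a later, tampered release from slipping into an automated install.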
Legislation and Policymaking. The Department of Homeland Security (DHS) released its framework for the use of AI in critical infrastructure. Congressional representatives Ted Lieu and Kevin Kiley introduced legislation to increase penalties for committing financial fraud using AI.
Who are we? Peter Henderson is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public & International Affairs, where he runs the AI, Law, & Society Lab. Previously Peter received a JD-PhD from Stanford University. Dan Bateyko researches artificial intelligence and law at Cornell University in the Department of Information Science. Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also… just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.