New in! When grant applicants promise AI will make their projects cheaper and better, how can federal agencies sort reality from hype? Dan provides recommendations in a policy memo, published with the Federation of American Scientists, for how funders can make better bets on AI.
Thanks for reading The AI Law & Policy Update! Subscribe for free to receive new posts and support my work.
AI Law & Policy News
Good And Bad News for Prompt Artists: In a new report, the U.S. Copyright Office reaffirmed its line against extending copyright protection to AI-generated outputs. Even if the artist puts a lot of creative effort into a prompt, the thinking goes, that prompt doesn’t make you the author of the output. The prompts themselves, though, could be copyrightable (if they meet the bar for originality), as could the use of AI outputs in human-made work.
Quick take: That family photo you touched up with AI? That meme generator you used to see yourself as an 80-year-old? That's where things get uncertain. The Office left open the question of when “assistive uses” of AI might be copyrightable. Digging into the footnotes, the report approvingly cites one public comment.
To see this principle in action, look to recent discussion of an image that was granted copyright. It’s titled “A Single Piece of American Cheese: An origin story” (seen below).
The authors took the AI-generated image on the left and then used an AI-assisted inpainting tool over many iterations to create the final image. This final image, the Office suggested, is copyrightable as a compilation. However, the copyright is likely thin: the image on the left is still not copyrightable. This resembles past work, like the comic book Zarya of the Dawn, which received a similar (thin) compilation right.
This, of course, contradicts the Vatican’s position, which argued that all AI-generated content created within Vatican City’s borders is owned by the Vatican.
The Imitation Game. OpenAI claims rival DeepSeek may have "inappropriately" trained on ChatGPT outputs in violation of its Terms of Service.
Quick Take: OpenAI might want to pick its battles. Enforcing terms of service against AI model training could be a costly exercise in futility. As we wrote about Peter’s work in our last newsletter, “Companies have little room to make copyright infringement claims over genAI outputs—and even the models themselves. Enforcing contract claims, too, is challenging.” Importantly, model providers may themselves need to rely on fair use and on the weak enforceability of anti-scraping terms, as their own use of data comes under scrutiny.
Is torrenting training data fair use? Recent discovery in the litigation against Meta reveals ablation studies showing that LibGen was a useful dataset for reaching SOTA on benchmarks like MMLU. The problem is that this data, likely used by most model creators, comes from a BitTorrent tracker. Meta is not unique here: a DeepSeek paper also states that they used Anna’s Archive.
On the other hand, some AI companies are pursuing licensing deals with content creators. Authors publishing with HarperCollins were offered $2,500 per title to allow their books to be ingested for AI training.
The Year of AI for Gov. OpenAI launched ChatGPT Gov, a self-hosted version of its ChatGPT Enterprise tool designed to meet federal security, privacy, and compliance requirements. Already, government offices like the Air Force Research Lab are using ChatGPT Enterprise for administrative tasks.
Quick Take: We already expected federal agencies to pilot and test LLMs for administrative tasks, but the current administration is moving faster than expected. In a staff meeting, the head of the General Services Administration’s Technology Transformation Services announced plans to use AI widely, including bringing in AI coding agents to rewrite government software. This comes at a time when every major AI player is angling for public sector contracts.
First Amendment arguments get real for AI. Character AI is arguing that its models’ outputs are covered by the First Amendment, barring tort claims against it. This will bring largely academic debates about First Amendment protections for AI into a very real setting. We expect SCOTUS to take a case like this in the coming years, perhaps even this one.
When AI Must Say "I'm Not Real". California proposed a new bill (SB 243) that would require, among other things, that AI companies give minors conspicuous and repeated notice that a chatbot’s responses are artificially generated. The bill also requires firms to report to the state the number of times young users express “suicidal ideation.”
Contract Review. Adobe added a feature to its paid Acrobat AI assistant that reviews contracts, provides an overview, and highlights differences across agreements. Who knew Adobe would start getting into legal tech!
Position: Evaluating Generative AI Systems is a Social Science Measurement Challenge, Hanna Wallach, Meera Desai, A. Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Nicholas Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs
Who are we? Peter Henderson is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public & International Affairs, where he runs the AI, Law, & Society Lab. Previously, Peter received a JD-PhD from Stanford University. Dan Bateyko researches artificial intelligence and law at Cornell University in the Department of Information Science. Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also… just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.