AI, Law, & Policy Update: From Copyright Disclosures to Privacy Protections
A proposed bill would require disclosure of copyrighted training data, Maryland passes major privacy bills, a judge blocks AI-enhanced video evidence, and more!
Law
Federal Agencies Vow to Enforce Anti-Discrimination Laws in Automated Systems. This joint statement from the CFPB, DOJ, EEOC, and FTC emphasizes that existing laws against discrimination and unfair practices apply to the use of automated systems, including AI, and notes recent actions each agency has taken to begin addressing potentially problematic deployments of AI.
Maryland passed two major privacy bills: the Maryland Online Data Privacy Act, focusing on the collection and sale of private data, and the Maryland Kids Code, prohibiting online platforms from tracking minors under 18 or using potentially manipulative techniques on minors—like excessive notifications and auto-playing videos—to keep them on the platform. Expect a First Amendment challenge on the latter.
A Washington state judge blocked the use of AI-enhanced video as evidence in a murder case, potentially the first such ruling in a U.S. criminal court. The judge found the AI technology relied on "opaque methods" and could lead to a "confusion of the issues" for the jury. Lawyers for the defendant had sought to introduce the AI-enhanced cellphone video, but prosecutors argued it did not accurately represent the original footage.
Rep. Schiff introduces the “Generative AI Copyright Disclosure Act of 2024.” The bill would require creators and modifiers of generative AI training datasets to "submit to the Register a notice" detailing "any copyrighted works used" and the dataset's URL (if publicly available). However, to my mind, the bill needs some work. It is both overly broad (creating a reporting requirement for most training runs where the model becomes available) and overly narrow (not actually specifying what information would satisfy the reporting requirement). This risks creating massive administrative overhead without yielding useful information. It even uses the phrase "retraining the dataset," which is not a technical term (you retrain a model, not a dataset).
Policy
The Department of Justice's Computer Crime and Intellectual Property Section (CCIPS) weighs in on proposed DMCA exemptions for security research on generative AI models, arguing the exemption should be broad enough to cover research into harmful biases and outputs beyond just security vulnerabilities. The letter cites our comment to the Copyright Office based on our recent work suggesting safe harbors for independent AI evaluation.
The “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence” was released, summarizing many of the current challenging issues of AI use in the legal system and emphasizing education of lawyers as an immediate direction. One takeaway: “Based on current case law, AI programs can direct clients to the forms they need to fill out. However, these programs may not give any advice as to the substance of the client’s answers because that would be replacing the work of a human lawyer.”
The Canadian government proposed a $2.4 billion spending package on AI, including $50 million for a new Canadian AI Safety Institute. This comes just as the US and UK AI Safety Institutes announced a partnership.
The German Federal Office for Information Security (BSI) published a report, “Generative AI Models - Opportunities and Risks for Industry and Authorities.”
Omidyar Network, Ford Foundation, and Nathan Cummings Foundation have purchased Anthropic shares, explicitly citing recent OpenAI governance failures and noting that they “are hopeful that having mission-aligned investors—even as a small portion of the shareholders—will help protect and reinforce the safety and other mission-driven priorities of Anthropic’s work.”
Who am I? I’m an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public & International Affairs. Previously I received a JD-PhD from Stanford University. You can learn more about my research here. Every once in a while, I round up news at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.