The Law, Policy, & AI Briefing #4
FTC takes action, China publishes algorithm descriptions, and content scanning makes the news (again).
Hi all, welcome to the fourth edition of the Law, Policy, and AI Briefing. Briefings will go out intermittently, because I also need to do research.
Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University; you can learn more about my research here.
What is this? The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice. And any views I express here are purely my own and are not those of any entity, organization, government, or other person.
Your briefing awaits below!
Law
The California legislature wants to prevent regulatory sandboxes that would allow new, experimental ways of delivering legal services. This would stymie efforts to reduce the cost of access to justice. How does it relate to AI? Because non-lawyers cannot practice law, AI tools can’t necessarily assist users with legal tasks directly. While that may be a good thing in more complex settings, overly restrictive regulatory regimes can prevent innovative access-to-justice approaches from breaking through in simpler ones.
A news story about Google’s automated filtering mechanisms has been making the rounds. A father took a picture of his baby’s medical condition at the request of a doctor for a virtual visit. Google scanned it with ML algorithms, flagged it as CSAM, locked his account, and notified authorities. There has been a lot of discussion about whether ML should be used this way, given the potentially massive harms from false positives. For example, the EFF weighed in, noting that general monitoring is not the solution to filtering CSAM. This also relates back to recent discussions about the EARN IT Act, a bill that would curtail platforms’ Section 230 protections for CSAM-related claims, potentially pushing companies toward on-device content scanning.
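It’s worth dwelling on the false-positive point, because base rates matter at this scale: even a very accurate classifier flags mostly innocent content when true positives are rare. Here’s a back-of-the-envelope sketch; every number below is a hypothetical chosen purely for illustration, since Google publishes neither its scanning volumes nor its classifier error rates.

```python
# Back-of-the-envelope look at why false positives dominate at scale.
# All numbers are assumptions for illustration, not Google's actual figures.

photos_scanned_per_day = 1_000_000_000  # assumed scanning volume
prevalence = 1e-6                       # assumed fraction of photos that are truly CSAM
recall = 0.99                           # assumed true-positive rate of the classifier
false_positive_rate = 1e-4              # assumed FPR: just 0.01% of innocent photos

true_hits = photos_scanned_per_day * prevalence * recall
false_alarms = photos_scanned_per_day * (1 - prevalence) * false_positive_rate
precision = true_hits / (true_hits + false_alarms)

print(f"true detections/day: {true_hits:,.0f}")    # ~990
print(f"false alarms/day:    {false_alarms:,.0f}") # ~100,000
print(f"precision:           {precision:.1%}")     # ~1.0%
```

Under these made-up (but not implausible) assumptions, roughly 99 of every 100 flagged photos are innocent, which is the mechanism behind stories like the one above.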
The FTC is exploring new ways to regulate algorithms, including using its authority under Section 5 of the FTC Act, and the agency is seeking comment on a number of issues. Some interesting discussion on other FTC actions here. And there’s a nice law review article on the matter here by Andrew Selbst and Solon Barocas. Notably, Lina Khan and Rohit Chopra have previously written in the Columbia Law Review about using Section 5 authority more extensively. And another law review article, by Aneesa Mazumdar, argues for using Section 5 authority to prevent algorithms from colluding with one another.
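As a toy illustration of the collusion concern (my own sketch, not a model from the cited article): two pricing algorithms that never communicate, each following a simple “match if undercut, otherwise nudge upward” rule, can settle at the monopoly price without anything resembling an agreement. The prices and rules here are invented for the example.

```python
# Toy sketch of tacit algorithmic collusion: two pricing bots that never
# communicate, each following a "match if undercut, otherwise nudge upward"
# rule, drift from the competitive price to the monopoly price.
# All values here are made up purely for illustration.

COST = 1.0                # marginal cost
COMPETITIVE_PRICE = 1.1   # roughly what a price war would produce
MONOPOLY_PRICE = 2.0      # the jointly profit-maximizing price

def next_price(mine: float, rival: float) -> float:
    if rival < mine:
        return max(COST, rival)              # match the rival rather than undercut
    return min(MONOPOLY_PRICE, mine * 1.01)  # rival matched us, so probe upward

p1 = p2 = COMPETITIVE_PRICE
for _ in range(500):
    p1, p2 = next_price(p1, p2), next_price(p2, p1)

print(f"prices after 500 rounds: {p1:.2f} and {p2:.2f}")  # both end at 2.00
```

The legal wrinkle is that nothing in this loop looks like the agreement that Sherman Act Section 1 requires, which is part of why commentators reach for the FTC’s broader Section 5 authority.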
In China, tech companies must now file information about how their algorithms are used with the government, which has published this information amid increasing efforts to regulate algorithmic systems.
No, your AI can’t be an inventor on a patent in the United States, says the Federal Circuit.
The sole issue on appeal is whether an AI software system can be an “inventor” under the Patent Act. In resolving disputes of statutory interpretation, we “begin[] with the statutory text, and end[] there as well if the text is unambiguous.” BedRoc Ltd. v. United States, 541 U.S. 176, 183 (2004). Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings.
As a new marketplace for prompts opens up, some interesting food for thought on whether prompts are copyrightable:
Another IP question: do ML systems indicate when something has become generic? Notably, legal scholars have written in the past about how using Google can be a shortcut to analyzing trademark distinctiveness; a toy sketch of that intuition follows below.
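Here’s how that Google-as-evidence intuition might port to ML systems (my own illustration, not a method from the cited scholarship): if a mark’s nearest neighbors in an off-the-shelf word-embedding space are the product category itself, the model has effectively learned consumers’ generic usage. The model name and word pairs below are assumptions for the example.

```python
# Sketch: word embeddings as one (noisy) signal of genericide. If "aspirin"
# sits closer to "painkiller" than "pepsi" sits to "cola", that gap hints at
# how generically consumers use each mark. Illustrative only.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

pairs = [
    ("aspirin", "painkiller"),  # famously genericized mark
    ("xerox", "photocopy"),     # often used as a verb
    ("pepsi", "cola"),          # still brand-distinctive
]
for mark, category in pairs:
    sim = model.similarity(mark, category)
    neighbors = [w for w, _ in model.most_similar(mark, topn=5)]
    print(f"{mark:>8} vs. {category:<11} similarity={sim:.2f}  neighbors={neighbors}")
```

A court would of course want far more than cosine similarities, but as the tweet below suggests, model behavior may increasingly surface as evidence of consumer perception.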
“I really hate to say it but this is absolutely brilliant marketing. Trade dress reflected in machine learning algos as evidence of market leadership & consumer sentiment. But how long until AI becomes a vector of genericization? Full page image generated advertising in today’s NYT https://t.co/FHqwu6H6Ee You don’t need a metaverse strategy” (@MarkGhuneim)

A lawsuit has been filed against Meta alleging that OnlyFans abused its content-filtering algorithms to squash competition. This one is interesting because it involves the alleged exploitation of content-moderation mechanisms for anti-competitive action by a third party. Note: Meta claims none of the allegations are true, and if it can back that up, the case is likely to be dismissed quickly.
Policy and Legal Academia
The CHIPS Act was passed. What does it mean for AI research and AI policy? Stanford HAI writes about it:
This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).
If You Think AI Won't Eclipse Humanity, You're Probably Just a Human
Building machines that can replicate human thinking and behavior has fascinated people for hundreds of years. Stories about robots date from ancient history through da Vinci to the present. Whether designed to save labor or lives, to provide companionship or protection, loyal, capable, productive machines are a dream of humanity. The modern manifestation of using human-like technology to advance social interests is artificial intelligence (AI). The continuing development of AI is inevitable and its relevance to national security will continue to grow.