The Law, Policy, & AI Briefing #3: A very late continuation
Hi all, welcome to the third edition of the Law, Policy, and AI Briefing. Briefings will go out intermittently, because I also need to do research. This one is very, very late because… well… research. So some of this is likely old news to some of you.
Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University; you can learn more about my research here.
What is this? The goal of this letter is to round up some interesting bits of information and events at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted (@PeterHndrsn). Also… just in case: none of this is legal advice.
Your briefing awaits below!
Law
The American Data Privacy and Protection Act (ADPPA) was introduced in the House of Representatives. It includes a section on Algorithmic Impact Assessment and Evaluation. There were some concerns that it might pre-empt California state law, but that is being worked out. EPIC has a nice breakdown of which state laws are pre-empted (and which are not), and Brookings has another nice explainer. Notably, Senator Cantwell argues that the bill "does not adequately protect women’s reproductive information because constraints on private lawsuits will make it harder for women to sue for violations." The EFF was not particularly happy with the bill either.
"EFF is disappointed by the latest draft of the American Data Privacy Protection Act, or the ADPPA, a federal comprehensive data privacy bill. While we are still digesting the 132-page version released yesterday, we have three initial objections."

"A memo comparing the measures prepared by three prominent nonprofits and shared with The Technology 202 argues that the federal bill’s consumer protections are equal to or better than the California law in a vast majority of areas."

"It would prohibit most covered entities from using covered data in a way that discriminates on the basis of protected characteristics (such as race, gender, or sexual orientation). It would also require large data holders to conduct algorithm impact assessments. These assessments would need to describe the entity’s steps to mitigate potential harms resulting from its algorithms, among other requirements. Large data holders would be required to submit these assessments to the FTC and make them available to Congress on request."
Canada introduces the Digital Charter Implementation Act, which also has an AI component.
The Act seeks to ensure that "high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias; establish[es] an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and outlin[es] clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment."
FTC Report Warns About Using Artificial Intelligence to Combat Online Problems. I largely agree: there are a lot of problems with using AI for these purposes, and the recommendations seem reasonable. To my mind, the aim of the report is to throw cold water on calls for legislation requiring the use of AI in these cases, which would certainly be problematic. I do, however, think that AI is likely necessary, to some degree, to tackle some of these challenges.
The U.S. DOJ has reached a settlement with Meta to prevent ad discrimination under the Fair Housing Act. There has been some criticism of this settlement. I do think this is a good thing if there is real monitoring, and it will test algorithmic fairness at scale. However, the maximum penalty under the FHA seems too low to have a significant enforcement effect.
Under the deal, Meta must stop allowing advertisers to use the "Lookalike Audience" tool, which can enable discrimination based on characteristics protected under the Fair Housing Act. It must develop a new system by December 2022 that addresses disparities in housing ads, and a third-party reviewer will investigate and verify the new system to make sure it abides by the settlement terms. Meta must also pay the United States a civil penalty of **$115,054, the maximum penalty available under the Fair Housing Act.**
The UK has created an algorithmic transparency standard. As part of this, they have been regularly releasing reports on uses of AI. For example, on July 7, 2022, the Food Standards Agency released a report on the use of AI for enforcement prioritization. Shameless self-promotion alert: we wrote about the challenges of enforcement prioritization, and discussed food standards agencies, in our recent work Beyond Ads: Sequential Decision-Making Algorithms in Law and Public Policy. Health safety rating systems can encode biases that lead to feedback loops, which may be worth exploring more deeply. That’s not to say that ML shouldn’t be used in this context, but transparency is important for understanding and resolving the underlying data and technical issues.
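To make the feedback-loop worry concrete, here is a minimal toy simulation. This is entirely my own construction with made-up numbers, not the FSA's actual system: a greedy inspector that always revisits wherever it has previously observed the most violations can end up ignoring equally risky places forever.

```python
import random

random.seed(0)

# Two regions with the *same* true violation rate.
TRUE_VIOLATION_RATE = [0.30, 0.30]

# Region 0 starts with one lucky "hit"; region 1 with one clean inspection.
counts = [
    {"inspections": 1, "violations": 1},
    {"inspections": 1, "violations": 0},
]

for _ in range(1000):
    # Greedy policy: inspect the region with the highest *observed* rate.
    observed = [c["violations"] / c["inspections"] for c in counts]
    region = observed.index(max(observed))
    counts[region]["inspections"] += 1
    if random.random() < TRUE_VIOLATION_RATE[region]:
        counts[region]["violations"] += 1

print(counts)
# Every follow-up inspection goes to region 0: region 1's observed rate is
# frozen at zero because it never gets inspected again, so the policy's own
# data "confirms" its initial belief.
```

In bandit terms this is a pure-exploitation policy; mixing in even a small amount of random exploration breaks the loop, which is exactly the kind of design detail transparency reporting can surface.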
LinkedIn is being sued over its integration of machine learning algorithms with core products, on a number of antitrust claims. I'm keeping an eye on how the antitrust + algorithms interaction plays out here.
An antitrust lawsuit has been filed against Amazon, alleging that its pricing algorithms were programmed to match the price floors of third-party sellers. If you’re creating a pricing algorithm, it might be worth checking with some attorneys whether you’re increasing liability...
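For intuition, here is a purely illustrative sketch of the kind of logic such allegations describe; the function and numbers are hypothetical and not based on anyone's actual code. A pricer that treats a competitor's price as a floor will follow that floor upward rather than undercut it.

```python
# Hypothetical illustration of "floor-matching" pricing logic;
# not based on any actual Amazon (or other) system.

def set_price(my_cost: float, competitor_floor: float, margin: float = 0.10) -> float:
    """Price off cost, but never undercut the competitor's floor."""
    independent_price = my_cost * (1 + margin)
    return max(independent_price, competitor_floor)

# As competitors raise the floor, the algorithm follows it up:
for floor in (10.00, 12.00, 15.00):
    print(set_price(my_cost=8.00, competitor_floor=floor))
# -> 10.0, 12.0, 15.0 (the price tracks the floor instead of competing below it)
```

Roughly, the competitive worry with logic like this is that it dampens price competition: there is little incentive for anyone to discount if an algorithm will never go below the prevailing floor anyway.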
Judge Tosses Defamation Suit Brought By ShotSpotter Against Vice Media For Reporting On Its Shady Tactics. Though I'm not surprised this was dismissed, I'm keeping an eye on this space to see how companies selling AI respond to external audits.
Policy
Stanford HAI drops a new audit challenge to identify potentially harmful or discriminatory algorithms (and techniques on how to find these failure modes).
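If you're curious what such auditing techniques look like in practice, one of the simplest starting points is measuring decision-rate disparities across groups. A toy sketch with hypothetical data follows; the group labels, decisions, and demographic-parity gap metric are my example, not part of the challenge itself.

```python
from collections import defaultdict

# Hypothetical audit data: (group, model decision) pairs.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# Positive-decision rate per group, and the gap between groups
# (a simple "demographic parity" check).
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)                                      # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))  # 0.5
```

Real audits go well beyond this (error-rate comparisons, counterfactual probes, subgroup discovery), but rate gaps like this are often the first red flag.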
Feel free to reach out and hire me as faculty!!