The Law, Policy, & AI Briefing #2: A real exchange about AI super-resolution in court, contract attorneys are monitored by AI (no surprise, it's probably biased), does AI help autocracies, and more!
Your regular briefing on the intersection of law, policy, and artificial intelligence.
Hi all, welcome to the second edition of the Law, Policy, and AI Briefing. Briefings will go out roughly once a week – though probably not every week, since I also need to get research done.
Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University; you can learn more about my research here.
What is this? The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice.
Your briefing awaits below!
Law
Super-resolution, deepfakes, and other neural network-based processing can manipulate images so that they are no longer true to the original source. In a real courtroom exchange, Kyle Rittenhouse’s attorney argued that Apple’s pinch-to-zoom feature distorts the image so that it cannot be used as evidence. (It would seem, though, that pinch-to-zoom doesn’t use a neural net to fill in pixels; it just interpolates between the ones already there.) As AI is built into everyday products, we risk either requiring specialized equipment in courtrooms to recover the original image, or inviting battles of the experts over what counts as an authentic image. So it’s probably a good idea for companies to provide options to access unaltered content for these sorts of situations.
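To make the distinction concrete, here is a minimal sketch (in Python, using the Pillow library, with a placeholder file path) of what conventional zoom-style upscaling does: it interpolates between pixels that already exist. The neural super-resolution step is indicated only in comments, because any specific model is an assumption on my part; the point is that such a model predicts detail the camera never captured.

```python
# Conventional upscaling (what pinch-to-zoom effectively does): interpolate
# between pixels that are already in the image. No learned model, no new content.
from PIL import Image

img = Image.open("photo.jpg")  # placeholder path
w, h = img.size
zoomed = img.resize((w * 4, h * 4), resample=Image.BICUBIC)
zoomed.save("photo_zoomed.jpg")

# Neural super-resolution, by contrast, runs the image through a trained model
# that *predicts* plausible high-frequency detail -- pixels that were never
# captured by the camera. Pseudocode only; any such model is an assumption here:
#
#   sr_model = load_pretrained_super_resolution_model()   # hypothetical
#   enhanced = sr_model(img)   # output may contain content not in the original
```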
Contract attorneys are being monitored by AI to make sure they’re doing their jobs remotely. The system tracks activity and raises a flag if it seems like the attorney isn’t working diligently on their task. And it has the same problem we’ve seen everywhere else this sort of software has been deployed: it fails for people with dark skin. To my mind, using this software isn’t a good idea at all, whether ethically, pro-socially, or legally. It also seems like it might have labor law and anti-discrimination law implications, but we’ll see how it plays out in the courts. Here’s a quote that stuck out (a rough sketch of this kind of face-presence check follows the quote):
“Several contract attorneys said they worried that their performance ratings, and potential future employability, could suffer solely based on the color of their skin. Loetitia McMillion, a contract attorney in Brooklyn who is Black, said she’d started wearing her hair down or pushing her face closer to the screen in hopes the system would stop forcing her offline.”
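For a sense of how this can go wrong mechanically, here is a rough sketch of a face-presence check built on OpenCV’s stock Haar-cascade detector. This is not the vendor’s actual system; the webcam capture, thresholds, and “force offline” behavior are assumptions for illustration. Off-the-shelf detectors like this one miss faces with darker skin more often, which is one way a person ends up flagged as away from her desk while sitting right in front of the camera.

```python
# Sketch of a face-presence "activity" check, NOT the actual vendor system.
# If the detector misses the worker's face (which happens more often for
# people with darker skin), the worker gets logged as inactive.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def worker_present(frame_bgr) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

cap = cv2.VideoCapture(0)  # webcam
ok, frame = cap.read()
if ok and not worker_present(frame):
    print("flagged: no face detected -- a system like this would force the worker offline")
cap.release()
```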
If you’re interested in learning more about the problems with monitoring software, you can read a new article that dives deep into a similar type of system: emotion recognition AI used to monitor employees.
The White House is putting out a call for feedback on a bill of rights for an automated society. You can dial in and join the effort with the White House OSTP. Ideally, this bill of rights would prevent intrusive and potentially discriminatory uses of AI (see, e.g., above).
Check out the interesting talks from a conference on the state of AI in the practice of law. [Shameless plug: If you’re interested in this subject, you might want to check out Section 3.2 of our paper, “On the Opportunities and Risks of Foundation Models,” where we write about the use of foundation models in legal contexts.]
Legal scholars propose a right to contest an AI’s decision in this new Columbia Law Review article. So if you’re concerned that an AI system unfairly rated your job performance as terrible, the authors advocate for a mechanism to contest that decision.
In U.S. prisons, natural language processing is being used to monitor prisoner phone calls. And it’s been going on for a while. What are the prisons looking for? Criminal activity, gang relationships, Covid infections/symptoms, instances of self-harm, and even positive comments about the prison to help fight lawsuits. As you can imagine, there are many potential problems with these use cases, including privacy concerns, the risk of falsely labeling someone a gang member, etc. Can this be challenged in court? Prisoners don’t have a right to privacy for telephone calls in most U.S. states, so it’s certainly an uphill battle. See, e.g., People v. Diaz, 33 N.Y.3d 92, 122 N.E.3d 61 (NY 2019). An excerpt from the reporting (a toy sketch of this kind of transcript flagging follows it):
In Calhoun County, Alabama, prison authorities used Verus to identify phone calls in which prisoners vouched for the cleanliness of the facility, looking for potential ammunition to fight lawsuits, email records show.
As part of an emailed sales pitch to the jail in Cook County, Illinois, LEO’s chief operating officer, James Sexton, highlighted the Alabama case as an example of the system’s potential uses.
“(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists,” he wrote.
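Mechanically, this kind of monitoring is usually speech-to-text followed by keyword and phrase matching over the transcript. The sketch below is my own assumption about how such a pipeline might be wired together, not a description of Verus or LEO Technologies’ actual product; the categories mirror the use cases described in the reporting, and naive substring matching like this is exactly the kind of thing that produces false positives.

```python
# Toy sketch of transcript flagging, assuming speech-to-text has already run.
# NOT the Verus/LEO Technologies system; categories mirror the reporting above.
FLAG_TERMS = {
    "self_harm": ["hurt myself", "end it"],
    "covid": ["fever", "can't smell", "tested positive"],
    "facility_praise": ["this place is clean", "they treat us well"],
}

def flag_transcript(transcript: str) -> dict:
    """Return the categories (and matched phrases) triggered by a transcript."""
    text = transcript.lower()
    hits = {}
    for category, phrases in FLAG_TERMS.items():
        matched = [p for p in phrases if p in text]
        if matched:
            hits[category] = matched
    return hits

print(flag_transcript("I have a fever and I can't smell anything."))
# -> {'covid': ['fever', "can't smell"]}
```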
The NYC Council passed the first U.S. law regarding the fairness of AI-based hiring tools. But some argue that it doesn’t go far enough: for example, critics note that the bill only requires companies to audit for discrimination on the basis of race or gender, ignoring discrimination based on other characteristics like age or disability. From the bill’s summary (a sketch of one common audit metric follows the quote):
“This bill would require that a bias audit be conducted on an automated employment decision tool prior to the use of said tool. The bill would also require that candidates or employees that reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as, be notified about the job qualifications and characteristics that will be used by the automated employment decision tool. Violations of the provisions of the bill would be subject to a civil penalty.”
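The bill doesn’t pin down what a “bias audit” must compute, but a common starting point in employment-discrimination practice is the selection-rate (adverse-impact) ratio behind the EEOC’s four-fifths rule. The sketch below is my own illustration of that metric on made-up numbers, not the audit methodology the law prescribes.

```python
# Illustrative adverse-impact check (the EEOC "four-fifths rule"), on fake data.
# This is NOT the audit procedure mandated by the NYC bill, just one common metric.
selected = {"group_a": 50, "group_b": 18}   # candidates advanced by the tool
applied  = {"group_a": 100, "group_b": 60}  # candidates screened by the tool

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    status = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```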
Two law review articles discuss the opacity of black-box machine learning systems. One suggests that there should be fewer restrictions on opening the black box, since open ML systems can help open science and innovation. Another suggests a more nuanced approach, where “[t]he degree to which legal opacity should be limited or disincentivized depends on the specific sector and transparency goals of specific AI technologies, technologies which may dramatically affect people’s lives or may simply be introduced for convenience.” This discussion has implications for how we think about model releases and terms of use, as well as policies for forcing transparency with respect to AI systems.
A new law review article argues that “the Fourth Amendment imposes significant limits on the preservation of Internet account contents.” This new interpretation of the Fourth Amendment would mean that the government can’t just give a blanket order for a company to preserve your data in case law enforcement needs it.
Preservation triggers a Fourth Amendment seizure because the provider, acting as the government’s agent, takes away the account holder’s control of the account. To be constitutionally reasonable, the initial act of preservation must ordinarily be justified by probable cause—and at the very least, in uncommon cases, by reasonable suspicion. The government can continue to use the Internet preservation statute in a limited way, such as to freeze an account while investigators draft a proper warrant application. But the current practice, in which investigators order the preservation of accounts with no particularized suspicion, violates the Fourth Amendment.
Policy & Society
The European Commission proposes a common European data space for cultural heritage. This might be an interesting new data source for building cross-cultural machine learning models and perspectives (within Europe), though it is not yet clear exactly what kind of data will be included.
Explainable AI is often described in policy circles as a necessary and sufficient component for deploying a model. Among the many recent works challenging this notion, a new article addresses the problems with relying on explainability in medical settings.
Does AI entrench autocracies? A new paper studies government procurement of facial recognition AI in China and suggests that the answer is yes, with benefits flowing in both directions. From the abstract:

We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.
Facebook removes fine-grained ad targeting for certain sensitive categories (e.g., health causes, sexual orientation, and religious practices and groups).
CSET puts out a policy memo describing how the U.S. can stay competitive in AI.