Taking Algorithms to Court, Empowering Communities to Enact Legal Accountability
RightsCon 2022 (Speaker and Co-host)
By Borhane Blili-Hamelin in workshop
June 10, 2022
Abstract
A RightsCon 2022 workshop from Accountability Case Labs about the place of the courts in the algorithmic accountability space.
Date
June 10, 2022
Time
9:15 AM – 10:15 AM
Location
Online
Event
Co-hosted with Ranjit Singh, Jillian Powers, and Gina Helfrich
Courts are a vital organ of algorithmic and AI accountability. They are the site where legal accountability becomes a reality, where existing law is turned into actionable legal protections against, and legal redress for, decisions that lead to algorithmic harms. However, the diffuse, complex nature of algorithmic harms makes it immensely challenging to turn specific instances of harm into actionable legal claims. What are the barriers to legal accountability for decisions around algorithms? How can we better empower communities to navigate the hurdles to actionable legal protections against decisions that lead to algorithmic harms? Our session aims to raise awareness about pathways to legal accountability for algorithmic harms, and to empower participants to help their own communities take algorithms to court.
Our session drew on Accountability Case Labs’ approach to case study-based AI accountability workshops. We began with a short presentation sharing insights from Science and Technology Studies into algorithmic accountability and the mechanisms through which adversarial courts negotiate the legal standing of algorithmic harms, such as court decisions about standing, admissible expert testimony, and precedent. We then considered a Frye motion about ShotSpotter evidence and invited participants to examine stakeholders and identify pathways and barriers to legal accountability. From there, participants were invited to collaborate on identifying case studies that speak to the barriers their own communities face, and on identifying opportunities to support their own communities in realizing legal accountability for decisions around algorithms.