A research partnership between ARVA and Data & Society investigating how red-teaming can empower communities to recognize and prevent generative AI harms. ARVA's mission is to empower communities to recognize, diagnose, and manage vulnerabilities in AI systems.
ARVA believes that red-teaming generative AI is a major emergent use case for knowledge sharing about AI vulnerabilities, and a significant potential source of public knowledge about them. This is why we are proud to be community partners in the groundbreaking White House-supported DEF CON 31 Generative Red Team event co-organized by AI Village, Humane Intelligence, and Seed AI.
The surge of interest in red-teaming as an approach to discovering ethics and safety flaws in generative AI systems marks a crucial moment for AI risk management and governance. Amid this growing interest, we see a need to investigate red-teaming's place in the emerging ecosystem for discovering, disclosing, and mitigating harmful flaws in generative AI systems. To this end, we have partnered with Data & Society on a major research project that will examine this problem through ethnography and desk research.
This project is in part supported by a 2023-2024 Magic Grant from The Brown Institute for Media Innovation.
- Posted on: November 8, 2023