Overview
This one-hour, partner-exclusive meeting will include presentations from Miranda Bogen (Center for Democracy and Technology) and Eliza McCullough (Partnership on AI). Following the presentations, Janet Haven (Data & Society) and Daniel Ho (Stanford Institute for Human-Centered Artificial Intelligence) will discuss the policy landscape and lead an open Q&A.
Too often, algorithmic systems discriminate against historically marginalized groups. In response, policymakers have called on organizations that develop and use these systems to measure and remediate discrimination. Measurement usually requires analysis of sensitive demographic data, yet practitioners often face barriers, from legal constraints to privacy concerns, in obtaining that data or otherwise conducting the measurements needed to identify disparities. Even when practitioners can collect demographic data for assessment, data subjects, particularly marginalized ones, face additional harms, such as the expansion of surveillance infrastructure and group misidentification. These tensions highlight some of the fundamental barriers to addressing algorithmic bias: the apparent need to collect demographic data to address discrimination, the obstacles to that collection, and the harms that can stem from the collection process itself.
The newly published reports “Navigating Demographic Measurement for Fairness and Equity” by the Center for Democracy and Technology and “Participatory & Inclusive Demographic Data Guidelines” by Partnership on AI attempt to resolve these key tensions.
This event is exclusive to PAI Partners. Please email events@partnershiponai.org if you are a Partner and would like to attend.
Panelists
Miranda Bogen
Director, AI Governance Lab
Center for Democracy and Technology
Eliza McCullough
Program & Research Lead for Fairness, Transparency & Accountability
Partnership on AI