
Researching Diversity, Equity, and Inclusion in the Field of AI

Diversity, Equity, and Inclusion (DE&I) is an area that has seen growing calls for action and significant investment, yet there is still little clarity about which kinds of initiatives work best. Companies are devoting substantial time and money to diversity and inclusion recruitment efforts, but limited research exists on what happens to women and minorities after they enter technical professions. In particular, little is known about the factors that affect the sense of belonging, and the attrition, of women and minorities on AI teams specifically. Moreover, while many AI organizations have launched DE&I initiatives to improve their organizational culture, there is limited research consolidating what these initiatives have shown about how well different approaches work in practice.

At last year’s All Partners Meeting with DeepMind, we announced our commitment to investigate pervasive challenges in ethnic, gender, and cultural diversity in the field of artificial intelligence. Today, we are making progress on this mission with the hiring of Jeffrey Brown as our Diversity and Inclusion Fellow. Jeffrey’s work will complement our plans in Fairness, Transparency, and Accountability (FTA) by researching how we can increase diversity and inclusion on the teams behind the development of algorithmic systems. His research will use a mixed methods approach to gather qualitative and quantitative data on the cultural factors affecting inclusivity on AI-related teams across industry, civil society organizations, nonprofits, and academia.

Jeffrey Brown has been an Assistant Professor of Psychology at Minnesota State University, Mankato. He has led both intramural and extramural grant-funded research projects focusing on mental health in Black children and families and on discrimination against LGBTQ+ individuals and people of color. Jeffrey has published several peer-reviewed articles in developmental and school psychology. He has presented at top international conferences in his field, including those of the American Psychological Association, the Society for Research in Child Development, the American Educational Research Association, and the National Association of School Psychologists. Jeffrey received a Ph.D. in school psychology from Tulane University and a B.A. in psychology from Yale University.

“I’m very excited to join the Partnership on AI (PAI), an organization whose mission intersects with mine as a psychology researcher who focuses on the effects of racism and discrimination in minoritized communities,” Jeffrey Brown, Diversity & Inclusion Research Fellow, explained. “Particularly, I see more and more conversations around the role of AI in mental health and education, but those with a seat at the table are typically homogeneous and not representative of those the technology impacts the most. I look forward to researching this disconnect, empowered by a place like PAI that encourages asking these tough questions in an interdisciplinary way.”

DeepMind gifted £250K to the Partnership on AI to support this critical research and was involved in the search for and screening of fellowship candidates. DeepMind shares our belief that understanding diversity and inclusion in AI is an essential precursor to AI benefiting people and society.

“At DeepMind, we recognise that the field of AI requires interdisciplinary discussion and diverse perspectives to avoid creating systems that perpetuate social inequities,” said Thore Graepel, Research Group Lead at DeepMind. “We are excited to learn from Jeffrey’s future research and gain a deeper understanding of this critical area.”

Equity and social justice are at the core of the research questions we work on. FTA at PAI tackles the challenge of translating the abstract goals of greater fairness, transparency, and accountability into best practices through a combination of rigorous research and multi-stakeholder processes. Our team of researchers is highly interdisciplinary, including expertise in statistics, computer science, social sciences, and law. We seek to leverage PAI’s unique position at the intersection of industry, civil society, and academia to identify and address impediments to the adoption of more responsible AI practices.

“A key motivation for our work in Fairness, Transparency, and Accountability at PAI is the potential for algorithmic systems to make important decisions that affect people’s lives, often in high stakes contexts,” said Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI. “The fact that such algorithmic tools are developed by teams that lack diversity is highly concerning given the potential for these tools to reflect and entrench systemic inequality. This project seeks to understand what factors contribute to this lack of diversity and what best practices might be to create more diverse and inclusive AI teams.”

We view this work as paramount to the advancement of responsible AI. If we do not sufficiently address diversity and inclusion on the teams developing the technology, we risk compounding the existing economic and social disparities experienced by women and minorities. The goal of this fellowship project is to learn both from the lived experiences of women and minorities in the field of AI and from those leading DE&I initiatives within organizations, and to share knowledge across the field about the key challenges and the solutions that work.