Introduction

The AI field is rife with examples of harm meted out to communities of color. Demographically, the folks who design these AI technologies do not represent the communities that the technologies affect. While part of the challenge is the candidate pipeline, that is, recruiting diverse candidates to tech organizations, an overlooked challenge is attrition: why diverse workers leave AI teams or organizations once they are there. The following study is the first of its kind to investigate some of the reasons minoritized folks leave AI teams and organizations, what this has to do with the culture of those teams and organizations, and what can be done to reduce this attrition and make these teams and organizations more inclusive. When these organizations become more inclusive, the result is AI designed with a more representative population in mind.

Why Study Attrition of Minoritized Workers in AI?

1. AI has repeatedly exhibited bias in several areas, and part of the problem could be the predominantly White and male makeup of those working in the field.

AI algorithms have long faced criticism for propagating the biases of the people who design them. Joy Buolamwini and Timnit Gebru's seminal work on bias in facial recognition showed that commercial facial recognition algorithms were most likely to misclassify the faces of Black women (Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR). More recently, work by Dora Zhao and colleagues showed that image captioning algorithms exhibit bias when captioning images of lighter-skinned compared to darker-skinned individuals, and that this bias appeared even starker than in older classification models (Zhao, D., Wang, A., & Russakovsky, O. (2021). Understanding and evaluating racial biases in image captioning. arXiv preprint arXiv:2106.08503).

Beyond the harmful applications of this technology in areas such as surveillance (Feldstein, S. (2019). The Global Expansion of AI Surveillance. Carnegie Endowment for International Peace. Retrieved from https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847), biased financial practices (Firth, N. (2019). Apple Card is being investigated over claims it gives women lower credit limits. MIT Technology Review. Retrieved 23 November 2021, from https://www.technologyreview.com/2019/11/11/131983/apple-card-is-being-investigated-over-claims-it-gives-women-lower-credit-limits/), and hiring, the people designing these algorithms are homogeneous, especially in terms of race and gender (Howard, A., & Isbell, C. (2020). Diversity in AI: The Invisible Men and Women. MIT Sloan Management Review. Retrieved from https://sloanreview.mit.edu/article/diversity-in-ai-the-invisible-men-and-women/). Given the increasingly pervasive role of AI technologies in the lives of people around the world, particularly marginalized people, there has rightfully been increasing concern about whether the people designing this technology represent the people being affected by it (AI Now Institute. (2019). Discriminating Systems: Gender, Race, and Power in AI. Retrieved 23 November 2021).

2. AI algorithms have measurable, far-reaching effects on how we conduct day-to-day business, including how we recruit people to companies, how we carry out educational practices, and how we use technology in many other facets of our daily lives.

The pandemic has accelerated the widespread integration of AI technology into our lives, with such systems seeing use in remote education (Swauger, S. (2020). Opinion: What's worse than remote school? Remote test-taking with AI proctors. NBC News. Retrieved from https://www.nbcnews.com/think/opinion/remote-testing-monitored-ai-failing-students-forced-undergo-it-ncna1246769) and remote work environments (Belani, G. (2021). AI Paving the Way for Remote Work. IEEE Computer Society. Retrieved 26 July 2021, from https://www.computer.org/publications/tech-news/trends/remote-working-easier-with-ai). AI algorithms have also been increasingly used in the banking industry and other high-impact decision-making contexts.

As the adoption of AI technologies has accelerated over the past decade, it is both urgent and crucial to ensure that the people designing these technologies do not create biased products that inflict harm upon minoritized communities.

3. The specific reasons minoritized folks leave AI teams require deeper investigation.

People on AI teams, like those in the tech industry more generally, are predominantly White and male. Although organizations have often blamed recruitment and the "pipeline problem" for the lack of diversity on their teams, this does not fully explain why people with minoritized identities so often leave their teams once they are already working for these organizations. The result is a cyclic process: teams that are not diverse fail to attract and retain minoritized individuals, and so they never become more diverse and accepting places.

A report from the Kapor Center (Scott, A., Kapor Klein, F., & Onovakpuri, U. (2017). Tech Leavers Study. Retrieved 24 November 2021, from https://www.kaporcenter.org/wp-content/uploads/2017/08/TechLeavers2017.pdf) outlined some of these issues in the tech ecosystem more broadly, citing unfair and opaque promotion practices, incidents of bias, and a lack of clear paths for growth and promotion as reasons why minoritized folks leave the tech field at higher rates than those belonging to dominant groups. This report probes these issues more deeply, investigating what specifically about the culture of AI teams may contribute to people staying or leaving. Finally, it examines potential ways to make these teams more inclusive.