AI and Job Quality

Insights from Frontline Workers

PAI Staff

Executive Summary

Based on an international study of on-the-job experiences with AI, this report draws from workers’ insights to point the way toward a better future for workplace AI. In addition to identifying common themes among workers’ stories, it provides guidance for key stakeholders who want to make a positive impact.

Across industries and around the world, AI is changing work. In the coming years, this rapidly advancing technology has the potential to fundamentally reshape humanity’s relationship with labor. As highlighted by previous Partnership on AI (PAI) research, however, the development and deployment of workplace AI often lacks input from an essential group of experts: the people who directly interact with these systems in their jobs.

Bringing the perspectives of workers into this conversation is both a moral and a pragmatic imperative. Although workplace AI directly affects them, workers rarely have influence over its creation or decisions about its implementation. This neglect raises clear concerns about unforeseen or overlooked negative impacts on workers. It also undermines the optimal use of AI from a corporate perspective.

This PAI report, based on an international study of on-the-job experiences with AI, seeks to address this gap. Through journals and interviews, workers in India, sub-Saharan Africa, and the United States shared their stories about workplace AI. From their reflections, PAI identified five common themes:

  1. Executive and managerial decisions shape AI’s impacts on workers, for better and worse. This starts with decisions about business models and operating models, continues through technology acquisitions and implementations, and finally manifests in direct impacts on workers.
  2. Workers have a genuine appreciation for some aspects of AI in their work and how it helps them do their jobs. What they spotlight points the way to more mutually beneficial approaches to workplace AI.
  3. Workplace AI’s harms are not new or novel — they are repetitions or extensions of harms from earlier technologies and, as such, should be possible to anticipate, mitigate, and eliminate.
  4. Current implementations of AI often serve to reduce workers’ ability to exercise their human skills and talents. Skills like judgment, empathy, and creativity are heavily constrained in these implementations. To the extent that the future of AI is intended to increase humans’ ability to use these talents, the present of AI is sending many workers in the opposite direction.
  5. Empowering workers early in AI development and implementation increases the opportunities to attain the aforementioned benefits and avoid the harms. Workers’ deep experience in their own roles means they should be treated as subject-matter experts throughout the design and implementation process.

In addition, PAI drew from these themes to offer opportunities for impact for the major stakeholders in this space:

  1. AI-implementing companies, who can commit to AI deployments that do not decrease employee job quality.
  2. AI-creating companies, who can center worker well-being and participation in their values, practices, and product designs.
  3. Workers, unions, and worker organizers, who can work to influence and participate in decisions about technology purchases and implementations.
  4. Policymakers, who can shape the environments in which AI products are developed, sold, and implemented.
  5. Investors, who can account for the downside risks posed by practices harmful to workers and the potential value created by worker-friendly technologies.

The actions of each of these groups have the potential to both increase the prosperity enabled by AI technologies and share it more broadly. Together, we can steer AI in a direction that ensures it will benefit workers and society as a whole.

AI and Job Quality

Executive Summary

Introduction

The need for workers’ perspectives on workplace AI

The contributions of this report

Our Approach

Key research questions

Research methods

Site selection

Who we learned from

Participant recruitment

Major Themes and Findings

Theme 1: Executive and managerial decisions shape AI’s impacts on workers, for better and worse

Theme 2: Workers appreciate how some uses of AI have positively changed their jobs

Theme 3: Workplace AI harms repeat, continue, or intensify known possible harms from earlier technologies

Theme 4: Current implementations of AI in work are reducing workers’ opportunities for autonomy, judgment, empathy, and creativity

Theme 5: Empowering workers early in AI development and implementation increases opportunities to implement AI that benefits workers as well as their employers

Opportunities for Impact

Stakeholder Group 1: AI-implementing companies

Stakeholder Group 2: AI-creating companies

Stakeholder Group 3: Workers, unions, and worker organizers

Stakeholder Group 4: Policymakers

Stakeholder Group 5: Investors

Conclusion

Acknowledgements

Appendix 1: Detailed Site and Technology Descriptions

Appendix 2: Research Methods

Sources Cited

  1. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf.
  2. Michael Chui et al., “Global AI Survey 2021,” Survey (McKinsey & Company, December 8, 2021), https://ceros.mckinsey.com/global-ai-survey-2020-a-desktop-3-1/p/1
  3. Jacques Bughin et al., “Artificial Intelligence: The Next Digital Frontier?,” Discussion Paper (McKinsey Global Institute, June 2017), https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx
  4. Partnership on AI, “Redesigning AI for Shared Prosperity: An Agenda” (Partnership on AI, May 2021), https://partnershiponai.org/paper/redesigning-ai-agenda/
  5. David Autor, David A. Mindell, and Elisabeth B. Reynolds, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines (The MIT Press, 2022), https://doi.org/10.7551/mitpress/14109.001.0001
  6. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  7. Lant Pritchett, “The Future of Jobs Is Facing One, Maybe Two, of the Biggest Price Distortions Ever,” Middle East Development Journal 12, no. 1 (January 2, 2020): 131–56, https://doi.org/10.1080/17938120.2020.1714347
  8. James K. Harter, Frank L. Schmidt, and Theodore L. Hayes, “Business-Unit-Level Relationship between Employee Satisfaction, Employee Engagement, and Business Outcomes: A Meta-Analysis,” Journal of Applied Psychology 87, no. 2 (2002): 268–79, https://doi.org/10.1037/0021-9010.87.2.268
  9. Kaoru Ishikawa, What Is Total Quality Control? The Japanese Way, trans. David John Lu (Englewood Cliffs, N.J.: Prentice-Hall, 1985)
  10. Gary P. Pisano, The Development Factory: Unlocking the Potential of Process Innovation (Harvard Business Press, 1997)
  11. Terje Slåtten and Mehmet Mehmetoglu, “Antecedents and Effects of Engaged Frontline Employees: A Study from the Hospitality Industry,” in New Perspectives in Employee Engagement in Human Resources (Emerald Group Publishing, 2015)
  12. Kayhan Tajeddini, Emma Martin, and Levent Altinay, “The Importance of Human-Related Factors on Service Innovation and Performance,” International Journal of Hospitality Management 85 (February 1, 2020): 102431, https://doi.org/10.1016/j.ijhm.2019.102431
  13. Sergio Fernandez and David W. Pitts, “Understanding Employee Motivation to Innovate: Evidence from Front Line Employees in United States Federal Agencies,” Australian Journal of Public Administration 70, no. 2 (2011): 202–22, https://doi.org/10.1111/j.1467-8500.2011.00726.x
  14. Edward P. Lazear, “Compensation and Incentives in the Workplace,” Journal of Economic Perspectives 32, no. 3 (August 2018): 195–214, https://doi.org/10.1257/jep.32.3.195
  15. Joan Robinson, The Economics of Imperfect Competition (Springer, 1969)
  16. José Azar, Ioana Marinescu, and Marshall I. Steinbaum, “Labor Market Concentration,” Working Paper, Working Paper Series (National Bureau of Economic Research, December 2017), https://doi.org/10.3386/w24147
  17. Alan Manning, Monopsony in Motion: Imperfect Competition in Labor Markets, Monopsony in Motion (Princeton University Press, 2013), https://doi.org/10.1515/9781400850679
  18. Caitlin Lustig et al., “Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms That Interpret, Decide, and Manage,” in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16 (New York, NY, USA: Association for Computing Machinery, 2016), 1057–62, https://doi.org/10.1145/2851581.2886426
  19. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  20. Matt Scherer, “Warning: Bossware May Be Hazardous to Your Health” (Center for Democracy & Technology, July 2021), https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf
  21. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  22. Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Explainer (Data and Society, February 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf
  23. Daniel Schneider and Kristen Harknett, “Schedule Instability and Unpredictability and Worker and Family Health and Wellbeing,” Working Paper (Washington Center for Equitable Growth, September 2016), http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
  24. V.B. Dubal. “Wage Slave or Entrepreneur?: Contesting the Dualism of Legal Worker Identities.” California Law Review 105, no. 1 (2017): 65–123, https://www.jstor.org/stable/24915689
  25. Ramiro Albrieu, ed., Cracking the Future of Work: Automation and Labor Platforms in the Global South, 2021, https://fowigs.net/wp-content/uploads/2021/10/Cracking-the-future-of-work.-Automation-and-labor-platforms-in-the-Global-South-FOWIGS.pdf
  26. Phoebe V. Moore, “OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces,” Discussion Paper (European Agency for Safety and Health at Work, 2019), https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces
  27. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
  28. Ifeoma Ajunwa, “The ‘Black Box’ at Work,” Big Data & Society 7, no. 2 (July 1, 2020): 2053951720966181, https://doi.org/10.1177/2053951720938093
  29. Isabel Ebert, Isabelle Wildhaber, and Jeremias Adams-Prassl, “Big Data in the Workplace: Privacy Due Diligence as a Human Rights-Based Approach to Employee Privacy Protection,” Big Data & Society 8, no. 1 (January 1, 2021): 20539517211013052, https://doi.org/10.1177/20539517211013051
  30. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  31. Partnership on AI, “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” (Partnership on AI, August 2020), https://partnershiponai.org/paper/workforce-wellbeing/
  32. Karen Hao, “Artificial Intelligence Is Creating a New Colonial World Order,” MIT Technology Review, accessed July 24, 2022, https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/
  33. Shakir Mohamed, Marie-Therese Png, and William Isaac, “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence,” Philosophy & Technology 33 (December 1, 2020), https://doi.org/10.1007/s13347-020-00405-8
  34. Aarathi Krishnan et al., “Decolonial AI Manyfesto,” https://manyfesto.ai/
  35. OECD.AI (2021), powered by EC/OECD (2021). “Database of National AI Policies.” https://oecd.ai/en/dashboards
  36. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  37. Adapted from Qualtrics’ employee lifecycle model, “Employee Lifecycle: The 7 Stages Every Employer Must Understand and Improve,” Qualtrics, https://www.qualtrics.com/experience-management/employee/employee-lifecycle/
  38. Mayank Kumar Golpelwar, Global Call Center Employees in India: Work and Life between Globalization and Tradition (Springer, 2015)
  39. Hye Jin Rho, Shawn Fremstad, and Hayley Brown, “A Basic Demographic Profile of Workers in Frontline Industries” (Center for Economic and Policy Research, April 2020), https://cepr.net/wp-content/uploads/2020/04/2020-04-Frontline-Workers.pdf
  40. U.S. Bureau of Labor Statistics. “All Employees, Warehousing and Storage.” FRED, Federal Reserve Bank of St. Louis, July 2022. https://fred.stlouisfed.org/series/CES4349300001
  41. Lee Rainie et al., “AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns” (Pew Research Center, March 2022), https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2022/03/PS_2022.03.17_AI-HE_REPORT.pdf
  42. James Manyika et al., “Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages” (McKinsey Global Institute, November 28, 2017), https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
  43. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  44. International Labour Office. “Women and Men in the Informal Economy: A Statistical Picture (Third Edition).” International Labour Office, 2018. http://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/publication/wcms_626831.pdf
  45. International Labour Office. “Women and Men in the Informal Economy: A Statistical Picture (Third Edition).” International Labour Office, 2018. http://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/publication/wcms_626831.pdf
  46. OECD, and International Labour Organization. “Tackling Vulnerability in the Informal Economy,” 2019. https://www.oecd-ilibrary.org/content/publication/939b7bcd-en
  47. James C. Scott, Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed, Yale Agrarian Studies (New Haven, Conn.: Yale Univ. Press, 2008)
  48. Reema Nanavaty, Expert interview with Reema Nanavaty, Director of Self Employed Women’s Association (SEWA), July 11, 2022
  49. Paul E. Spector, “Perceived Control by Employees: A Meta-Analysis of Studies Concerning Autonomy and Participation at Work,” Human Relations 39, no. 11 (November 1, 1986): 1005–16, https://doi.org/10.1177/001872678603901104
  50. Henry Ongori, “A Review of the Literature on Employee Turnover,” African Journal of Business Management 1, no. 3 (June 30, 2007): 049–054, https://academicjournals.org/article/article1380537420_Ongori.pdf
  51. See Virginia Doellgast and Sean O’Brady, “Making Call Center Jobs Better: The Relationship between Management Practices and Worker Stress,” June 1, 2020, https://ecommons.cornell.edu/handle/1813/74307 for additional detail and impacts of punitive managerial uses of monitoring technology in call centers, including increased worker stress
  52. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  53. Matt Scherer, “Warning: Bossware May Be Hazardous to Your Health” (Center for Democracy & Technology, July 2021), https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf
  54. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  55. Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Explainer (Data and Society, February 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf
  56. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  57. Human Impact Partners and Warehouse Worker Resource Center, “The Public Health Crisis Hidden in Amazon Warehouses,” January 2021, https://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon-Warehouses-HIP-WWRC-01-21.pdf
  58. V.B. Dubal. “Wage Slave or Entrepreneur?: Contesting the Dualism of Legal Worker Identities.” California Law Review 105, no. 1 (2017): 65–123, https://www.jstor.org/stable/24915689
  59. Ramiro Albrieu, ed., Cracking the Future of Work: Automation and Labor Platforms in the Global South, 2021, https://fowigs.net/wp-content/uploads/2021/10/Cracking-the-future-of-work.-Automation-and-labor-platforms-in-the-Global-South-FOWIGS.pdf
  60. Daniel Schneider and Kristen Harknett, “Schedule Instability and Unpredictability and Worker and Family Health and Wellbeing,” Working Paper (Washington Center for Equitable Growth, September 2016), http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
  61. Arvind Narayanan, “How to Recognize AI Snake Oil,” https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
  62. Frederike Kaltheuner, ed., Fake AI (Meatspace Press, 2021), https://fakeaibook.com
  63. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  64. Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf
  65. Alessandro Delfanti and Bronwyn Frey, “Humanly Extended Automation or the Future of Work Seen through Amazon Patents,” Science, Technology, & Human Values 46, no. 3 (May 1, 2021): 655–82, https://doi.org/10.1177/0162243920943665
  66. Phoebe V. Moore, “OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces,” Discussion Paper (European Agency for Safety and Health at Work, 2019), https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces
  67. Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf
  68. Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker Technology Rights” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf
  69. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  70. Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?,” Technological Forecasting and Social Change 114 (January 1, 2017): 254–80, https://doi.org/10.1016/j.techfore.2016.08.019
  71. “These Are the Top 10 Job Skills of Tomorrow – and How Long It Takes to Learn Them,” World Economic Forum, https://www.weforum.org/agenda/2020/10/top-10-work-skills-of-tomorrow-how-long-it-takes-to-learn-them/
  72. Daniel Susskind, “Technological Unemployment,” in The Oxford Handbook of AI Governance, ed. Justin Bullock et al. (Oxford University Press), https://doi.org/10.1093/oxfordhb/9780197579329.013.42
  73. Christopher Mims, “Self-Driving Cars Could Be Decades Away, No Matter What Elon Musk Said,” WSJ, https://www.wsj.com/articles/self-driving-cars-could-be-decades-away-no-matter-what-elon-musk-said-11622865615
  74. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  75. Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” January 11, 2022, https://doi.org/10.48550/arXiv.2201.04200
  76. World Economic Forum. “Positive AI Economic Futures.” Insight Report. World Economic Forum, November 2021. https://www.weforum.org/reports/positive-ai-economic-futures/
  77. Nithya Sambasivan and Rajesh Veeraraghavan, “The Deskilling of Domain Expertise in AI Development,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22 (New York, NY, USA: Association for Computing Machinery, 2022), 1–14, https://doi.org/10.1145/3491102.3517578
  78. Sabrina Genz, Lutz Bellmann, and Britta Matthes, “Do German Works Councils Counter or Foster the Implementation of Digital Technologies?,” Jahrbücher Für Nationalökonomie Und Statistik 239, no. 3 (June 1, 2019): 523–64, https://doi.org/10.1515/jbnst-2017-0160
  79. Alan G. Robinson and Dean M. Schroeder, “The Role of Front-Line Ideas in Lean Performance Improvement,” Quality Management Journal 16, no. 4 (January 1, 2009): 27–40, https://doi.org/10.1080/10686967.2009.11918248
  80. Jeffrey K. Liker, The Toyota Way: 14 Management Principles From the World’s Greatest Manufacturer (McGraw Hill Professional, 2003)
  81. Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production (CRC Press, 1988)
  82. Kayhan Tajeddini, Emma Martin, and Levent Altinay, “The Importance of Human-Related Factors on Service Innovation and Performance,” International Journal of Hospitality Management 85 (February 1, 2020): 102431, https://doi.org/10.1016/j.ijhm.2019.102431
  83. Katherine C. Kellogg, Mark Sendak, and Suresh Balu, “AI on the Front Lines,” MIT Sloan Management Review, May 4, 2022, https://sloanreview.mit.edu/article/ai-on-the-front-lines/
  84. Zeynep Ton, “The Good Jobs Solution,” Harvard Business Review, 2017, 32. https://goodjobsinstitute.org/wp-content/uploads/2018/03/Good-Jobs-Solution-Full-Report.pdf
  85. Abigail Gilbert et al., “Case for Importance: Understanding the Impacts of Technology Adoption on ‘Good Work’” (Institute for the Future of Work, May 2022), https://uploads-ssl.webflow.com/5f57d40eb1c2ef22d8a8ca7e/62a72d3439edd66ed6f79654_IFOW_Case%20for%20Importance.pdf
  86. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  87. Julian Posada, “The Future of Work Is Here: Toward a Comprehensive Approach to Artificial Intelligence and Labour,” Ethics of AI in Context, 2020, http://arxiv.org/abs/2007.05843
  88. Jeffrey Brown, “The Role of Attrition in AI’s ‘Diversity Problem’” (Partnership on AI, April 2021), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/04/PAI_researchpaper_aftertheoffer.pdf
  89. Tina M Park, “Making AI Inclusive: 4 Guiding Principles for Ethical Engagement” (Partnership on AI, July 2022), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/07/PAI_whitepaper_making-ai-inclusive.pdf
  90. Fabio Urbina et al., “Dual Use of Artificial-Intelligence-Powered Drug Discovery,” Nature Machine Intelligence 4, no. 3 (March 2022): 189–91, https://doi.org/10.1038/s42256-022-00465-9
  91. Aarathi Krishnan et al., “Decolonial AI Manyfesto,” accessed July 24, 2022, https://manyfesto.ai/
  92. Lama Nachman, “Beyond the Automation-Only Approach,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/beyond-the-automation-only-approach/
  93. Christina Colclough, “Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table,” in Digital Work in the Planetary Market, International Development Research Centre Series (MIT Press, 2022), https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/61034/IDL-61034.pdf
  94. Christina Colclough, “When Algorithms Hire and Fire,” International Union Rights 25, no. 3 (2018): 6–7. https://muse.jhu.edu/article/838277/summary
  95. Brishen Rogers, “The Law and Political Economy of Workplace Technological Change,” Harvard Civil Rights-Civil Liberties Law Review 55 (2020): 531
  96. Wilneida Negrón, “Little Tech Is Coming for Workers” (Coworker.org, 2021), https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf
  97. Jeremias Adams-Prassl, “What If Your Boss Was an Algorithm? Economic Incentives, Legal Challenges, and the Rise of Artificial Intelligence at Work,” Comparative Labor Law & Policy Journal 41 (2019): 123
  98. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  99. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  100. Fekitamoeloa ‘Utoikamanu, “Closing the Technology Gap in Least Developed Countries,” United Nations, accessed July 25, 2022, https://www.un.org/en/chronicle/article/closing-technology-gap-least-developed-countries
  101. Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker Technology Rights” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf
  102. Allison Levitsky, “California Might Require Employers to Disclose Workplace Surveillance,” Protocol, April 21, 2022, https://www.protocol.com/bulletins/ab-1651-california-workplace-surveillance
  103. “The EU Artificial Intelligence Act,” The AI Act, September 7, 2021, https://artificialintelligenceact.eu/
  104. Daron Acemoglu, Andrea Manera, and Pascual Restrepo, “Does the US Tax Code Favor Automation?,” Working Paper, Working Paper Series (National Bureau of Economic Research, April 2020), https://doi.org/10.3386/w27052
  105. Emmanuel Moss et al., “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest” (Data and Society, June 2021), https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf
  106. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  107. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  108. Business Roundtable, “Statement on the Purpose of a Corporation,” July 2021, https://s3.amazonaws.com/brt.org/BRT-StatementonthePurposeofaCorporationJuly2021.pdf
  109. Larry Fink, “Larry Fink’s Annual 2022 Letter to CEOs,” accessed May 27, 2022, https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter
  110. Katanga Johnson, “U.S. SEC Chair Provides More Detail on New Disclosure Rules, Treasury Market Reform | Reuters,” https://www.reuters.com/business/sustainable-business/sec-considers-disclosure-mandate-range-climate-metrics-2021-06-23/
  111. “Your Guide to Amazon’s 2022 Shareholder Event,” United for Respect, accessed May 27, 2022, https://united4respect.org/amazon-shareholders/

Making AI Inclusive: 4 Guiding Principles for Ethical Engagement

Tina Park

Introduction

While the concept of “human-centered design” is hardly new to the technology sector, recent years have seen growing efforts to build inclusive artificial intelligence (AI) and machine learning (ML) products. Broadly, inclusive AI/ML refers to algorithmic systems that are created with the active engagement of and input from people who are not on AI/ML development teams. This includes both end users of the systems and impacted non-users. (“Impacted non-user” refers to people who are impacted by the deployment of an AI/ML system but are not its direct user or customer. For example, when an algorithm determined the A-level grades of students in the United Kingdom in 2020, the “user” of the algorithmic system was Ofqual, the official exam regulator in England, and the students were “impacted non-users.”) To collect this input, practitioners are increasingly turning to engagement practices like user experience (UX) research and participatory design.

Amid rising awareness of structural inequalities in our society, embracing inclusive research and design principles helps signal a commitment to equitable practices. As many proponents have pointed out, it also makes for good business: Understanding the needs of a more diverse set of people expands the market for a given product or service. Once engaged, these people can then further improve an AI/ML product, identifying issues like bias in algorithmic systems.

Despite these benefits, however, there remain significant challenges to greater adoption of inclusive development in the AI/ML field. There are also important opportunities. For AI practitioners, AI ethics researchers, and others interested in learning more about responsible AI, this Partnership on AI (PAI) white paper provides guidance to help better understand and overcome the challenges related to engaging stakeholders in AI/ML development.

Ambiguities around the meaning and goals of “inclusion” present one of the central challenges to AI/ML inclusion efforts. To make the changes needed for a more inclusive AI that centers equity, the field must first find agreement on foundational premises regarding inclusion. Recognizing this, this white paper provides four guiding principles for ethical engagement grounded in best practices:

  1. All participation is a form of labor that should be recognized
  2. Stakeholder engagement must address inherent power asymmetries
  3. Inclusion and participation can be integrated across all stages of the development lifecycle
  4. Inclusion and participation must be integrated into the application of other responsible AI principles

To realize ethical participatory engagement in practice, this white paper also offers three recommendations aligned with these principles for building inclusive AI:

  1. Allocate time and resources to promote inclusive development
  2. Adopt inclusive strategies before development begins
  3. Train towards an integrated understanding of ethics

This white paper’s insights are derived from the research study “Towards An Inclusive AI: Challenges and Opportunities for Public Engagement in AI Development.” That study drew upon discussions with industry experts, a multidisciplinary review of existing research on stakeholder and public engagement, and nearly 70 interviews with AI practitioners and researchers, as well as data scientists, UX researchers, and technologists working on AI and ML projects, over a third of whom were based in areas outside of the US, EU, UK, or Canada. Supplemental interviews with social equity and Diversity, Equity, and Inclusion (DEI) advocates contributed to the development of recommendations for individual practitioners, business team leaders, and the field of AI and ML more broadly.

This white paper does not provide a step-by-step guide for implementing specific participatory practices. It is intended to renew discussions on how to integrate a wider range of insights and experiences into AI/ML technologies, including those of both users and the people impacted (either directly or indirectly) by these technologies. Such conversations — between individuals, inside teams, and within organizations — must be had to spur the changes needed to develop truly inclusive AI.

Making AI Inclusive: 4 Guiding Principles for Ethical Engagement

Introduction

Guiding Principles for Ethical Participatory Engagement

Principle 1: All Participation Is a Form of Labor That Should Be Recognized

Principle 2: Stakeholder Engagement Must Address Inherent Power Asymmetries

Principle 3: Inclusion and Participation Can Be Integrated Across All Stages of the Development Lifecycle

Principle 4: Inclusion and Participation Must Be Integrated Into the Application of Other Responsible AI Principles

Recommendations for Ethical Engagement in Practice

Recommendation 1: Allocate Time and Resources to Promote Inclusive Development

Recommendation 2: Adopt Inclusive Development Strategies Before Development Begins

Recommendation 3: Train Towards an Integrated Understanding of Ethics

Conclusion

Acknowledgements

Sources Cited

  1. Jean-Baptiste, A. (2020). Building for Everyone: Expand Your Market with Design Practices from Google’s Product Inclusion Team. John Wiley and Sons, Inc.
  2. Romao, M. (2019, June 27). “A vision for AI: Innovative, Trusted and Inclusive.” Policy@Intel. https://community.intel.com/t5/Blogs/Intel/Policy-Intel/A-vision-for-AI-Innovative-Trusted-and-Inclusive/post/1333103
  3. Zhou, A., Madras, D., Raji, D., Milli, S., Kulynych, B. and Zemel, R. (2020, July 17). “Participatory Approaches to Machine Learning.” (Workshop). International Conference on Machine Learning 2020.
  4. Lewis, J. E., Abdilla, A., Arista, N., Baker, K., Benesiinaabandan, S., Brown, M., ... and Whaanga, H. (2020). Indigenous protocol and artificial intelligence position paper. Indigenous AI. https://www.indigenous-ai.net/position-paper
  5. Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
  6. Hamraie, A., and Fritsch, K. (2019). “Crip technoscience manifesto.” Catalyst: Feminism, Theory, Technoscience, 5(1), 1-33. https://catalystjournal.org/index.php/catalyst/article/view/29607
  7. Taylor, L. (2017). “What is data justice? The case for connecting digital rights and freedoms globally.” Big Data and Society, 4(2), 2053951717736335. https://doi.org/10.1177/2053951717736335
  8. Benjamin, Ruha. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology
  9. Hanna, A., Denton, E., Smart, A., and Smith-Loud, J. (2020). “Towards a critical race methodology in algorithmic fairness.” In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 501-512). https://arxiv.org/abs/1912.03593
  10. Sloane, M., Moss, E., Awomolo, O. and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv. https://arxiv.org/abs/2007.02423
  11. Cifor, M., Garcia, P., Cowan, T.L., Rault, J., Sutherland, T., Chan, A., Rode, J., Hoffmann, A.L., Salehi, N. and Nakamura, L. (2019). “Feminist Data Manifest-No.” Feminist Data Manifest-No. Retrieved October 1, 2020 from https://www.manifestno.com/home
  12. Harrington, C., Erete, S. and Piper, A.M. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” In Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1–25. https://doi.org/10.1145/3359318
  13. Freimuth V.S., Quinn, S.C., Thomas, S.B., Cole, G., Zook, E and Duncan, T. (2001). “African Americans’ Views on Research and the Tuskegee Syphilis Study.” Social Science and Medicine 52(5):797–808. https://doi.org/10.1016/S0277-9536(00)00178-7
  14. George, S., Duran, N. and Norris, K. (2014). “A Systematic Review of Barriers and Facilitators to Minority Research Participation Among African Americans, Latinos, Asian Americans, and Pacific Islanders.” American Journal of Public Health 104(2):e16–31. https://doi.org/10.2105/AJPH.2013.301706
  15. Barabas, C., Doyle, C., Rubinovitz, J.B., and Dinakar, K. (2020). “Studying Up: Reorienting the Study of Algorithmic Fairness around Issues of Power.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 167-176).
  16. Harrington, C., Erete, S. and Piper, A.M.. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” In Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1–25. https://dl.acm.org/doi/10.1145/3359318.
  17. Chan, A., Okolo, C. T., Terner, Z., and Wang, A. (2021). “The Limits of Global Inclusion in AI Development.” arXiv. https://arxiv.org/abs/2102.01265
  18. Sanders, E. B. N. (2002). “From user-centered to participatory design approaches.” In Design and the social sciences (pp. 18-25). CRC Press. https://www.taylorfrancis.com/chapters/edit/10.1201/9780203301302-8/user-centered-participatory-design-approaches-elizabeth-sanders
  19. Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., ... and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf
  20. Zdanowska, S., and Taylor, A. S. (2022). “A study of UX practitioners roles in designing real-world, enterprise ML systems.” In CHI Conference on Human Factors in Computing Systems (pp. 1-15). https://dl.acm.org/doi/abs/10.1145/3491102.3517607
  21. Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., ... and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf
  22. Saulnier, L., Karamcheti, S., Laurençon, H., Tronchon, L., Wang, T., Sanh, V., Singh, A., Pistilli, G., Luccioni, S., Jernite, Y., Mitchell, M. and Kiela, D. (2022). “Putting Ethical Principles at the Core of the Research Lifecycle.” Hugging Face Blog. Retrieved from https://huggingface.co/blog/ethical-charter-multimodal
  23. Ada Lovelace Institute. (2021). “Participatory data stewardship: A framework for involving people in the use of data.” Ada Lovelace Institute. https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/
  24. Delgado, F., Yang, S., Madaio, M., and Yang, Q. (2021). “Stakeholder Participation in AI: Beyond ‘Add Diverse Stakeholders and Stir.’” arXiv. https://arxiv.org/pdf/2111.01122.pdf
  25. Sloane, M., Moss, E., Awomolo, O. and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv. https://arxiv.org/abs/2007.02423

After the Offer: The Role of Attrition in AI’s ‘Diversity Problem’

Jeffrey Brown

Executive Summary

As a field, AI struggles to retain team members from diverse backgrounds. Given the far-reaching effects of algorithmic systems and the documented harms to marginalized communities, the fact that these communities are not represented on AI teams is particularly troubling. Why is this such a widespread phenomenon, and what can be done to close the gap? This research paper, “After the Offer: The Role of Attrition in AI’s ‘Diversity Problem,’” seeks to answer these questions, providing four recommendations for how organizations can make the AI field more inclusive.

Amid heightened attention to society-wide racial and social injustice, organizations in the AI space have been urged to investigate the harmful effects that AI has had on marginalized populations. It’s an issue that engineers, researchers, project managers, and various leaders in both tech companies and civil society organizations have devoted significant time and resources to in recent years. In examining the effects of AI, organizations must consider who exactly has been designing these technologies.

Diversity reports have revealed that the people working at the organizations that develop and deploy AI lack diversity across several dimensions. While organizations have blamed pipeline problems in the past, research has increasingly shown that once workers belonging to minoritized identities get hired in these spaces, systemic difficulties affect their experiences in ways that their peers from dominant groups do not have to worry about.

Attrition in the tech industry is a problem that disproportionately affects minoritized workers. In AI, where technologies already have a disproportionately negative impact on these communities, this is especially troublesome.

Minoritized Workers

This report uses minoritized workers as an umbrella term to refer to people whose identities (in categories such as race, ethnicity, gender, or ability) have been historically marginalized by those in dominant social groups. The minoritized workers in this study include people who identified as minoritized within the identity categories of race and ethnicity, gender identity, sexual orientation, ability, and immigration status. Because this study was international in scope, it is important to note that these categories are relative to their social context.

We are left wondering: What leads these folks to leave their teams, organizations, or even the AI field more broadly? What about the AI field in particular influences these people to stay or leave? And what can organizations do to stem this attrition and make their environments more inclusive?

The current study uses interviews with folks belonging to minoritized identities across the AI field, managers, and DEI (diversity, equity, and inclusion) leaders in tech to gather rich information about which aspects of an organization’s culture promote inclusion or contribute to attrition. Themes that emerged during these interviews formed 3 key takeaways:

  1. Diversity makes for better team climates
  2. Systemic supports are difficult but necessary to undo the current harms to minoritized workers
  3. Individual efforts to change organizational culture fall disproportionately on minoritized folks who are usually not professionally rewarded for their efforts

In line with these takeaways, the study makes 4 recommendations about what can be done to make the AI field more inclusive for workers:

  1. Organizations must systemically support ERGs
  2. Organizations must intentionally diversify leadership and managers
  3. DEI trainings must be specific in order to be effective and be more connected to the content of AI work
  4. Organizations must interrogate their values as practiced and fundamentally alter them to include the perspectives of people who are not White, cis, or male

These takeaways and recommendations are explored in more depth below.

Key Takeaways

1. Diversity makes for better team climates

Across interviews, participants consistently expressed that managers who belonged to minoritized identities, or who took the time to learn about working with diverse identities, were more supportive of their needs and career goals. Such efforts reportedly resulted in teams that were more diverse, inclusive, and interdisciplinary, with a more positive team culture and climate. In these environments, workers belonging to minoritized identities thrived. A diversity of backgrounds and perspectives was particularly important for AI teams that needed to solve interdisciplinary problems.

Conversely, a recurring theme was the negative impact of work environments that were sexist or where participants experienced acts of prejudice such as microaggressions.

While collaborative or positive work environments were also a common theme, such environments did not in themselves negate predominant cultures which deprioritized “DEI-focused” work, work that was highly interdisciplinary, or work that did not serve the dominant group. Negative organizational cultures seemed to exacerbate experiences of prejudice or discrimination on AI teams.

2. Systemic supports are difficult but necessary to undo the current harms to minoritized workers

Participants belonging to minoritized identities said that they either left or intended to leave organizations that did not support their continued career growth or possessed values that did not align with their own. Consistent with this, participants described examples of their organizations not valuing the content of their work.

Participants also tied their desires to leave with instances of prejudice or discrimination, which may also be related to “toxic” work environments. Some participants reported instances of being tokenized or being subject to negative stereotypes about their identity groups, somewhat reflective of wider contexts in tech beyond AI.

Systemic supports include incentive structures that allow minoritized workers to succeed at every level, from the teams that they work with actively validating their experiences to their managers finding the best ways for them to deliver work products in accordance with both individual and institutional needs. Guidelines for promotion that recognize the barriers these workers face in environments mostly occupied by dominant group norms are another important support.

3. Individual efforts to change organizational culture fall disproportionately on minoritized folks who are usually not professionally rewarded for their efforts

Individuals discussed ways in which they tried to make their workplaces or teams more inclusive or otherwise sought to incorporate diverse perspectives into their work around AI. Participants sometimes had to contend with bias against DEI efforts, reporting that other workers in their organizations would dismiss their efforts as lacking rigor or focus on the product.

There were some institutional efforts to foster a more inclusive culture, most commonly DEI trainings. DEI trainings that were very specific to some groups (e.g., gender diverse folks, Black people) were reported as being the most effective. However, even when they were specific, DEI trainings seemed to be disconnected from some aspects of the workplace climate or the content of what teams were working on.

Participants who mentioned Employee Resource Groups (ERGs) uniformly praised them, discussing the huge positive impact they had on a personal level, forming the bases of their social support networks in their organizations and having a strong impact on their ability to integrate aspects of their identities or other “DEI topics” they were passionate about into their work.

Recommendations

1. Organizations must systemically support ERGs

Employees specifically named ERGs as one of their main sources of support, even in work environments that were otherwise toxic. Additionally, ERGs provided built-in mentorship for those who did not have ready access to mentors or whose supervisors had not done the work to understand the kinds of support needed for those of minoritized identities to thrive in predominantly White and male environments.

What makes this recommendation work?

Within these ERGs, there existed other grass-roots initiatives that supported workers, such as informal talking circles and networks of employees that essentially provided peer mentoring that participants found crucial to navigating White- and male-dominated spaces. The mentorship provided by ERGs was also essential when HR failed to provide systemic support for staff and instead prioritized protecting the organization.

What must be in place?

While participants uniformly praised ERGs, these groups required large amounts of time from staff members, which detracted from their work. Such groups also ran the risk of getting taken over by leadership and having their original mission derailed. Institutions should seek a balance between supporting these groups and giving them the freedom to organize in pursuit of their own best interests.

What won’t this solve?

ERGs will not necessarily make an organization’s AI or tech more inclusive. Rather, systematically supporting ERGs will provide more support and community for minoritized workers, which is meant to promote a more inclusive workplace in general.

2. Organizations must intentionally diversify leadership and managers

What makes this recommendation work?

Participants repeatedly pointed to managers and upper-level leaders who belonged to minoritized identities (especially racial ones) as important influences, changing policy that permeated through various levels of their organizations. A diverse workforce may also bring with it multiple perspectives, including those belonging to people from different disciplines who may be interested in working in the AI field due to the opportunity for interdisciplinary collaboration, research, and product development. Bringing in folks from various academic, professional, and technical backgrounds to solve problems is especially crucial for AI teams.

What must be in place?

There must be understanding about the reasons behind the lack of diversity and the “bigger picture” of how powerful groups more easily perpetuate power structures already in place. Participants spoke of managers who did not belong to minoritized identities themselves but who took the time to learn in depth about differences in power and privilege in the tech ecosystem, appreciating the diverse perspectives that workers brought. These managers, while not perfect, tended to take advocating for their reports very seriously, particularly female reports who often went overlooked.

What won’t this solve?

Intentionally diversifying leadership and managers will not automatically create a pipeline for diversity at the leadership level, nor will it automatically override institutional culture or policies that ignore DEI best practices.

3. DEI trainings must be specific in order to be effective and be more connected to the content of AI work

What makes this recommendation work?

Almost all participants reported that their organizations mandated some form of DEI training for all staff. These ranged widely, from very general trainings to trainings that addressed cultural competency for specific groups of people (e.g., participants reported trainings on anti-Black racism). Participants noted that the more specific trainings tended to be more impactful.

What must be in place?

Organizations must invest in employees who see the importance of inclusive values in AI research and product design. Participants pointed to the importance of managers who had an ability to foster inclusive team values, which was not something that HR could mandate.

What won’t this solve?

As several participants observed, DEI trainings will not uproot or counteract institutional stigmas against DEI. It would take sustained effort and deliberate alignment of values for an organization to emphasize DEI in its work.

4. Organizations must interrogate their values as practiced and fundamentally alter them to include the perspectives of people who are not White, cis, or male

What makes this recommendation work?

Participants frequently reported that a misalignment of values was a primary reason for leaving, or wanting to leave, their organizations. Participants in this sample discussed joining the AI field to create a positive impact while growing professionally, and they felt disappointed when their organizations did not prioritize these goals (despite listing them among their stated values).

What must be in place?

Participants found it frustrating when organizations stated that they valued diversity and then failed to live up to this value in hiring, promotion, and day-to-day operations, ignoring the voices of minoritized individuals. If diversity is truly a value, organizations may have to investigate their systems of norms and expectations, which are fundamentally male and Eurocentric and do not make space for those from diverse backgrounds. They then must take additional steps to consider how such systems influence their work in AI.

What won’t this solve?

Because a fundamental re-alignment like this is a comprehensive, long-term undertaking, it cannot satisfy the most immediate and urgent needs for reform, such as the need for more diversity in leadership and on teams. In the short term, organizations must work with DEI professionals to recognize how they are perpetuating potentially harmful norms of the dominant group and work to create policies that are more equitable.

After the Offer: The Role of Attrition in AI’s ‘Diversity Problem’

Executive Summary

Key Takeaways

Recommendations

Introduction

Why Study Attrition of Minoritized Workers in AI?

Background

Problems Due to Lack of Diversity of AI Teams

More Diverse Teams Yield Better Outcomes

Current Level of Diversity in Tech

Diversity in AI

What Has Been Done

Attrition in Tech

Current Study and Methodology

Recruitment

Participants

Measure

Procedure

Analysis

Results

Attrition

Culture

Efforts to Improve Inclusivity

Summary and the Path Forward

Acknowledgements

Appendices

Appendix 1: Recruitment Document

Appendix 2: Privacy Document

Appendix 3: Research Protocol

Appendix 4: Important Terms

Sources Cited

  1. Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
  2. Zhao, D., Wang, A., & Russakovsky, O. (2021). Understanding and Evaluating Racial Biases in Image Captioning. arXiv preprint arXiv:2106.08503.
  3. Feldstein, S. (2021). The Global Expansion of AI Surveillance. Carnegie Endowment for International Peace. Retrieved 17 September 2019, from https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.
  4. Firth, N. (2021). Apple Card is being investigated over claims it gives women lower credit limits. MIT Technology Review. Retrieved 23 November 2021, from https://www.technologyreview.com/2019/11/11/131983/apple-card-is-being-investigated-over-claims-it-gives-women-lower-credit-limits/.
  5. Howard, A., & Isbell, C. (2021). Diversity in AI: The Invisible Men and Women. MIT Sloan Management Review. Retrieved 21 September 2020, from https://sloanreview.mit.edu/article/diversity-in-ai-the-invisible-men-and-women/.
  6. AI Now. (2019). Discriminating Systems: Gender, Race, and Power in AI (Ebook). Retrieved 23 November 2021.
  7. Swauger, S. (2021). Opinion | What's worse than remote school? Remote test-taking with AI proctors. NBC News. Retrieved 7 November 2020, from https://www.nbcnews.com/think/opinion/remote-testing-monitored-ai-failing-students-forced-undergo-it-ncna1246769
  8. Belani, G. (2021). AI Paving the Way for Remote Work | IEEE Computer Society. Computer.org. Retrieved 26 July 2021, from https://www.computer.org/publications/tech-news/trends/remote-working-easier-with-ai
  9. Scott, A., Kapor Klein, F., and Onovakpuri, U. (2017). Tech Leavers Study (Ebook). Retrieved 24 November 2021, from https://www.kaporcenter.org/wp-content/uploads/2017/08/TechLeavers2017.pdf
  10. Women in the Workplace 2021. (2021). Retrieved 23 November 2021, from https://www.mckinsey.com/featured-insights/diversity-and-inclusion/women-in-the-workplace
  11. Silicon Valley Bank. (2021). 2020 Global Startup Outlook: Key insights from the Silicon Valley Bank startup outlook survey (Ebook). Retrieved 23 November 2021, from https://www.svb.com/globalassets/library/uploadedfiles/content/trends_and_insights/reports/startup_outlook_report/suo_global_report_2020-final.pdf
  12. Firth, N. (2021). Apple Card is being investigated over claims it gives women lower credit limits. MIT Technology Review. Retrieved 23 November 2021, from https://www.technologyreview.com/2019/11/11/131983/apple-card-is-being-investigated-over-claims-it-gives-women-lower-credit-limits/.
  13. Tomasev, N., McKee, K.R., Kay, J., & Mohamed, S. (2021). Fairness for Unobserved Characteristics: Insights from technological impacts on queer communities. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), Retrieved October 1, 2021 from https://doi.org/10.1145/3461702.3462540
  14. Martinez, E., & Kirchner, L. (2021). The secret bias hidden in mortgage-approval algorithms | AP News. AP News. Retrieved 24 November 2021, from https://apnews.com/article/lifestyle-technology-business-race-and-ethnicity-mortgages-2d3d40d5751f933a88c1e17063657586
  15. Turner Lee, N., Resnick, P., & Barton, G. (2021). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. Retrieved 24 November 2021, from https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.
  16. Rock, D., & Grant, H. (2016). Why diverse teams are smarter. Harvard Business Review, 4(4), 2-5.
  17. Wang, J., Cheng, G. H. L., Chen, T., & Leung, K. (2019). Team creativity/innovation in culturally diverse teams: A meta‐analysis. Journal of Organizational Behavior, 40(6), 693-708.
  18. Lorenzo, R., Voigt, N., Tsusaka, M., Krentz, M., & Abouzahr, K. (2018). How Diverse Leadership Teams Boost Innovation. BCG Global. Retrieved 24 November 2021, from https://www.bcg.com/publications/2018/how-diverse-leadership-teams-boost-innovation
  19. Hoobler, J. M., Masterson, C. R., Nkomo, S. M., & Michel, E. J. (2018). The business case for women leaders: Meta-analysis, research critique, and path forward. Journal of Management, 44(6), 2473-2499.
  20. Chakravorti, B. (2020). To Increase Diversity, U.S. Tech Companies Need to Follow the Talent. Harvard Business Review. Retrieved 24 November 2021, from https://hbr.org/2020/12/to-increase-diversity-u-s-tech-companies-need-to-follow-the-talent.
  21. Accenture. (2018). Getting to Equal 2018: The Disability Inclusion Advantage. Retrieved from https://www.accenture.com/_acnmedia/pdf-89/accenture-disability-inclusion-research-report.pdf
  22. Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., ... u0026amp; West, S. M. (2019). Disability, bias, and AI. AI Now Institute.
  23. Heater, B. (2020). Tech companies respond to George Floyd’s death, ensuing protests and systemic racism. Techcrunch.com. Retrieved 24 November 2021, from https://techcrunch.com/2020/06/01/tech-co-protests/.
  24. Google (2021). 2021 Diversity Annual Report. Retrieved 24 November 2021, from https://static.googleusercontent.com/media/diversity.google/en//annual-report/static/pdfs/google_2021_diversity_annual_report.pdf?cachebust=2e13d07.
  25. Facebook. (2021). Facebook Diversity Update: Increasing Representation in Our Workforce and Supporting Minority-Owned Businesses | Meta. Meta. Retrieved 24 November 2021, from https://about.fb.com/news/2021/07/facebook-diversity-report-2021/.
  26. Amazon Staff. (2020). Our workforce data. US About Amazon. Retrieved 24 November 2021, from https://www.aboutamazon.com/news/workplace/our-workforce-data
  27. Adobe. (2021). Adobe Diversity By the Numbers. Adobe. Retrieved 24 November 2021, from https://www.adobe.com/diversity/data.html
  28. National Center for Women in Tech. (2020). NCWIT Scorecard: The Status of Women in Computing (2020 Update). Retrieved https://ncwit.org/resource/scorecard/
  29. Center for American Progress (2012). The State of diversity in Today’s workforce. Retrieved from https://www.americanprogress.org/article/the-state-of-diversity-in-todays-workforce/
  30. Gillenwater, S. (2020). Meet the CIOs of the Fortune 500 — 2021 edition. Boardroom Insiders. Retrieved from https://www.boardroominsiders.com/blog/meet-the-cios-of-the-fortune-500-2021-edition
  31. Stack Overflow. (2020). 2020 Developer Survey. Retrieved from https://insights.stackoverflow.com/survey/2020#developer-profile-disability-status-mental-health-and-differences
  32. Stanford HAI. (2021). The AI Index Report: Measuring Trends in Artificial intelligence (Ebook). Retrieved 24 November 2021, from https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-6.pdf.
  33. Chi, N., Lurie, E., u0026amp; Mulligan, D. K. (2021). Reconfiguring Diversity and Inclusion for AI Ethics. arXiv preprint arXiv:2105.02407.
  34. Selyukh, A. (2016). Why Some Diversity Thinkers Aren't Buying The Tech Industry's Excuses. NPR. Retrieved 24 November 2021, from https://www.npr.org/sections/alltechconsidered/2016/07/19/486511816/why-some-diversity-thinkers-arent-buying-the-tech-industrys-excuses.
  35. National Association for Educational Progress. (2020). NAEP Report Card: Mathematics. Retrieved from https://www.nationsreportcard.gov/mathematics/nation/achievement/?grade=4
  36. Ladner, R. (2021). Expanding the pipeline: The status of persons with disabilities in the Computer Science Pipeline. Retrieved February 1, 2022, from https://cra.org/cra-wp/expanding-the-pipeline-the-status-of-persons-with-disabilities-in-the-computer-science-pipeline/
  37. Center for Evaluating the Research Pipeline (2021). “Data Buddies Survey 2019 Annual Report”. Computing Research Association, Washington, D.C.
  38. Code.org. (2021). Code.org's Approach to Diversity u0026amp; Equity in Computer Science. Code.org. Retrieved 24 November 2021, from https://code.org/diversity
  39. Zweben, S., u0026amp; Bizot, B. (2021). 2020 Taulbee Survey: Bachelor’s and Doctoral Degree Production Growth Continues but New Student Enrollment Shows Declines (Ebook). Computing Research Association. Retrieved 24 November 2021, from https://cra.org/wp-content/uploads/2021/05/2020-CRA-Taulbee-Survey.pdf
  40. Computing Research Association (2017). Generation CS: Computer Science Undergraduate Enrollments Surge Since 2006
  41. The Higher Education Statistics Agency (2021). Higher Education Student Statistics. Retrieved from https://www.hesa.ac.uk/news/16-01-2020/sb255-higher-education-student-statistics/subjects
  42. BCS. (2014). Women in IT Survey (Ebook). BCS: The Chartered Institute for IT. Retrieved 24 November 2021, from https://www.bcs.org/media/4446/women-it-survey.pdf
  43. Inclusive Boards. (2018). Inclusive Tech Alliance Report 2018 (Ebook). Retrieved 24 November 2021, from https://www.inclusivetechalliance.co.uk/wp-content/uploads/2019/07/Inclusive-Tech-Alliance-Report.pdf.
  44. Atomico. (2020). The State of European Tech 2020. 2020.stateofeuropeantech.com. Retrieved 24 November 2021, from https://2020.stateofeuropeantech.com/chapter/diversity-inclusion/article/diversity-inclusion/.
  45. Chung-Yan, G. A. (2010). The nonlinear effects of job complexity and autonomy on job satisfaction, turnover, and psychological well-being. Journal of occupational health psychology, 15(3), 237.
  46. McKnight, D. H., Phillips, B., u0026amp; Hardgrave, B. C. (2009). Which reduces IT turnover intention the most: Workplace characteristics or job characteristics?. Information u0026amp; Management, 46(3), 167-174.
  47. Vaamonde, J. D., Omar, A., u0026amp; Salessi, S. (2018). From organizational justice perceptions to turnover intentions: The mediating effects of burnout and job satisfaction. Europe's journal of psychology, 14(3), 554.
  48. Instructure (2019). How to get today's employees to stay and engage? Develop their careers. PR Newswire. Retrieved from https://www.prnewswire.com/news-releases/how-to-get-todays-employees-to-stay-and-engage-develop-their-careers-300860067.html
  49. McCarty, E. (2021). Integral and The Harris Poll Find Employees are giving Employers a Performance Review - Integral. Integral. Retrieved 24 November 2021, from https://www.teamintegral.com/2021/news-release-integral-employee-activation-index/
  50. McCarty, E. (2021). Integral and The Harris Poll Find Employees are giving Employers a Performance Review - Integral. Integral. Retrieved 24 November 2021, from https://www.teamintegral.com/2021/news-release-integral-employee-activation-index/
  51. Bureau of Labor Statistics. (2021). News Release - The Employment Situation - October 2021 (Ebook). Retrieved 24 November 2021, from https://www.bls.gov/news.release/pdf/empsit.pdf
  52. Scott, A., Kapor Klein, F., u0026amp; Onovakpuri, U. (2017). Tech Leavers Study (Ebook). Retrieved 24 November 2021, from https://www.kaporcenter.org/wp-content/uploads/2017/08/TechLeavers2017.pdf.
  53. Young, E., Wajcman, J. and Sprejer, L. (2021). Where are the Women? Mapping the Gender Job Gap in AI. Policy Briefing: Full Report. The Alan Turing Institute.
  54. Metz, C. (2021). A second Google A.I. researcher says the company fired her.. Nytimes.com. Retrieved 24 November 2021, from https://www.nytimes.com/2021/02/19/technology/google-ethical-artificial-intelligence-team.html
  55. Myrow, R. (2021). Pinterest Sounds A More Contrite Tone After Black Former Employees Speak Out. Npr.org. Retrieved 24 November 2021, from https://www.npr.org/2020/06/23/881624553/pinterest-sounds-a-more-contrite-tone-after-black-former-employees-speak-out
  56. Scheer, S. (2021). The Tech Sector’s Big Disability Inclusion Problem. ERE. Retrieved from https://www.ere.net/the-tech-sectors-big-disability-inclusion-problem/
  57. Robinson, O. C. (2014). Sampling in interview-based qualitative research: A theoretical and practical guide. Qualitative research in psychology, 11(1), 25-41.
  58. Yancey, A. K., Ortega, A. N., u0026amp; Kumanyika, S. K. (2006). Effective recruitment and retention of minority research participants. Annu. Rev. Public Health, 27, 1-28.
  59. Hill, C. E., Knox, S., Thompson, B. J., Williams, E. N., Hess, S. A., u0026amp; Ladany, N. (2005). Consensual qualitative research: An update. Journal of counseling psychology, 52(2), 196.
  60. Gunaratnam, Y. (2003). Researching'race'and ethnicity: Methods, knowledge and power. Sage.
  61. Race and Ethnicity. American Sociological Association. (2022). Retrieved 29 January 2022, archived at https://web.archive.org/web/20190821170406/https://www.asanet.org/topics/race-and-ethnicity
  62. University of Minnesota Libraries (2022). 10.2 The Meaning of Race and Ethnicity. Open.lib.umn.edu. Retrieved 29 January 2022, from https://open.lib.umn.edu/sociology/chapter/10-2-the-meaning-of-race-and-ethnicity/.
  63. Sue, Derald Wing, Christina M. Capodilupo, Gina C. Torino, Jennifer M. Bucceri, Aisha Holder, Kevin L. Nadal, and Marta Esquilin.
  64. https://adata.org/glossary-terms#D

Fairer Algorithmic Decision-Making and Its Consequences: Interrogating the Risks and Benefits of Demographic Data Collection, Use, and Non-Use

PAI Staff

Introduction and Background

Introduction

Algorithmic decision-making has been widely accepted as a novel approach to overcoming the purported cognitive and subjective limitations of human decision makers by providing “objective” data-driven recommendations. Yet, as organizations adopt algorithmic decision-making systems (ADMS), countless examples of algorithmic discrimination continue to emerge. Harmful biases have been found in ADMS used in healthcare, hiring, criminal justice, and education, prompting growing social concern about the impact these systems are having on the wellbeing and livelihoods of individuals and groups across society. In response, algorithmic fairness strategies attempt to understand how ADMS treat certain individuals and groups, often with the explicit purpose of detecting and mitigating harmful biases.

Many current algorithmic fairness techniques require access to data on a “sensitive attribute” or “protected category” (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. These demographic-based algorithmic fairness techniques assume that discrimination and social inequality can be overcome with clever algorithms and collection of the requisite data, removing broader questions of governance and politics from the equation. This paper seeks to challenge that assumption, arguing instead that collecting more data in support of fairness is not always the answer and can actually exacerbate or introduce harm for marginalized individuals and groups. We believe more discussion is needed in the machine learning community around the consequences of “fairer” algorithmic decision-making. This involves acknowledging the value assumptions and trade-offs associated with the use and non-use of demographic data in algorithmic systems. To advance this discussion, this white paper provides a preliminary perspective on these trade-offs, derived from workshops and conversations with experts in industry, academia, government, and advocacy organizations, as well as from literature across relevant domains. In doing so, we hope that readers will better understand the affordances and limitations of using demographic data to detect and mitigate discrimination in institutional decision-making more broadly.
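To make concrete why these techniques presuppose demographic data, the sketch below (a hypothetical illustration, not drawn from any system discussed in this paper) computes two common group-level comparisons, a selection-rate gap and a true-positive-rate gap; neither can be computed unless a sensitive attribute label is available for every individual.

```python
import numpy as np

def selection_rate_gap(y_pred, group):
    """Largest difference in selection (positive-prediction) rates across groups.

    A demographic-parity-style comparison: it requires the sensitive
    attribute `group` to be known for every individual.
    """
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

def tpr_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups
    (an equal-opportunity-style comparison)."""
    tprs = {}
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs[g] = y_pred[positives].mean()
    return max(tprs.values()) - min(tprs.values())

# Toy data: model predictions, observed outcomes, and a self-reported attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(selection_rate_gap(y_pred, group))  # 0.0  -> equal selection rates
print(tpr_gap(y_true, y_pred, group))     # ~0.33 -> unequal true-positive rates
```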

Background

Demographic-based algorithmic fairness techniques presuppose the availability of data on sensitive attributes or protected categories. However, previous research has highlighted that data on demographic categories, such as race and sexuality, are often unavailable due to a range of organizational challenges, legal barriers, and practical concerns (Andrus et al., 2021). Some privacy laws, such as the EU’s GDPR, not only require data subjects to provide meaningful consent when their data is collected, but also prohibit the collection of sensitive data such as race, religion, and sexuality. Some corporate privacy policies and standards, such as Privacy by Design, call for organizations to be intentional with their data collection practices, collecting only data they require and can specify a use for. Given the uncertainty around whether it is acceptable to ask users and customers for their sensitive demographic information, most legal and policy teams urge their corporations to err on the side of caution and not collect these types of data unless legally required to do so. As a result, concerns over privacy often take precedence over ensuring product fairness, since the trade-offs between mitigating bias and ensuring individual or group privacy are unclear (Andrus et al., 2021).

In cases where sensitive demographic data can be collected, organizations must navigate a number of practical challenges throughout its procurement. For many organizations, sensitive demographic data is collected through self-reporting mechanisms. However, self-reported data is often incomplete, unreliable, and unrepresentative, due in part to a lack of incentives for individuals to provide accurate and full information (Andrus et al., 2021). In some cases, practitioners instead infer individuals’ protected categories from proxy information, a method that is largely inaccurate. Organizations also face difficulty capturing unobserved characteristics, such as disability, sexuality, and religion, as these categories are frequently missing and often unmeasurable (Tomasev et al., 2021). Overall, deciding how to classify and categorize demographic data is an ongoing challenge, as demographic categories continue to shift and change over time and between contexts. And once demographic data is collected, antidiscrimination law and policies largely inhibit organizations from using it, since knowledge of sensitive categories opens the door to legal liability if discrimination is uncovered without a plan to successfully mitigate it (Andrus et al., 2021).
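As a purely hypothetical sketch of the proxy approach and its failure modes (the lookup table and records below are invented for illustration), inferring a sensitive attribute from a proxy such as a first name and comparing the result against self-reports shows how easily such inferences go wrong, particularly for the people existing categories fit least well.

```python
# Hypothetical illustration of proxy-based inference of a sensitive attribute.
# The lookup table and records are invented; real-world proxy methods (e.g.,
# name- or geography-based inference) are more elaborate but share this
# failure mode: the inference is wrong or silent for many people.
NAME_TO_INFERRED_GENDER = {"maria": "woman", "james": "man", "wei": "woman"}

records = [
    {"name": "Maria", "self_reported": "woman"},
    {"name": "James", "self_reported": "man"},
    {"name": "Wei", "self_reported": "man"},         # proxy guesses wrong
    {"name": "Alex", "self_reported": "nonbinary"},  # proxy has no answer
]

errors = 0
for record in records:
    inferred = NAME_TO_INFERRED_GENDER.get(record["name"].lower(), "unknown")
    if inferred != record["self_reported"]:
        errors += 1
    print(record["name"], "inferred:", inferred, "| self-reported:", record["self_reported"])

print(f"Misclassified or unclassifiable: {errors} of {len(records)}")
```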

In the face of these barriers, corporations looking to apply demographic-based algorithmic fairness techniques have called for guidance on how to responsibly collect and use demographic data. However, imposing statistical definitions of fairness on algorithmic systems without accounting for the social, economic, and political systems in which they are embedded can fail to benefit marginalized groups and undermine fairness efforts (Bakalar et al., 2021). Developing such guidance therefore requires a deeper understanding of the risks and trade-offs inherent to the use and non-use of demographic data. Efforts to detect and mitigate harms must account for the wider contexts and power structures in which algorithmic systems, and the data they draw on, are embedded.

Finally, though this work is motivated by the documented unfairness of ADMS, it is critical to recognize that bias and discrimination are not the only possible harms stemming directly from ADMS. As recent papers and reports have forcefully argued, focusing on debiasing datasets and algorithms is (1) often misguided, because proposed debiasing methods are only relevant for a subset of the kinds of bias ADMS introduce or reinforce, and (2) likely to draw attention away from other, possibly more salient harms (Balayn & Gürses, 2021). In the first case, harms from tools such as recommendation, content moderation, and computer vision systems might be characterized as the result of various forms of bias, but resolving bias in those systems generally involves adding more context to better understand differences between groups, not just trying to treat groups more similarly. In the second case, many ADMS are clearly susceptible to bias, yet the greater source of harm could arguably be the deployment of the system in the first place. Pre-trial detention risk scores provide one such example. Using statistical correlations to determine whether someone should be held without bail (in other words, potentially punishing individuals for attributes outside their control and for past decisions unrelated to the current charge) is itself a significant deviation from legal standards and norms, yet most of the debate has focused on how biased the predictions are. Attempting to collect demographic data in such cases will likely do more harm than good, as it will draw attention away from harms inherent to the system and toward seemingly resolvable issues around bias.

Fairer Algorithmic Decision-Making and Its Consequences: Interrogating the Risks and Benefits of Demographic Data Collection, Use, and Non-Use

Introduction and Background

Introduction

Background

Social Risks of Non-Use

Hidden Discrimination

“Colorblind” Decision-Making

Invisibility to Institutions of Importance

Social Risks of Use

Risks to Individuals

Encroachments on Privacy and Personal Life

Individual Misrepresentation

Data Misuse and Use Beyond Informed Consent

Risks to Communities

Expanding Surveillance Infrastructure in the Pursuit of Fairness

Misrepresentation and Reinforcing Oppressive or Overly Prescriptive Categories

Private Control Over Scoping Bias and Discrimination

Conclusion and Acknowledgements

Conclusion

Acknowledgements

Sources Cited

  1. Andrus, M., Spitzer, E., Brown, J., & Xiang, A. (2021). “What We Can’t Measure, We Can’t Understand”: Challenges to Demographic Data Procurement in the Pursuit of Fairness. ArXiv:2011.02282 (Cs). http://arxiv.org/abs/2011.02282
  2. Andrus et al., 2021
  3. Andrus et al., 2021
  4. Tomasev, N., McKee, K. R., Kay, J., & Mohamed, S. (2021). Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. ArXiv:2102.04257 (Cs). https://doi.org/10.1145/3461702.3462540
  5. Andrus et al., 2021
  6. Bakalar, C., Barreto, R., Bogen, M., Corbett-Davies, S., Hall, M., Kloumann, I., Lam, M., Candela, J. Q., Raghavan, M., Simons, J., Tannen, J., Tong, E., Vredenburgh, K., & Zhao, J. (2021). Fairness On The Ground: Applying Algorithmic Fairness Approaches To Production Systems. 12.
  7. Balayn, A., & Gürses, S. (2021). Beyond Debiasing. European Digital Rights. https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf
  8. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., … Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356
  9. Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers in Big Data, 2, 13. https://doi.org/10.3389/fdata.2019.00013
  10. Rimfeld, K., & Malanchini, M. (2020, August 21). The A-Level and GCSE scandal shows teachers should be trusted over exams results. Inews.Co.Uk. https://inews.co.uk/opinion/a-level-gcse-results-trust-teachers-exams-592499
  11. Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated Hate Speech Detection and the Problem of Offensive Language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1), 512–515.
  12. Davidson, T., Bhattacharya, D., & Weber, I. (2019). Racial Bias in Hate Speech and Abusive Language Detection Datasets. ArXiv:1905.12516 (Cs). http://arxiv.org/abs/1905.12516
  13. Bogen, M., Rieke, A., & Ahmed, S. (2020). Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 492–500. https://doi.org/10.1145/3351095.3372877
  14. Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (2021, January 21). The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
  15. Executive Order on Diversity, Equity, Inclusion, and Accessibility in the Federal Workforce. (2021, June 25). The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/06/25/executive-order-on-diversity-equity-inclusion-and-accessibility-in-the-federal-workforce/
  16. Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual Fairness. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper/2017/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
  17. Harned, Z., & Wallach, H. (2019). Stretching human laws to apply to machines: The dangers of a ’Colorblind’ Computer. Florida State University Law Review, Forthcoming.
  18. Washington, A. L. (2018). How to Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate. Colorado Technology Law Journal, 17, 131.
  19. Rodriguez, L. (2020). All Data Is Not Credit Data: Closing the Gap Between the Fair Housing Act and Algorithmic Decisionmaking in the Lending Industry. Columbia Law Review, 120(7), 1843–1884.
  20. Hu, L. (2021, February 22). Law, Liberation, and Causal Inference. LPE Project. https://lpeproject.org/blog/law-liberation-and-causal-inference/
  21. Bonilla-Silva, E. (2010). Racism Without Racists: Color-blind Racism and the Persistence of Racial Inequality in the United States. Rowman & Littlefield.
  22. Plaut, V. C., Thomas, K. M., Hurd, K., & Romano, C. A. (2018). Do Color Blindness and Multiculturalism Remedy or Foster Discrimination and Racism? Current Directions in Psychological Science, 27(3), 200–206. https://doi.org/10.1177/0963721418766068
  23. Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press
  24. Banco, E., & Tahir, D. (2021, March 9). CDC under scrutiny after struggling to report Covid race, ethnicity data. POLITICO. https://www.politico.com/news/2021/03/09/hhs-cdc-covid-race-data-474554
  25. Banco, E., & Tahir, D. (2021, March 9). CDC under scrutiny after struggling to report Covid race, ethnicity data. POLITICO. https://www.politico.com/news/2021/03/09/hhs-cdc-covid-race-data-474554
  26. Elliott, M. N., Morrison, P. A., Fremont, A., McCaffrey, D. F., Pantoja, P., & Lurie, N. (2009). Using the Census Bureau’s surname list to improve estimates of race/ethnicity and associated disparities. Health Services and Outcomes Research Methodology, 9(2), 69.
  27. Shimkhada, R., Scheitler, A. J., & Ponce, N. A. (2021). Capturing Racial/Ethnic Diversity in Population-Based Surveys: Data Disaggregation of Health Data for Asian American, Native Hawaiian, and Pacific Islanders (AANHPIs). Population Research and Policy Review, 40(1), 81–102. https://doi.org/10.1007/s11113-020-09634-3
  28. Poon, O. A., Dizon, J. P. M., & Squire, D. (2017). Count Me In!: Ethnic Data Disaggregation Advocacy, Racial Mattering, and Lessons for Racial Justice Coalitions. JCSCORE, 3(1), 91–124. https://doi.org/10.15763/issn.2642-2387.2017.3.1.91-124
  29. Fosch-Villaronga, E., Poulsen, A., Søraa, R. A., & Custers, B. H. M. (2021). A little bird told me your gender: Gender inferences in social media. Information Processing & Management, 58(3), 102541. https://doi.org/10.1016/j.ipm.2021.102541
  30. Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. In Dark Matters. Duke University Press. https://doi.org/10.1515/9780822375302
  31. Eubanks, 2017
  32. Farrand, T., Mireshghallah, F., Singh, S., & Trask, A. (2020). Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy. Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 15–19. https://doi.org/10.1145/3411501.3419419
  33. Jagielski, M., Kearns, M., Mao, J., Oprea, A., Roth, A., Sharifi -Malvajerdi, S., & Ullman, J. (2019). Differentially Private Fair Learning. Proceedings of the 36th International Conference on Machine Learning, 3000–3008. https://bit.ly/3rmhET0
  34. Kuppam, S., Mckenna, R., Pujol, D., Hay, M., Machanavajjhala, A., & Miklau, G. (2020). Fair Decision Making using Privacy-Protected Data. ArXiv:1905.12744 (Cs). http://arxiv.org/abs/1905.12744
  35. Quillian, L., Pager, D., Hexel, O., & Midtbøen, A. H. (2017). Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proceedings of the National Academy of Sciences, 114(41), 10870–10875. https://doi.org/10.1073/pnas.1706255114
  36. Quillian, L., Lee, J. J., & Oliver, M. (2020). Evidence from Field Experiments in Hiring Shows Substantial Additional Racial Discrimination after the Callback. Social Forces, 99(2), 732–759. https://doi.org/10.1093/sf/soaa026
  37. Cabañas, J. G., Cuevas, Á., Arrate, A., & Cuevas, R. (2021). Does Facebook use sensitive data for advertising purposes? Communications of the ACM, 64(1), 62–69. https://doi.org/10.1145/3426361
  38. Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112. https://doi.org/10.1515/popets-2015-0007
  39. Hupperich, T., Tatang, D., Wilkop, N., & Holz, T. (2018). An Empirical Study on Online Price Differentiation. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy, 76–83. https://doi.org/10.1145/3176258.3176338
  40. Mikians, J., Gyarmati, L., Erramilli, V., & Laoutaris, N. (2013). Crowd-assisted search for price discrimination in e-commerce: First results. Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies, 1–6. https://doi.org/10.1145/2535372.2535415
  41. Cabañas et al., 2021
  42. Leetaru, K. (2018, July 20). Facebook As The Ultimate Government Surveillance Tool? Forbes. https://www.forbes.com/sites/kalevleetaru/2018/07/20/facebook-as-the-ultimate-government-surveillance-tool/
  43. Rozenshtein, A. Z. (2018). Surveillance Intermediaries (SSRN Scholarly Paper ID 2935321). Social Science Research Network. https://papers.ssrn.com/abstract=2935321
  44. Rocher, L., Hendrickx, J. M., & de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1), 3069. https://doi.org/10.1038/s41467-019-10933-3
  45. Cummings, R., Gupta, V., Kimpara, D., & Morgenstern, J. (2019). On the Compatibility of Privacy and Fairness. Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization - UMAP’19 Adjunct, 309–315. https://doi.org/10.1145/3314183.3323847
  46. Kuppam et al., 2020
  47. Mavriki, P., & Karyda, M. (2019). Automated data-driven profiling: Threats for group privacy. Information & Computer Security, 28(2), 183–197. https://doi.org/10.1108/ICS-04-2019-0048
  48. Barocas, S., & Levy, K. (2019). Privacy Dependencies (SSRN Scholarly Paper ID 3447384). Social Science Research Network. https://papers.ssrn.com/abstract=3447384
  49. Bivens, R. (2017). The gender binary will not be deprogrammed: Ten years of coding gender on Facebook. New Media & Society, 19(6), 880–898. https://doi.org/10.1177/1461444815621527
  50. Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology, 30(4), 475–494. https://doi.org/10.1007/s13347-017-0253-7
  51. Taylor, 2021
  52. Draper and Turow, 2019
  53. Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020). Towards a Critical Race Methodology in Algorithmic Fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 501–512. https://doi.org/10.1145/3351095.3372826
  54. Keyes, O., Hitzig, Z., & Blell, M. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1–2), 158–175. https://doi.org/10.1080/03080188.2020.1840224
  55. Scheuerman, M. K., Wade, K., Lustig, C., & Brubaker, J. R. (2020). How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), 1–35. https://doi.org/10.1145/3392866
  56. Roth, W. D. (2016). The multiple dimensions of race. Ethnic and Racial Studies, 39(8), 1310–1338. https://doi.org/10.1080/01419870.2016.1140793
  57. Hanna et al., 2020
  58. Keyes, O. (2018). The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 88:1-88:22. https://doi.org/10.1145/3274357
  59. Keyes, O. (2019, April 8). Counting the Countless. Real Life. https://reallifemag.com/counting-the-countless/
  60. Keyes, O., Hitzig, Z., & Blell, M. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1–2), 158–175. https://doi.org/10.1080/03080188.2020.1840224
  61. Scheuerman et al., 2020
  62. Scheuerman et al., 2020
  63. Stark, L., & Hutson, J. (2021). Physiognomic Artificial Intelligence (SSRN Scholarly Paper ID 3927300). Social Science Research Network. https://doi.org/10.2139/ssrn.3927300
  64. U.S. Department of Justice. (2019). The First Step Act of 2018: Risk and Needs Assessment System. Office of the Attorney General.
  65. Partnership on AI. (2020). Algorithmic Risk Assessment and COVID-19: Why PATTERN Should Not Be Used. Partnership on AI. http://partnershiponai.org/wp-content/uploads/2021/07/Why-PATTERN-Should-Not-Be-Used.pdf
  66. Hill, K. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
  67. Porter, J. (2020, February 6). Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. The Verge. https://www.theverge.com/2020/2/6/21126063/facebook-clearview-ai-image-scraping-facial-recognition-database-terms-of-service-twitter-youtube
  68. Regulation (EU) 2016/679 (General Data Protection Regulation), (2016) (testimony of European Parliament and Council of European Union). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN
  69. Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, 7(1), 2053951720935615. https://doi.org/10.1177/2053951720935615
  70. Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, 7(1), 2053951720935615. https://doi.org/10.1177/2053951720935615
  71. Obar, 2020
  72. Angwin, J., & Parris, T. (2016, October 28). Facebook Lets Advertisers Exclude Users by Race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
  73. Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
  74. Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. In Dark Matters. Duke University Press. https://doi.org/10.1515/9780822375302
  75. Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  76. Hoffmann, 2020
  77. Rainie, S. C., Kukutai, T., Walter, M., Figueroa-Rodríguez, O. L., Walker, J., & Axelsson, P. (2019). Indigenous data sovereignty.
  78. Ricaurte, P. (2019). Data Epistemologies, Coloniality of Power, and Resistance. Television & New Media, 16.
  79. Walter, M. (2020, October 7). Delivering Indigenous Data Sovereignty. https://www.youtube.com/watch?v=NCsCZJ8ugPA
  80. See, for example: Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press.
  81. See, for example: Dembroff, R. (2018). Real Talk on the Metaphysics of Gender. Philosophical Topics, 46(2), 21–50. https://doi.org/10.5840/philtopics201846212
  82. See, for example: Hacking, I. (1995). The looping effects of human kinds. In Causal cognition: A multidisciplinary debate (pp. 351–394). Clarendon Press/Oxford University Press.
  83. See, for example: Hanna et al., 2020
  84. See, for example: Hu, L., & Kohler-Hausmann, I. (2020). What’s Sex Got to Do With Fair Machine Learning? 11.
  85. See, for example: Keyes (2019)
  86. See, for example: Zuberi, T., & Bonilla-Silva, E. (2008). White Logic, White Methods: Racism and Methodology. Rowman & Littlefield Publishers.
  87. Hanna et al., 2020
  88. Andrus et al., 2021
  89. Bivens, 2017
  90. Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–13. https://doi.org/10.1145/3173574.3173582
  91. Keyes, 2018
  92. Keyes, 2021
  93. Fu, S., & King, K. (2021). Data disaggregation and its discontents: Discourses of civil rights, efficiency and ethnic registry. Discourse: Studies in the Cultural Politics of Education, 42(2), 199–214. https://doi.org/10.1080/01596306.2019.1602507
  94. Poon et al., 2017
  95. Hanna et al., 2020
  96. Saperstein, A. (2012). Capturing complexity in the United States: Which aspects of race matter and when? Ethnic and Racial Studies, 35(8), 1484–1502. https://doi.org/10.1080/01419870.2011.607504
  97. Keyes, 2019
  98. Ruberg, B., & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1), 2053951720933286. https://doi.org/10.1177/2053951720933286
  99. Tomasev et al., 2021
  100. Pauker et al., 2018
  101. Ruberg & Ruelos, 2020
  102. Braun, L., Fausto-Sterling, A., Fullwiley, D., Hammonds, E. M., Nelson, A., Quivers, W., Reverby, S. M., & Shields, A. E. (2007). Racial Categories in Medical Practice: How Useful Are They? PLOS Medicine, 4(9), e271. https://doi.org/10.1371/journal.pmed.0040271
  103. Hanna et al., 2020
  104. Morning, A. (2014). Does Genomics Challenge the Social Construction of Race?: Sociological Theory. https://doi.org/10.1177/0735275114550881
  105. Barabas, C. (2019). Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3377921
  106. Barabas, 2019
  107. Hacking, 1995
  108. Hacking, 1995
  109. Dembroff, 2018
  110. Andrus et al., 2021
  111. Holstein, K., Vaughan, J. W., Daumé III, H., Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19, 1–16. https://doi.org/10.1145/3290605.3300830
  112. Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices. ArXiv:2006.12358 (Cs). https://doi.org/10.1145/3449081
  113. Wachter, S., Mittelstadt, B., & Russell, C. (2021). Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. Computer Law & Security Review, 41. https://doi.org/10.2139/ssrn.3547922
  114. Xenidis, R. (2021). Tuning EU Equality Law to Algorithmic Discrimination: Three Pathways to Resilience. Maastricht Journal of European and Comparative Law, 27, 1023263X2098217. https://doi.org/10.1177/1023263X20982173
  115. Xiang, A. (2021). Reconciling legal and technical approaches to algorithmic bias. Tennessee Law Review, 88(3).
  116. Balayn & Gürses, 2021
  117. Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic Fairness from a Non-ideal Perspective. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 57–63. https://doi.org/10.1145/3375627.3375828
  118. Green & Viljoen, 2020
  119. Green, B., & Viljoen, S. (2020). Algorithmic realism: Expanding the boundaries of algorithmic thought. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19–31. https://doi.org/10.1145/3351095.3372840
  120. Gitelman, L. (2013). Raw Data Is an Oxymoron. MIT Press.
  121. Barabas, C., Doyle, C., Rubinovitz, J., & Dinakar, K. (2020). Studying Up: Reorienting the study of algorithmic fairness around issues of power. 10.
  122. Crooks, R., & Currie, M. (2021). Numbers will not save us: Agonistic data practices. The Information Society, 0(0), 1–19. https://doi.org/10.1080/01972243.2021.1920081
  123. Muhammad, K. G. (2019). The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America, With a New Preface. Harvard University Press.
  124. Ochigame, R., Barabas, C., Dinakar, K., Virza, M., & Ito, J. (2018). Beyond Legitimation: Rethinking Fairness, Interpretability, and Accuracy in Machine Learning. International Conference on Machine Learning, 6.
  125. Ochigame et al., 2018
  126. Basu, S., Berman, R., Bloomston, A., Cambell, J., Diaz, A., Era, N., Evans, B., Palkar, S., & Wharton, S. (2020). Measuring discrepancies in Airbnb guest acceptance rates using anonymized demographic data. AirBnB. https://news.airbnb.com/wp-content/uploads/sites/4/2020/06/Project-Lighthouse-Airbnb-2020-06-12.pdf

ABOUT ML Reference Document

Last Updated

To share your ideas, suggestions, and other feedback related to this evolving document, please reach out to Sarah Villeneuve, Lead of Fairness, Transparency, Accountability & ABOUT ML. Learn more about the origins of ABOUT ML and contributors to the project here.

Section 0: How to Use This Document

This ABOUT ML Reference Document is a foundational reference resource. Future contributions of the ABOUT ML work will include a PLAYBOOK of specifications, guides, recommendations, templates, and other meaningful artifacts to support ML documentation work by individuals in any and all of the roles listed below. Use cases made up of various artifacts from the PLAYBOOK, along with other implementation instructions, will be packaged as PILOTS for PAI Partners to try out in their organizations. Feedback from these pilots will further mature the artifacts in the PLAYBOOK and will support the ABOUT ML team’s continued, rigorous, scientific investigation of relevant research questions in the ML documentation space.

Recommended Reading Plan

Based on the role a reader plays in their organization and/or the community of stakeholders they belong to, there are several different approaches for reading and using the information in this ABOUT ML Reference Document:

ML system developers/deployers are encouraged to do a deep dive exploration of Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will most benefit from further participation in the ABOUT ML effort by engaging with the community in the forthcoming online forum and by testing the efficacy and applicability of templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed based on use cases as an opportunity to implement ML documentation processes within an organization.

ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to get ideas about what concepts to include as requirements for models and data in future requests for proposals relevant to ML systems. Additionally, they could use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with the business owners and requirements writers to further elicit detailed key performance indicators and measures for success for any procured ML systems.

Users of ML system APIs and/or experienced end users of ML systems might skim the document and review all of the coral-colored Quick Guides to get a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML Systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.

Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).

External auditors could skim Appendix: Compiled List of Documentation Questions and familiarize themselves with high-level concepts as well as tactically operationalized tenets to look for in their determination of whether or not an ML system is well-documented.

Lay users of ML systems and/or members of low-income communities might skim the document and review all of the blue-colored How We Define boxes in order to get an overarching understanding of the text’s contents. These users are encouraged to continue learning ABOUT ML systems by exploring how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of this Reference Document.

Quick Guides

Example

More information about a topic. Oftentimes, this will be a high-level and less academic expression of a term or concept.

Throughout this ABOUT ML Reference Document, we will use coral callout boxes with text to further explain a concept. This is a readability enhancement tactic recommended by our Diverse Voices panel and is meant to make the content more accessible and digestible for lay users of machine learning systems.

How We Define

Example Term

We’ll use this space to give background definitions of terms and phrases and, in some cases, to call out existing work related to the ABOUT ML effort.

Throughout this ABOUT ML Reference Document, we will use the blue callout boxes with text to showcase our accepted (near-consensus) definition of a term or phrase. This is meant to give foundational background information to viewers of the document and also provides a baseline of understanding for any artifacts that may be derived from this work. Additional terms can be found in the glossary section. Future versions of this reference and/or artifacts in the forthcoming PLAYBOOK will explore audio/video offerings to support the consumption of this information by verbal/visual learners.

Contact for Support

If you have any questions or would like to learn more about this effort, please reach out to us by:

Visiting our ABOUT ML page to make contributions to the work

ABOUT ML Reference Document

Section 0: How to Use this Document

Recommended Reading Plan

Quick Guides

How We Define

Contact for Support

Section 1: Project Overview

1.1 Statement of Importance for ABOUT ML Project

1.1.0 Importance of Transparency: Why a Company Motivated by the Bottom Line Should Adopt ABOUT ML Recommendations

1.1.1 About This Document and Version Numbering

1.1.2 ABOUT ML Goals and Plan

1.1.3 ABOUT ML Project Process and Timeline Overview

1.1.4 Who Is This Project For?

1.1.4.1 Audiences for the ABOUT ML Resources

1.1.4.2 Stakeholders That Should Be Consulted While Putting Together ABOUT ML Resources

1.1.4.3 Audiences for ABOUT ML Documentation Artifacts

1.1.4.4 Whose Voices Are Currently Reflected in ABOUT ML?

1.1.4.5 Origin Story

Section 2: Literature Review (Current Recommendations on Documentation for Transparency in the ML Lifecycle)

2.1 Demand for Transparency and AI Ethics in ML Systems 

2.2 Documentation to Operationalize AI Ethics Goals

2.2.1 Documentation as a Process in the ML Lifecycle

2.2.2 Key Process Considerations for Documentation

2.3 Research Themes on Documentation for Transparency 

2.3.1 System Design and Set Up

2.3.2 System Development

2.3.3 System Deployment

Section 3: Preliminary Synthesized Documentation Suggestions

3.4.1 Suggested Documentation Sections for Datasets

3.4.1.1 Data Specification

3.4.1.1.1 Motivation

3.4.1.2 Data Curation 

3.4.1.2.1 Collection

3.4.1.2.2 Processing

3.4.1.2.3 Composition

3.4.1.2.4 Types and Sources of Judgement Calls

3.4.1.3 Data Integration

3.4.1.3.1 Use

3.4.1.3.2 Distribution

3.4.1.4 Maintenance

3.4.2 Suggested Documentation Sections for Models

3.4.2.1 Model Specifications

3.4.2.2 Model Training

3.4.2.3 Evaluation

3.4.2.4 Model Integration

3.4.2.5 Maintenance

Section 4: Current Challenges of Implementing Documentation

Section 5: Conclusions

Version 0

Version 1

Appendix A: Compiled List of Documentation Questions 

Fact Sheets (Arnold et al. 2018)

Data Sheets (Gebru et al. 2018)

Model Cards (Mitchell et al. 2018)

A “Nutrition Label” for Privacy (Kelley et al. 2009)

The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards (Holland et al. 2019)

Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science (Bender and Friedman 2018)

Appendix B: Diverse Voices Process and Artifacts

Procurement Recruitment Email

Procurement Confirmation Email 

Appendix C: Glossary

Sources Cited

  1. Holstein, K., Vaughan, J.W., Daumé, H., Dudík, M., & Wallach, H.M. (2018). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? CHI.
  2. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  3. World Wide Web Consortium Process Document (W3C) process outlined here: https://www.w3.org/2019/Process-20190301/
  4. Internet Engineering Task Force (IETF) process outlined here: https://www.ietf.org/standards/process/
  5. The Web Hypertext Application Technology Working Group (WHATWG) process outlined here: https://whatwg.org/faq#process
  6. Oever, N., Moriarty, K. The Tao of IETF: A novice's guide to the Internet Engineering Task Force. https://www.ietf.org/about/participate/tao/.
  7. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  8. Friedman, B., Kahn, Peter H., and Borning, A., (2008) Value sensitive design and information systems. In Kenneth Einar Himma and Herman T. Tavani (Eds.) The Handbook of Information and Computer Ethics. (pp. 70-100) John Wiley & Sons, Inc. http://jgustilo.pbworks.com/f/the-handbook-of-information-and-computer-ethics.pdf#page=104; Davis, J., and P. Nathan, L. (2015). Value sensitive design: applications, adaptations, and critiques. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains. (pp. 11-40) DOI: 10.1007/978-94-007-6970-0_3. https://www.researchgate.net/publication/283744306_Value_Sensitive_Design_Applications_Adaptations_and_Critiques; Borning, A. and Muller, M. (2012). Next steps for value sensitive design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). (pp 1125-1134) DOI: https://doi.org/10.1145/2207676.2208560 https://dl.acm.org/citation.cfm?id=2208560
  9. Pichai, S., (2018). AI at Google: our principles. The Keyword. https://www.blog.google/technology/ai/ai-principles/; IBM’s Principles for Trust and Transparency. IBM Policy. https://www.ibm.com/blogs/policy/trust-principles/; Microsoft AI principles. Microsoft. https://www.microsoft.com/en-us/ai/our-approach-to-ai; Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  10. Zeng, Y., Lu, E., and Huangfu, C. (2018) Linking artificial intelligence principles. CoRR https://arxiv.org/abs/1812.04814.
  11. Fjeld, J., Hilligoss, H., Achten, N., Levy Daniel, M., Kagay, S., and Feldman, J. (2018). Principled artificial intelligence - a map of ethical and rights based approaches. Berkman Center for Internet and Society. https://ai-hr.cyber.harvard.edu/primp-viz.html
  12. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  13. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  14. Ananny, M., and Kate Crawford (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society 20 (3): 973-989.
  15. Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019, January). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA (pp. 27-28). http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_188.pdf; Mittelstadt, B. (2019). AI Ethics–Too Principled to Fail? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293
  16. Greene, D., Hoffmann, A. L., & Stark, L. (2019, January). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/handle/10125/59651
  17. Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In AAAI/ACM Conf. on AI Ethics and Society (Vol. 1). https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/
  18. Algorithmic Impact Assessment (2019) Government of Canada https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html
  19. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. arXiv preprint arXiv:1903.12262. https://arxiv.org/abs/1903.12262; Responsible AI Licenses v0.1. RAIL: Responsible AI Licenses. https://www.licenses.ai/ai-licenses
  20. See Citation 5
  21. Safe Face Pledge. https://www.safefacepledge.org/; Montreal Declaration on Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/; The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. (2018). Amnesty International and Access Now. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf; Dagstuhl Declaration on the application of machine learning and artificial intelligence for social good. https://www.dagstuhl.de/fileadmin/redaktion/Programm/Seminar/19082/Declaration/Declaration.pdf
  22. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  23. Wagstaff, K. (2012). Machine learning that matters. https://arxiv.org/pdf/1206.4656.pdf; Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Early engagement and new technologies: Opening up the laboratory (pp. 55-95). Springer, Dordrecht. https://vsdesign.org/publications/pdf/non-scan-vsd-and-information-systems.pdf
  24. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  25. Safe Face Pledge. https://www.safefacepledge.org/
  26. Montreal Declaration on Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/
  27. Diverse Voices How To Guide. Tech Policy Lab, University of Washington. https://techpolicylab.uw.edu/project/diverse-voices/
  28. Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  29. Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  30. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. https://arxiv.org/abs/1803.09010; Hazard Communication Standard: Safety Data Sheets. Occupational Safety and Health Administration, US Department of Labor. https://www.osha.gov/Publications/OSHA3514.html
  31. Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. https://arxiv.org/abs/1805.03677; Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A nutrition label for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security (p. 4). ACM. http://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf
  32. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229). ACM. https://arxiv.org/abs/1810.03993
  33. Hind, M., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., Olteanu, A., & Varshney, K. R. (2018). Increasing Trust in AI Services through Supplier's Declarations of Conformity. https://arxiv.org/abs/1808.07261
  34. Veale M., Van Kleek M., & Binns R. (2018) ‘Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making’ in Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI 2018. https://arxiv.org/abs/1802.01029.
  35. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. https://arxiv.org/abs/1903.12262
  36. Cooper, D. M. (2013, April). A Licensing Approach to Regulation of Open Robotics. In Paper for presentation for We Robot: Getting down to business conference, Stanford Law School.
  37. Responsible AI Practices. Google AI. https://ai.google/education/responsible-ai-practices
  38. Everyday Ethics for Artificial Intelligence. (2019). IBM. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
  39. Federal Trade Commission. (2012). Best Practices for Common Uses of Facial Recognition Technologies (Staff Report). Federal Trade Commission, 30. https://www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf
  40. Microsoft (2018). Responsible bots: 10 guidelines for developers of conversational AI. https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf
  41. Tramer, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., ... u0026amp; Lin, H. (2017, April). FairTest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroSu0026amp;P) (pp. 401-416). IEEE. https://github.com/columbia/fairtest, https://www.mhumbert.com/publications/eurosp17.pdf
  42. Kishore Durg (2018). Testing AI: Teach and Test to raise responsible AI. Accenture Technology Blog. https://www.accenture.com/us-en/insights/technology/testing-AI
  43. Kush R. Varshney (2018). Introducing AI Fairness 360. IBM Research Blog. https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/
  44. Dave Gershgorn (2018). Facebook says it has a tool to detect bias in its artificial intelligence. Quartz. https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence/
  45. James Wexler. (2018) The What-If Tool: Code-Free Probing of Machine Learning Models. Google AI Blog. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html
  46. Miro Dudík, John Langford, Hanna Wallach, and Alekh Agarwal (2018). Machine Learning for fair decisions. Microsoft Research Blog. https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/
  47. Veale, M., Binns, R., u0026amp; Edwards, L. (2018). Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Phil. Trans. R. Soc. A, 376, 20180083. https://doi.org/10/gfc63m
  48. Floridi, L. (2010, February). Information: A Very Short Introduction.
  49. Data Information Specialists Committee UK, 2007. http://www.disc-uk.org/qanda.html.
  50. Harwell, Drew. “Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use.” The Washington Post, WP Company, 21 Dec. 2019, www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
  51. Hildebrandt, M. (2019) ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’, Theoretical Inquiries in Law, 20(1) 83–121.
  52. D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., ... u0026amp; Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.
  53. Selinger, E. (2019). ‘Why You Can’t Really Consent to Facebook’s Facial Recognition’, One Zero. https://onezero.medium.com/why-you-cant-really-consent-to-facebook-s-facial-recognition-6bb94ea1dc8f
  54. Lum, K., u0026amp; Isaac, W. (2016). To predict and serve?. Significance, 13(5), 14-19. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
  55. LabelInsight (2016). “Drive Long-Term Trust u0026amp; Loyalty Through Transparency”. https://www.labelinsight.com/Transparency-ROI-Study
  56. Crawford and Paglen, https://www.excavating.ai/
  57. Geva, Mor u0026amp; Goldberg, Yoav u0026amp; Berant, Jonathan. (2019). Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. https://arxiv.org/pdf/1908.07898.pdf
  58. Bender, E. M., u0026amp; Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  59. Desmond U. Patton et al (2017).
  60. See Cynthia Dwork et al.,
  61. Katta Spiel, Oliver L. Haimson, and Danielle Lottridge. (2019). How to do better with gender on surveys: a guide for HCI researchers. Interactions. 26, 4 (June 2019), 62-65. DOI: https://doi.org/10.1145/3338283
  62. A. Doan, A. Y. Halevy, and Z. G. Ives. Principles of Data Integration. Morgan Kaufmann, 2012
  63. Momin M. Malik. (2019). Can algorithms themselves be biased? Medium. https://medium.com/berkman-klein-center/can-algorithms-themselves-be-biased-cffecbf2302c
  64. Fire, Michael, and Carlos Guestrin (2019). “Over-Optimization of Academic Publishing Metrics: Observing Goodhart’s Law in Action.” GigaScience 8 (giz053). https://doi.org/10.1093/gigascience/giz053.
  65. Vogelsang, A., u0026amp; Borg, M. (2019, September). Requirements engineering for machine learning: Perspectives from data scientists. In 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW) (pp. 245-251). IEEE
  66. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.
  67. Partnership on AI. Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, Requirement 5.
  68. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.https://arxiv.org/abs/1901.00064
  69. If it is not, there is likely a bug in the code. Checking a predictive model's performance on the training set cannot distinguish irreducible error (which comes from intrinsic variance of the system) from error introduced by bias and variance in the estimator; this is universal, and has nothing to do with different settings or
  70. Selbst, Andrew D. and Boyd, Danah and Friedler, Sorelle and Venkatasubramanian, Suresh and Vertesi, Janet (2018). “Fairness and Abstraction in Sociotechnical Systems”, ACM Conference on Fairness, Accountability, and Transparency (FAT*). https://ssrn.com/abstract=3265913
  71. Tools that can be used to explore and audit the predictive model fairness include FairML, Lime, IBM AI Fairness 360, SHAP, Google What-If Tool, and many others
  72. Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656. https://arxiv.org/abs/1206.4656
Responsible Sourcing of Data Enrichment Services

PAI Staff

As AI becomes increasingly pervasive, there has been growing and warranted concern over the effects of this technology on society. To fully understand these effects, however, one must closely examine the AI development process itself, which impacts society both directly and through the models it creates. This white paper, “Responsible Sourcing of Data Enrichment Services,” addresses an often overlooked aspect of the development process and what AI practitioners can do to help improve it: the working conditions of data enrichment professionals, without whom the value being generated by AI would be impossible. This paper’s recommendations will be an integral part of the shared prosperity targets being developed by Partnership on AI (PAI) as outlined in the AI and Shared Prosperity Initiative’s Agenda.

High-precision AI models depend on clean, labeled datasets. While obtaining and enriching data so it can be used to train models is sometimes perceived as a simple means to an end, the process is highly labor-intensive and often requires data enrichment workers to review, classify, and otherwise manage massive amounts of data. Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face. These conditions may stem in part from efforts to downplay AI’s dependence on this large labor force when celebrating the technology’s efficiency gains. Out of sight is also out of mind, and being overlooked can have deleterious consequences for these workers.

Data Enrichment Choices Impact Worker Well-being

There is, however, an opportunity to make a difference. The decisions AI developers make while procuring enriched data have a meaningful impact on the working conditions of data enrichment professionals. This paper examines how these sourcing decisions affect workers and proposes ways for AI developers to meaningfully improve their working conditions, outlining key worker-oriented considerations that practitioners can use as a starting point for conversations with internal teams and vendors (a schematic illustration of how a team might track these considerations follows the list below). Specifically, the paper covers worker-centric considerations for AI companies making decisions about:

  • selecting data enrichment providers,
  • running pilots,
  • designing data enrichment tasks and writing instructions,
  • assigning tasks,
  • defining payment terms and pricing,
  • establishing a communication cadence with workers,
  • conducting quality assurance,
  • and offboarding workers from a project.
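As a purely illustrative sketch (not part of the paper itself), one way an AI team could keep these worker-centric considerations visible during procurement is to track each sourcing stage in a simple review record. All stage names, fields, and example questions below are hypothetical assumptions, not terms defined by PAI or the white paper.

```python
# Illustrative sketch only: a lightweight checklist for tracking worker-centric
# review of each data enrichment sourcing stage named in the list above.
# Stage names, fields, and example questions are hypothetical.
from dataclasses import dataclass, field

STAGES = [
    "provider_selection",
    "pilot",
    "task_design_and_instructions",
    "task_assignment",
    "payment_terms_and_pricing",
    "worker_communication_cadence",
    "quality_assurance",
    "offboarding",
]

@dataclass
class StageReview:
    stage: str
    questions_reviewed: list = field(default_factory=list)  # worker-centric questions discussed
    worker_input_collected: bool = False                    # did workers or their reps weigh in?
    notes: str = ""

def open_review() -> dict:
    """Create an empty review record covering every sourcing stage."""
    return {stage: StageReview(stage=stage) for stage in STAGES}

def unreviewed_stages(review: dict) -> list:
    """Return stages with no documented worker-centric questions yet."""
    return [s for s, r in review.items() if not r.questions_reviewed]

if __name__ == "__main__":
    review = open_review()
    review["payment_terms_and_pricing"].questions_reviewed.append(
        "Is the per-task rate consistent with a local living wage at realistic throughput?"
    )
    review["payment_terms_and_pricing"].worker_input_collected = True
    print("Stages still to cover:", unreviewed_stages(review))
```

The point of such a record is simply that every stage listed above gets an explicit worker-centric discussion before a contract or task design is finalized, rather than only the stages a team happens to think about.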

This paper draws heavily on insights and input gathered during semi-structured interviews with members of the AI enrichment ecosystem conducted throughout 2020, as well as during a five-part workshop series held in the fall of 2020. The workshop series brought together more than 30 professionals from different areas of the data enrichment ecosystem, including representatives from data enrichment providers, researchers and product managers at AI companies, and leaders of civil society and labor organizations. We’d like to thank all of them for their engaged participation and valuable feedback. We’d also like to thank Elonnai Hickok for serving as the lead researcher on the project and Heather Gadonniex for her committed support and championship. Finally, this work would not have been possible without the invaluable guidance, expertise, and generosity of Mary Gray.

Our intention with this paper is to aid the industry in accounting for worker well-being when making decisions about data enrichment and to set the stage for further conversations within and across AI organizations. Additional work is needed to ensure industry practices recognize, appreciate, and fairly compensate the workers conducting data enrichment. To that end, we want to use this paper as an opportunity to increase awareness among practitioners and launch a series of conversations. We recognize that there is a lot of variance in practices across the industry and hope to start a productive dialogue with organizations across the spectrum that are working through these questions. If you work at a company involved in building AI and want to host a conversation with your colleagues around data enrichment practices, we would love to join and help facilitate the conversation. If you are interested, please get in touch here.

To read “Responsible Sourcing of Data Enrichment Services” in full, click here.


Redesigning AI for Shared Prosperity: An Agenda

PAI Staff

Artificial intelligence is expected to contribute trillions of dollars to global GDP over the coming decade, but these gains may not occur equitably or be shared widely. Today, many communities around the world face persistent underemployment, driven in part by technological advances that have divided workers into cohorts of haves and have-nots. If AI advancement continues on its current trajectory, it could accelerate this economic exclusion.

This is not the only trajectory AI could be on: shifting the emphasis from automating human tasks to genuinely complementing human workers can raise workers’ productivity while making jobs safer, more stable and rewarding, and less physically exhausting. Redesigning AI for Shared Prosperity: An Agenda is a foundational document of the AI and Shared Prosperity Initiative that outlines the practical questions stakeholders need to answer together in order to steer AI toward expanding access to good jobs rather than eliminating them. We are sharing this living Agenda with the community to inform aligned efforts, and we invite all interested stakeholders to partake in the work. (Read our press release on the Agenda here.)

The Agenda, developed under the close guidance of the Initiative’s Steering Committee and based on their deliberations, calls for the creation of shared prosperity targets: verifiable criteria the AI industry must meet to support the future of workers. These targets would consist of commitments by AI companies to create (and not destroy) good jobs—well-paying, stable, honored, and empowered ones—across the globe. The commitments could be adopted by the AI industry players either voluntarily or with regulatory encouragement.

To date, no metrics have been developed to assess the impacts of AI on job availability, wages, and quality. Nor have targets been set to ensure new products do not harm workers, either in aggregate or by category of potential vulnerability. Without clear metrics and commitments, efforts to steer AI in directions that benefit workers and society are vulnerable to unsubstantiated claims of human complementarity or human augmentation. Such claims are frequently made today by organizations that, in reality, produce job-displacing technology or rely on worker-exploiting tactics (such as invasive surveillance) to generate productivity gains. We expect that organizations genuinely seeking to complement and benefit workers with their technology would be most interested in measuring and disclosing their impact on the availability of good jobs, helping to differentiate themselves from industry actors selling exploitation-enabling technologies marketed as “worker-augmenting AI.”
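Because no such metrics yet exist, the following is a purely hypothetical sketch of what one disclosable figure might look like: a net count of “good jobs” created versus displaced by a deployment. Every field name, threshold, and weight here is an assumption made for illustration; it is not a metric proposed by PAI or the Agenda.

```python
# Hypothetical sketch: one possible "net good jobs impact" calculation a company
# could disclose for a deployment. The definition of a "good job" (pay, stability,
# autonomy) and all thresholds below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class JobCohort:
    headcount: int    # number of jobs in this cohort
    pay_ratio: float  # pay relative to a local living-wage benchmark (1.0 = at benchmark)
    stability: float  # 0..1 score for contract stability / predictable hours
    autonomy: float   # 0..1 score for worker discretion and skill use

def is_good_job(cohort: JobCohort) -> bool:
    """Illustrative threshold: pays at/above benchmark with reasonable stability and autonomy."""
    return cohort.pay_ratio >= 1.0 and cohort.stability >= 0.6 and cohort.autonomy >= 0.5

def net_good_jobs_impact(created: list, displaced: list) -> int:
    """Good jobs created minus good jobs displaced by a deployment."""
    gained = sum(c.headcount for c in created if is_good_job(c))
    lost = sum(c.headcount for c in displaced if is_good_job(c))
    return gained - lost

if __name__ == "__main__":
    created = [JobCohort(headcount=120, pay_ratio=1.2, stability=0.8, autonomy=0.7)]
    displaced = [JobCohort(headcount=200, pay_ratio=1.1, stability=0.9, autonomy=0.6)]
    print("Net good jobs impact:", net_good_jobs_impact(created, displaced))  # negative => net loss
```

Even a rough figure of this kind, if verifiable, would make claims of “worker-augmenting AI” testable rather than rhetorical.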

The success of the targets to be developed relies on support from critical stakeholders in the AI development and implementation ecosystem: workers, private sector stakeholders, governments, and international organizations. Support within and across multiple stakeholder categories is particularly important given the diffuse nature of AI’s development and deployment: technologies are often created in different companies and geographies from those in which they are implemented. Directing AI in service of expanding access to good jobs offers opportunities as well as complex challenges for each set of stakeholders. The Agenda outlines questions that need to be resolved in order to align the incentives, interests, and relative powers of key stakeholders in pursuit of a shared prosperity-advancing path for AI.

As an immediate next step, the Initiative is working to conduct thorough research on workers’ experiences of AI in the workplace. The research aims to identify key categories of impact on job quality to be included in the shared prosperity targets, as well as the most effective ways to empower workers throughout the AI development and deployment process. If you are an employer or a worker organizing group potentially interested in participating in this research, please get in touch to learn more about our research and how you can contribute.

It is our hope that this Agenda will catalyze research and debate around automation, the future of work, and the equitable distribution of the economic gains of AI, and specifically around steering AI’s progress to reduce inequality and support sustainable economic and social development. PAI also enthusiastically invites collaboration on the design of shared prosperity targets. For more information on the AI and Shared Prosperity Initiative and how to get involved, please visit shared-prosperity-initiative.

To read the Agenda’s Executive Summary, click here. To read “Redesigning AI for Shared Prosperity: An Agenda” in full, click here.


Managing the Risks of AI Research: Six Recommendations for Responsible Publication

PAI Staff

Once a niche research interest, artificial intelligence (AI) has quickly become a pervasive aspect of society with increasing influence over our lives. In turn, open questions about this technology have, in recent years, transformed into urgent ethical considerations. The Partnership on AI’s (PAI) new white paper, “Managing the Risks of AI Research: Six Recommendations for Responsible Publication,” addresses one such question: Given AI’s potential for misuse, how can AI research be disseminated responsibly?

Many research communities, such as biosecurity and cybersecurity, routinely work with information that could be used to cause harm, either maliciously or accidentally. These fields have thus established their own norms and procedures for publishing high-risk research. Thanks to breakthrough advances, AI technology has progressed rapidly in the past decade, giving the AI community less time to develop similar practices.

Recent pilots, such as OpenAI’s “staged release” of GPT-2 and the “broader impact statement” requirement at the 2020 NeurIPS conference, demonstrate a growing interest in responsible AI publication norms. Effectively anticipating and mitigating the potential negative impacts of AI research, however, will require a community-wide effort. As a first step towards developing responsible publication practices, this white paper provides recommendations for three key groups in the AI research ecosystem:

  • Individual researchers, who should disclose and report additional information in their papers and normalize discussion about the downstream consequences of research.
  • Research leadership, which should review potential downstream consequences earlier in the research pipeline and commend researchers who identify negative downstream consequences.
  • Conferences and journals, which should expand peer review criteria to include engagement with potential downstream consequences and establish separate review processes to evaluate papers based on risk and downstream consequences.

Additionally, this white paper includes an appendix that seeks to disambiguate several often-conflated terms related to responsible research: “research integrity,” “research ethics,” “research culture,” “downstream consequences,” and “broader impacts.”

This document is an artifact intended to serve as a basis for further discussion, and we seek feedback on it to inform future iterations of its recommendations. Our aim is to help build the field’s capacity to anticipate downstream consequences and mitigate potential risks.

To read “Managing the Risks of AI Research: Six Recommendations for Responsible Publication” in full, click here.


Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

PAI Staff

Executive Summary

The Partnership on AI’s “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace.

As AI technologies become increasingly prevalent in the workplace, our goal is to place workforce well-being at the center of this technological change and the resulting transformation of work, well-being, and society, and to provide a starting point for discussing and creating pragmatic solutions.

The paper organizes the aspects of workforce well-being that should be prioritized and protected throughout AI integration into six pillars. Human rights is the first pillar and underpins all other aspects of workforce well-being. The remaining five pillars are physical, financial, intellectual, and emotional well-being, as well as purpose and meaning. The Framework presents a set of recommendations that organizations can use to guide their thinking about promoting well-being throughout the integration of AI in the workplace.

The Framework is designed to initiate and inform discussions about the impact of AI, strengthening the reciprocal obligations between workers and employers while grounding that discourse in the six pillars of worker well-being.

We recognize that the impacts of AI are still emerging and are often difficult to distinguish from those of broader digital transformation, leaving organizations challenged to address the unknown and potentially fundamental changes that AI may bring to the workplace. We strongly advise that management collaborate with workers directly, or with worker representatives, in the development, integration, and use of AI systems in the workplace, as well as in the discussion and implementation of this Framework.

We acknowledge that the contexts for a dialogue on worker well-being differ. In some countries, formal structures such as workers’ councils facilitate dialogue between employers and workers; in other countries or sectors, neither these institutions nor a tradition of dialogue between the two parties exists. In all cases, the aim of this Framework is to be a useful tool for all parties to collaboratively ensure that the introduction of AI technologies goes hand in hand with a commitment to worker well-being. The importance of making such a commitment in earnest has been highlighted by the COVID-19 public health and economic crises, which exposed and exacerbated long-standing inequities in the treatment of workers. Ensuring those inequities are not perpetuated further with the introduction of AI systems into the workplace requires deliberate effort and will not happen automatically.

Recommendations

This section articulates a set of recommendations to guide organizational approaches and thinking on what to promote, what to be cognizant of, and what to protect against, in terms of worker and workforce well-being while integrating AI into the workplace. These recommendations are organized along the six well-being pillars identified above, and are meant to serve as a starting place for organizations seeking to apply the present Framework to promote workforce well-being throughout the process of AI integration. Ideally, these can be recognized formally as organizational commitments at the board level and subsequently discussed openly and regularly with the entire organization.

The “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” is a product of the Partnership on AI’s AI, Labor, and the Economy (AILE) Expert Group and was developed through a collaborative process of research, scoping, and iteration. In August 2019, at a workshop called “Workforce Well-being in the AI-Integrated Workplace” co-hosted by PAI and the Ford Foundation, this work received additional input from experts, academics, industry, labor unions, and civil society. Though this document reflects the inputs of many PAI Partner organizations, it should not under any circumstances be read as representing the views of any particular organization or individual within this Expert Group, or of any specific PAI Partner.

Acknowledgements

The Partnership on AI is deeply grateful for the input of many colleagues and partners, especially Elonnai Hickok, Ann Skeet, Christina Colclough, Richard Zuroff, and Jonnie Penn, as well as the participants of the August 2019 workshop co-hosted by PAI and the Ford Foundation. We thank Arindrajit Basu, Pranav Bidaire, and Saumyaa Naidu for their research support.
