AI and Job Quality

Insights from Frontline Workers

PAI Staff

Executive Summary

Based on an international study of on-the-job experiences with AI, this report draws on workers’ insights to point the way toward a better future for workplace AI. In addition to identifying common themes in workers’ stories, it offers guidance for the key stakeholders who want to make a positive impact.

Across industries and around the world, AI is changing work. In the coming years, this rapidly advancing technology has the potential to fundamentally reshape humanity’s relationship with labor. As highlighted by previous Partnership on AI (PAI) research, however, the development and deployment of workplace AI often lacks input from an essential group of experts: the people who directly interact with these systems in their jobs.

Bringing the perspectives of workers into this conversation is both a moral and a pragmatic imperative. Although workplace AI affects them directly, workers rarely have a voice in its creation or in decisions about its implementation. This neglect raises clear concerns about unforeseen or overlooked harms to workers. It also undermines the optimal use of AI from a corporate perspective.

This PAI report, based on an international study of on-the-job experiences with AI, seeks to address this gap. Through journals and interviews, workers in India, sub-Saharan Africa, and the United States shared their stories about workplace AI. From their reflections, PAI identified five common themes:

  1. Executive and managerial decisions shape AI’s impacts on workers, for better and worse. This starts with decisions about business models and operating models, continues through technology acquisitions and implementations, and finally manifests in direct impacts on workers.
  2. Workers have a genuine appreciation for some aspects of AI in their work and how it helps them in their jobs. The aspects they highlight point the way to more mutually beneficial approaches to workplace AI.
  3. Workplace AI’s harms are not novel — they are repetitions or extensions of harms from earlier technologies and, as such, should be possible to anticipate, mitigate, and eliminate.
  4. Current implementations of AI often serve to reduce workers’ ability to exercise their human skills and talents. Skills like judgment, empathy, and creativity are heavily constrained in these implementations. To the extent that the future of AI is intended to increase humans’ ability to use these talents, the present of AI is sending many workers in the opposite direction.
  5. Empowering workers early in AI development and implementation increases the opportunities to attain the aforementioned benefits and avoid the harms. Workers’ deep experience in their own roles means they should be treated as subject-matter experts throughout the design and implementation process.

In addition, PAI drew from these themes to offer opportunities for impact for the major stakeholders in this space:

  1. AI-implementing companies, who can commit to AI deployments that do not decrease employee job quality.
  2. AI-creating companies, who can center worker well-being and participation in their values, practices, and product designs.
  3. Workers, unions, and worker organizers, who can work to influence and participate in decisions about technology purchases and implementations.
  4. Policymakers, who can shape the environments in which AI products are developed, sold, and implemented.
  5. Investors, who can account for the downside risks posed by practices harmful to workers and the potential value created by worker-friendly technologies.

The actions of each of these groups have the potential to both increase the prosperity enabled by AI technologies and share it more broadly. Together, we can steer AI in a direction that ensures it will benefit workers and society as a whole.

Table of Contents

AI and Job Quality

Executive Summary

Introduction

The need for workers’ perspectives on workplace AI

The contributions of this report

Our Approach

Key research questions

Research methods

Site selection

Who we learned from

Participant recruitment

Major Themes and Findings

Theme 1: Executive and managerial decisions shape AI’s impacts on workers, for better and worse

Theme 2: Workers appreciate how some uses of AI have positively changed their jobs

Theme 3: Workplace AI harms repeat, continue, or intensify known possible harms from earlier technologies

Theme 4: Current implementations of AI in work are reducing workers’ opportunities for autonomy, judgment, empathy, and creativity

Theme 5: Empowering workers early in AI development and implementation increases opportunities to implement AI that benefits workers as well as their employers

Opportunities for Impact

Stakeholder Group 1: AI-implementing companies

Stakeholder Group 2: AI-creating companies

Stakeholder Group 3: Workers, unions, and worker organizers

Stakeholder Group 4: Policymakers

Stakeholder Group 5: Investors

Conclusion

Acknowledgements

Appendix 1: Detailed Site and Technology Descriptions

Appendix 2: Research Methods

Sources Cited

  1. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf.
  2. Michael Chui et al., “Global AI Survey 2021,” Survey (McKinsey & Company, December 8, 2021), https://ceros.mckinsey.com/global-ai-survey-2020-a-desktop-3-1/p/1
  3. Jacques Bughin et al., “Artificial Intelligence: The Next Digital Frontier?,” Discussion Paper (McKinsey Global Institute, June 2017), https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx
  4. Partnership on AI, “Redesigning AI for Shared Prosperity: An Agenda” (Partnership on AI, May 2021), https://partnershiponai.org/paper/redesigning-ai-agenda/
  5. David Autor, David A. Mindell, and Elisabeth B. Reynolds, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines (The MIT Press, 2022), https://doi.org/10.7551/mitpress/14109.001.0001
  6. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  7. Lant Pritchett, “The Future of Jobs Is Facing One, Maybe Two, of the Biggest Price Distortions Ever,” Middle East Development Journal 12, no. 1 (January 2, 2020): 131–56, https://doi.org/10.1080/17938120.2020.1714347
  8. James K. Harter, Frank L. Schmidt, and Theodore L. Hayes, “Business-Unit-Level Relationship between Employee Satisfaction, Employee Engagement, and Business Outcomes: A Meta-Analysis,” Journal of Applied Psychology 87, no. 2 (2002): 268–79, https://doi.org/10.1037/0021-9010.87.2.268
  9. Kaoru Ishikawa, What Is Total Quality Control? The Japanese Way, trans. David John Lu (Englewood Cliffs, N.J.: Prentice-Hall, 1985)
  10. Gary P. Pisano, The Development Factory: Unlocking the Potential of Process Innovation (Harvard Business Press, 1997)
  11. Terje Slåtten and Mehmet Mehmetoglu, “Antecedents and Effects of Engaged Frontline Employees: A Study from the Hospitality Industry,” in New Perspectives in Employee Engagement in Human Resources (Emerald Group Publishing, 2015)
  12. Kayhan Tajeddini, Emma Martin, and Levent Altinay, “The Importance of Human-Related Factors on Service Innovation and Performance,” International Journal of Hospitality Management 85 (February 1, 2020): 102431, https://doi.org/10.1016/j.ijhm.2019.102431
  13. Sergio Fernandez and David W. Pitts, “Understanding Employee Motivation to Innovate: Evidence from Front Line Employees in United States Federal Agencies,” Australian Journal of Public Administration 70, no. 2 (2011): 202–22, https://doi.org/10.1111/j.1467-8500.2011.00726.x
  14. Edward P. Lazear, “Compensation and Incentives in the Workplace,” Journal of Economic Perspectives 32, no. 3 (August 2018): 195–214, https://doi.org/10.1257/jep.32.3.195
  15. Joan Robinson, The Economics of Imperfect Competition (Springer, 1969)
  16. José Azar, Ioana Marinescu, and Marshall I. Steinbaum, “Labor Market Concentration,” Working Paper, Working Paper Series (National Bureau of Economic Research, December 2017), https://doi.org/10.3386/w24147
  17. Alan Manning, Monopsony in Motion: Imperfect Competition in Labor Markets, Monopsony in Motion (Princeton University Press, 2013), https://doi.org/10.1515/9781400850679
  18. Caitlin Lustig et al., “Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms That Interpret, Decide, and Manage,” in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16 (New York, NY, USA: Association for Computing Machinery, 2016), 1057–62, https://doi.org/10.1145/2851581.2886426
  19. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  20. Matt Scherer, “Warning: Bossware May Be Hazardous to Your Health” (Center for Democracy & Technology, July 2021), https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf
  21. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  22. Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Explainer (Data and Society, February 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf
  23. Daniel Schneider and Kristen Harknett, “Schedule Instability and Unpredictability and Worker and Family Health and Wellbeing,” Working Paper (Washington Center for Equitable Growth, September 2016), http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
  24. V.B. Dubal. “Wage Slave or Entrepreneur?: Contesting the Dualism of Legal Worker Identities.” California Law Review 105, no. 1 (2017): 65–123, https://www.jstor.org/stable/24915689
  25. Ramiro Albrieu, ed., Cracking the Future of Work: Automation and Labor Platforms in the Global South, 2021, https://fowigs.net/wp-content/uploads/2021/10/Cracking-the-future-of-work.-Automation-and-labor-platforms-in-the-Global-South-FOWIGS.pdf
  26. Phoebe V. Moore, “OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces,” Discussion Paper (European Agency for Safety and Health at Work, 2019), https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces
  27. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
  28. Ifeoma Ajunwa, “The ‘Black Box’ at Work,” Big Data & Society 7, no. 2 (July 1, 2020): 2053951720966181, https://doi.org/10.1177/2053951720938093
  29. Isabel Ebert, Isabelle Wildhaber, and Jeremias Adams-Prassl, “Big Data in the Workplace: Privacy Due Diligence as a Human Rights-Based Approach to Employee Privacy Protection,” Big Data & Society 8, no. 1 (January 1, 2021): 20539517211013052, https://doi.org/10.1177/20539517211013051
  30. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  31. Partnership on AI, “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” (Partnership on AI, August 2020), https://partnershiponai.org/paper/workforce-wellbeing/
  32. Karen Hao, “Artificial Intelligence Is Creating a New Colonial World Order,” MIT Technology Review, accessed July 24, 2022, https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/
  33. Shakir Mohamed, Marie-Therese Png, and William Isaac, “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence,” Philosophy & Technology 33 (December 1, 2020), https://doi.org/10.1007/s13347-020-00405-8
  34. Aarathi Krishnan et al., “Decolonial AI Manyfesto,” https://manyfesto.ai/
  35. OECD.AI (2021), powered by EC/OECD (2021). “Database of National AI Policies.” https://oecd.ai/en/dashboards
  36. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  37. Adapted from Qualtrics’ employee lifecycle model, “Employee Lifecycle: The 7 Stages Every Employer Must Understand and Improve,” Qualtrics, https://www.qualtrics.com/experience-management/employee/employee-lifecycle/
  38. Mayank Kumar Golpelwar, Global Call Center Employees in India: Work and Life between Globalization and Tradition (Springer, 2015)
  39. Hye Jin Rho, Shawn Fremstad, and Hayley Brown, “A Basic Demographic Profile of Workers in Frontline Industries” (Center for Economic and Policy Research, April 2020), https://cepr.net/wp-content/uploads/2020/04/2020-04-Frontline-Workers.pdf
  40. U.S. Bureau of Labor Statistics. “All Employees, Warehousing and Storage.” FRED, Federal Reserve Bank of St. Louis, July 2022. https://fred.stlouisfed.org/series/CES4349300001
  41. Lee Rainie et al., “AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns” (Pew Research Center, March 2022), https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2022/03/PS_2022.03.17_AI-HE_REPORT.pdf
  42. James Manyika et al., “Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages” (McKinsey Global Institute, November 28, 2017), https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
  43. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  44. International Labour Office. “Women and Men in the Informal Economy: A Statistical Picture (Third Edition).” International Labour Office, 2018. http://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/publication/wcms_626831.pdf
  45. International Labour Office. “Women and Men in the Informal Economy: A Statistical Picture (Third Edition).” International Labour Office, 2018. http://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/publication/wcms_626831.pdf
  46. OECD, and International Labour Organization. “Tackling Vulnerability in the Informal Economy,” 2019. https://www.oecd-ilibrary.org/content/publication/939b7bcd-en
  47. James C. Scott, Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed, Yale Agrarian Studies (New Haven, Conn.: Yale Univ. Press, 2008)
  48. Reema Nanavaty, Expert interview with Reema Nanavaty, Director of Self Employed Women’s Association (SEWA), July 11, 2022
  49. Paul E. Spector, “Perceived Control by Employees: A Meta-Analysis of Studies Concerning Autonomy and Participation at Work,” Human Relations 39, no. 11 (November 1, 1986): 1005–16, https://doi.org/10.1177/001872678603901104
  50. Henry Ongori, “A Review of the Literature on Employee Turnover,” African Journal of Business Management 1, no. 3 (June 30, 2007): 049–054, https://academicjournals.org/article/article1380537420_Ongori.pdf
  51. See Virginia Doellgast and Sean O’Brady, “Making Call Center Jobs Better: The Relationship between Management Practices and Worker Stress,” June 1, 2020, https://ecommons.cornell.edu/handle/1813/74307 for additional detail and impacts of punitive managerial uses of monitoring technology in call centers, including increased worker stress
  52. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  53. Matt Scherer, “Warning: Bossware May Be Hazardous to Your Health” (Center for Democracy & Technology, July 2021), https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf
  54. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  55. Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Explainer (Data and Society, February 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf
  56. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  57. Human Impact Partners and Warehouse Worker Resource Center, “The Public Health Crisis Hidden in Amazon Warehouses,” January 2021, https://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon-Warehouses-HIP-WWRC-01-21.pdf
  58. V.B. Dubal. “Wage Slave or Entrepreneur?: Contesting the Dualism of Legal Worker Identities.” California Law Review 105, no. 1 (2017): 65–123, https://www.jstor.org/stable/24915689
  59. Ramiro Albrieu, ed., Cracking the Future of Work: Automation and Labor Platforms in the Global South, 2021, https://fowigs.net/wp-content/uploads/2021/10/Cracking-the-future-of-work.-Automation-and-labor-platforms-in-the-Global-South-FOWIGS.pdf
  60. Daniel Schneider and Kristen Harknett, “Schedule Instability and Unpredictability and Worker and Family Health and Wellbeing,” Working Paper (Washington Center for Equitable Growth, September 2016), http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf
  61. Arvind Narayanan, “How to Recognize AI Snake Oil,” https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
  62. Frederike Kaltheuner, ed., Fake AI (Meatspace Press, 2021), https://fakeaibook.com
  63. Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/
  64. Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf
  65. Alessandro Delfanti and Bronwyn Frey, “Humanly Extended Automation or the Future of Work Seen through Amazon Patents,” Science, Technology, & Human Values 46, no. 3 (May 1, 2021): 655–82, https://doi.org/10.1177/0162243920943665
  66. Phoebe V. Moore, “OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces,” Discussion Paper (European Agency for Safety and Health at Work, 2019), https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces
  67. Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf
  68. Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf
  69. Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/
  70. Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?,” Technological Forecasting and Social Change 114 (January 1, 2017): 254–80, https://doi.org/10.1016/j.techfore.2016.08.019
  71. “These Are the Top 10 Job Skills of Tomorrow – and How Long It Takes to Learn Them,” World Economic Forum, https://www.weforum.org/agenda/2020/10/top-10-work-skills-of-tomorrow-how-long-it-takes-to-learn-them/
  72. Daniel Susskind, “Technological Unemployment,” in The Oxford Handbook of AI Governance, ed. Justin Bullock et al. (Oxford University Press), https://doi.org/10.1093/oxfordhb/9780197579329.013.42
  73. Christopher Mims, “Self-Driving Cars Could Be Decades Away, No Matter What Elon Musk Said,” WSJ, https://www.wsj.com/articles/self-driving-cars-could-be-decades-away-no-matter-what-elon-musk-said-11622865615
  74. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)
  75. Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” January 11, 2022, https://doi.org/10.48550/arXiv.2201.04200
  76. World Economic Forum. “Positive AI Economic Futures.” Insight Report. World Economic Forum, November 2021. https://www.weforum.org/reports/positive-ai-economic-futures/
  77. Nithya Sambasivan and Rajesh Veeraraghavan, “The Deskilling of Domain Expertise in AI Development,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22 (New York, NY, USA: Association for Computing Machinery, 2022), 1–14, https://doi.org/10.1145/3491102.3517578
  78. Sabrina Genz, Lutz Bellmann, and Britta Matthes, “Do German Works Councils Counter or Foster the Implementation of Digital Technologies?,” Jahrbücher Für Nationalökonomie Und Statistik 239, no. 3 (June 1, 2019): 523–64, https://doi.org/10.1515/jbnst-2017-0160
  79. Alan G. Robinson and Dean M. Schroeder, “The Role of Front-Line Ideas in Lean Performance Improvement,” Quality Management Journal 16, no. 4 (January 1, 2009): 27–40, https://doi.org/10.1080/10686967.2009.11918248
  80. Jeffrey K. Liker, The Toyota Way: 14 Management Principles From the World’s Greatest Manufacturer (McGraw Hill Professional, 2003)
  81. Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production (CRC Press, 1988)
  82. Kayhan Tajeddini, Emma Martin, and Levent Altinay, “The Importance of Human-Related Factors on Service Innovation and Performance,” International Journal of Hospitality Management 85 (February 1, 2020): 102431, https://doi.org/10.1016/j.ijhm.2019.102431
  83. Katherine C. Kellogg, Mark Sendak, and Suresh Balu, “AI on the Front Lines,” MIT Sloan Management Review, May 4, 2022, https://sloanreview.mit.edu/article/ai-on-the-front-lines/
  84. Zeynep Ton, “The Good Jobs Solution,” Harvard Business Review, 2017, 32. https://goodjobsinstitute.org/wp-content/uploads/2018/03/Good-Jobs-Solution-Full-Report.pdf
  85. Abigail Gilbert et al., “Case for Importance: Understanding the Impacts of Technology Adoption on ‘Good Work’” (Institute for the Future of Work, May 2022), https://uploads-ssl.webflow.com/5f57d40eb1c2ef22d8a8ca7e/62a72d3439edd66ed6f79654_IFOW_Case%20for%20Importance.pdf
  86. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  87. Julian Posada, “The Future of Work Is Here: Toward a Comprehensive Approach to Artificial Intelligence and Labour,” Ethics of AI in Context, 2020, http://arxiv.org/abs/2007.05843
  88. Jeffrey Brown, “The Role of Attrition in AI’s ‘Diversity Problem’” (Partnership on AI, April 2021), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/04/PAI_researchpaper_aftertheoffer.pdf
  89. Tina M Park, “Making AI Inclusive: 4 Guiding Principles for Ethical Engagement” (Partnership on AI, July 2022), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/07/PAI_whitepaper_making-ai-inclusive.pdf
  90. Fabio Urbina et al., “Dual Use of Artificial-Intelligence-Powered Drug Discovery,” Nature Machine Intelligence 4, no. 3 (March 2022): 189–91, https://doi.org/10.1038/s42256-022-00465-9
  91. Aarathi Krishnan et al., “Decolonial AI Manyfesto,” accessed July 24, 2022, https://manyfesto.ai/
  92. Lama Nachman, “Beyond the Automation-Only Approach,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/beyond-the-automation-only-approach/
  93. Christina Colclough, “Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table,” in Digital Work in the Planetary Market, International Development Research Centre Series (MIT Press, 2022), https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/61034/IDL-61034.pdf
  94. Christina Colclough, “When Algorithms Hire and Fire,” International Union Rights 25, no. 3 (2018): 6–7. https://muse.jhu.edu/article/838277/summary
  95. Brishen Rogers, “The Law and Political Economy of Workplace Technological Change,” Harvard Civil Rights-Civil Liberties Law Review 55 (2020): 531
  96. Wilneida Negrón, “Little Tech Is Coming for Workers” (Coworker.org, 2021), https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf
  97. Jeremias Adams-Prassl, “What If Your Boss Was an Algorithm? Economic Incentives, Legal Challenges, and the Rise of Artificial Intelligence at Work,” Comparative Labor Law & Policy Journal 41 (2021 2019): 123
  98. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  99. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  100. Fekitamoeloa ‘Utoikamanu, “Closing the Technology Gap in Least Developed Countries,” United Nations (United Nations), accessed July 25, 2022, https://www.un.org/en/chronicle/article/closing-technology-gap-least-developed-countries
  101. Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf
  102. Allison Levitsky, “California Might Require Employers to Disclose Workplace Surveillance,” Protocol, April 21, 2022, https://www.protocol.com/bulletins/ab-1651-california-workplace-surveillance
  103. “The EU Artificial Intelligence Act,” The AI Act, September 7, 2021, https://artificialintelligenceact.eu/
  104. Daron Acemoglu, Andrea Manera, and Pascual Restrepo, “Does the US Tax Code Favor Automation?,” Working Paper, Working Paper Series (National Bureau of Economic Research, April 2020), https://doi.org/10.3386/w27052
  105. Emmanuel Moss et al., “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest” (Data and Society, June 2021), https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf
  106. Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/
  107. Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf
  108. Business Roundtable, “Statement on the Purpose of a Corporation,” July 2021, https://s3.amazonaws.com/brt.org/BRT-StatementonthePurposeofaCorporationJuly2021.pdf
  109. Larry Fink, “Larry Fink’s Annual 2022 Letter to CEOs,” accessed May 27, 2022, https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter
  110. Katanga Johnson, “U.S. SEC Chair Provides More Detail on New Disclosure Rules, Treasury Market Reform | Reuters,” https://www.reuters.com/business/sustainable-business/sec-considers-disclosure-mandate-range-climate-metrics-2021-06-23/
  111. “Your Guide to Amazon’s 2022 Shareholder Event,” United for Respect, accessed May 27, 2022, https://united4respect.org/amazon-shareholders/

ABOUT ML Foundational Resource

Overview


ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles) is a multi-year, multi-stakeholder initiative aimed at building transparency into the AI development process, industry-wide, through full lifecycle documentation. On this page, you will find the collected outputs of ABOUT ML, a library of resources designed to help organizations and individuals begin implementing transparency at scale. To further increase the usability of these resources, recommended reading plans for different readers are provided below.

Learn more about the origins of ABOUT ML and contributors to the project here.

Recommended Reading Plans

At the foundation of these resources lies the newly revised ABOUT ML Reference Document, which both identifies transparency goals and offers suggestions on how they might be achieved. Using principles provided by the Reference Document and insights about implementation gathered through our research, PAI plans to release additional ML documentation guides, templates, recommendations, and other artifacts. These future artifacts will also be available on this page.

Read the full ABOUT ML Reference Document


Recommended Reading Plans for…


ML System Developers/Deployers

ML system developers/deployers are encouraged to do a deep dive into Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will benefit most from further participation in the ABOUT ML effort: engaging with the community in the forthcoming online forum and testing the efficacy and applicability of the templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed from use cases as opportunities to implement ML documentation processes within an organization.


ML System Procurers

ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to get ideas about what concepts to include as requirements for models and data in future requests for proposals relevant to ML systems. Additionally, they could use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with the business owners and requirements writers to further elicit detailed key performance indicators and measures for success for any procured ML systems.


Users of ML System APIs and/or Experienced End Users of ML Systems

Users of ML system APIs and/or experienced end users of ML systems might skim the document and review all of the coral Quick Guides to get a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML Systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.


Internal Compliance Teams

Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).


External Auditors

External auditors could skim the Appendix: Compiled List of Documentation Questions to familiarize themselves with both high-level concepts and tactically operationalized tenets to look for when determining whether an ML system is well-documented.


Lay Users of ML Systems and/or Members of Low-Income Communities

Lay users of ML systems and/or members of low-income communities might skim the document and review all of the blue “How We Define” boxes in order to get an overarching understanding of the text’s contents. These users are encouraged to continue learning ABOUT ML systems by exploring how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of the ABOUT ML Reference Document.

Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

PAI Staff

Executive Summary

The Partnership on AI’s “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace.

As AI technologies become increasingly prevalent in the workplace, our goal is to place workforce well-being at the center of this technological change and the resulting transformation of work, well-being, and society, and to provide a starting point for discussing and creating pragmatic solutions.

The paper categorizes the aspects of workforce well-being that should be prioritized and protected throughout AI integration into six pillars. Human rights, the first pillar, underpins all aspects of workforce well-being. The remaining five pillars are physical, financial, intellectual, and emotional well-being, along with purpose and meaning. The Framework presents a set of recommendations that organizations can use to guide organizational thinking about promoting well-being throughout the integration of AI in the workplace.

The Framework is designed to initiate and inform discussions on the impact of AI that strengthen the reciprocal obligations between workers and employers, while grounding that discourse in six pillars of worker well-being.

We recognize that the impacts of AI are still emerging and often difficult to distinguish from the impact of broader digital transformation, leading to organizations being challenged to address the unknown and potentially fundamental changes that AI may bring to the workplace. We strongly advise that management collaborate with workers directly or with worker representatives in the development, integration, and use of AI systems in the workplace, as well as in the discussion and implementation of this Framework.

We acknowledge that the contexts for having a dialogue on worker well-being may differ. For instance, in some countries there are formal structures in place, such as workers’ councils, that facilitate the dialogue between employers and workers. In other cases, countries or sectors have neither these institutions nor a tradition of dialogue between the two parties. In all cases, the aim of this Framework is to be a useful tool for all parties to collaboratively ensure that the introduction of AI technologies goes hand in hand with a commitment to worker well-being. The importance of making such a commitment in earnest has been highlighted by the COVID-19 public health and economic crises, which exposed and exacerbated long-standing inequities in the treatment of workers. Making sure those are not perpetuated further with the introduction of AI systems into the workplace requires deliberate effort and will not happen automatically.

Recommendations

This section articulates a set of recommendations to guide organizational approaches and thinking on what to promote, what to be cognizant of, and what to protect against, in terms of worker and workforce well-being while integrating AI into the workplace. These recommendations are organized along the six well-being pillars identified above, and are meant to serve as a starting place for organizations seeking to apply the present Framework to promote workforce well-being throughout the process of AI integration. Ideally, these can be recognized formally as organizational commitments at the board level and subsequently discussed openly and regularly with the entire organization.

The “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” is a product of the Partnership on AI’s AI, Labor, and the Economy (AILE) Expert Group, formed through a collaborative process of research, scoping, and iteration. In August 2019, at a workshop called “Workforce Well-being in the AI-Integrated Workplace” co-hosted by PAI and the Ford Foundation, this work received additional input from experts, academics, industry, labor unions, and civil society. Though this document reflects the inputs of many PAI Partner organizations, it should not under any circumstances be read as representing the views of any particular organization or individual within this Expert Group, or any specific PAI Partner.

Acknowledgements

The Partnership on AI is deeply grateful for the input of many colleagues and partners, especially Elonnai Hickok, Ann Skeet, Christina Colclough, Richard Zuroff, Jonnie Penn as well as the participants of the August 2019 workshop co-hosted by PAI and the Ford Foundation. We thank Arindrajit Basu, Pranav Bidaire, and Saumyaa Naidu for the research support.

AI, Labor, and the Economy Case Study Compendium

PAI Staff

Preface

The AI, Labor, and the Economy Case Study Compendium is a work product of the Partnership on AI’s “AI, Labor, and the Economy” (AILE) Working Group, formed through a collaborative process of research scoping and iteration. Though this work product reflects the inputs of many members of PAI, it should not be read as representing the views of any particular organization or individual within this Working Group, or of PAI at large.

The Partnership on AI (PAI) is a 501(c)(3) nonprofit organization established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

One of PAI’s significant program lines is a series of Working Groups reflective of its Thematic Pillars, which are a driving force in research and best practice generation. The Partnership’s activities are deliberately determined by its coalition of over 80 members, including civil society groups, corporate users of AI, and numerous academic artificial intelligence research labs, but from the outset of the organization, the intention has been to create a place for open critique and reflection. Crucially, the Partnership is an independent organization; though supported and shaped by our Partner community, the Partnership is ultimately more than the sum of its parts and will make independent determinations to which its Partners will collectively contribute, but never individually dictate. PAI provides staff administrative and project management support to Working Groups, oversees project selection, and provides financial resources or direct research support to projects as needed.

AI, Labor, and the Economy Case Study Compendium

Preface

Objectives and Scope

Subject Diversity and Common Motifs 

Themes and Observations

Terms and AI techniques used

Methodology

Limitations and Further Work

Conclusion

Appendix

Sources Cited

  1. See Acknowledgements for more information
  2. Researchers have argued for the need for “more systematic collection of the use of these technologies at the firm level.” The case study project intends to provide quantitative and qualitative data at the firm level. For more, see “AI, Labor, Productivity and the Need for Firm-Level Data,” Manav Raj and Robert Seamans, April 2018.
  3. In business circles, many pre-established techniques such as pattern-matching heuristics, or linear regression and other forms of statistical data analysis, have recently been rebranded as “AI” (and in the case of statistical regression, also as “ML”). We accept these expansive definitions not because they are fashionable, but because they are more useful for understanding the economic consequences of present forms of automation. See section “Terms and AI techniques used” for more details on how these terms are defined.
  4. Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, (Cambridge, UK: Cambridge University Press, 2010).
  5. Quoted figures are reported by subject organizations, not independent analyses.
  6. As the case illustrates, the social and labor impacts can often cascade beyond the location of the AI implementation. Kate Crawford and Vladan Joler explore this concept extensively as it relates to the “vast planetary network” of labor, energy, and data to support small interactions with an Amazon Echo. See more at www.anatomyof.ai.
  7. An ‘AI-native’ refers to a company that was founded with a stated mission of leveraging artificial intelligence or machine learning as a key enabling technology. ‘AI-natives’ can build infrastructure from the ground-up without the need to shift from legacy systems (e.g., on-premise to cloud-based storage).
  8. For more, see “Is the Solow Paradox Back?”, McKinsey Quarterly, June 2018.
  9. We do not have a measure of hours worked to estimate the increase in labor productivity precisely.
  10. Some have argued that inequality could increase with the proliferation of AI in the long term. While we do not address this question, please see Joseph Stiglitz and Anton Korinek’s paper for more: “Artificial Intelligence and Its Implications for Income Distribution and Unemployment,” December 2017.
  11. It is not clear what the net-impact of AI on jobs will be in the near future. The McKinsey Global Institute estimates that “total full-time-equivalent-employment demand might remain flat, or even that there could be a slightly negative net impact on jobs by 2030,” yet demand for new types of jobs may increase, as seen with the advent of the personal computer in the late 20th century.
  12. This only includes scientists and research associates and does not account for data scientists, automation engineers, and lab technicians that support teams with their services.
  13. Zymergen is an “AI-native” company that was founded in 2013. As such, the company started its data storage in the cloud. All data infrastructure could be built with a clean slate and modern toolchains, making data exportation and analysis on cloud systems easier than it might be for an incumbent (such as Tata Steel Europe). The latter might be dependent on proprietary or embedded on-premise systems that were installed without these objectives in mind.
  14. Natural Language Processing (NLP), a popular subfield of AI.
  15. CNNs (convolutional neural networks) were tested as part of Zymergen’s broader recommendation engine and were also used in isolated cases within the lab (e.g., computer vision for plate readers).
  16. During the time of writing the case study in fall 2018, the company had raised $174M. On December 13, 2018, the company announced a $400M Series C round from multiple investors. See coverage of the announcement on Bloomberg and the Wall Street Journal.
  17. Our definition draws on the classic articulation of automation described by Parasuraman, Sheridan, and Wickens (2000): https://ieeexplore.ieee.org/document/844354

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System

PAI Staff

Overview

This report was written by the staff of the Partnership on AI (PAI) and many of our Partner organizations, with particular input from the members of PAI’s Fairness, Transparency, and Accountability Working Group. Our work on this topic was initially prompted by California’s Senate Bill 10 (S.B. 10), which would mandate the purchase and use of statistical and machine learning risk assessment tools for pretrial detention decisions, but our work has subsequently expanded to assess the use of such software across the United States.

Though this document incorporates suggestions and direct authorship from around 30 to 40 of our Partner organizations, it should not under any circumstances be read as representing the views of any specific member of the Partnership. Instead, it is an attempt to report the widely held views of the artificial intelligence research community as a whole.

The Partnership on AI is a 501(c)(3) nonprofit organization established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

The Partnership’s activities are determined in collaboration with its coalition of over 80 members, including civil society groups, corporate developers and users of AI, and numerous academic artificial intelligence research labs. PAI aims to create a space for open conversation, the development of best practices, and coordination of technical research to ensure that AI is used for the benefit of humanity and society. Crucially, the Partnership is an independent organization; though supported and shaped by our Partner community, the Partnership is ultimately more than the sum of its parts and makes independent determinations to which its Partners collectively contribute, but never individually dictate. PAI provides administrative and project management support to Working Groups, oversees project selection, and provides financial resources or direct research support to projects as needs dictate.

The Partnership on AI is deeply grateful for the collaboration of so many colleagues in this endeavor and looks forward to further convening and undertaking the multi-stakeholder research needed to build best practices for the use of AI in this critical domain.

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System

Overview

Executive Summary

Introduction

Minimum Requirements for the Responsible Deployment of Criminal Justice Risk Assessment Tools

Requirement 1: Training datasets must measure the intended variables

Requirement 2: Bias in statistical models must be measured and mitigated

Requirement 3: Tools must not conflate multiple distinct predictions

Requirement 4: Predictions and how they are made must be easily interpretable

Requirement 5: Tools should produce confidence estimates for their predictions

Requirement 6: Users of risk assessment tools must attend trainings on the nature and limitations of the tools

Requirement 7: Policymakers must ensure that public policy goals are appropriately reflected in these tools

Requirement 8: Tool designs, architectures, and training data must be open to research, review, and criticism

Requirement 9: Tools must support data retention and reproducibility to enable meaningful contestation and challenges

Requirement 10: Jurisdictions must take responsibility for the post-deployment evaluation, monitoring, and auditing of these tools

Conclusion

Sources Cited

  1. For example, many risk assessment tools assign individuals to decile ranks, converting their risk score into a rating from 1-10 which reflects whether they’re in the bottom 10% of risky individuals (1), the next highest 10% (2), and so on (3-10). Alternatively, risk categorization could be based on thresholds labeled as “low,” “medium,” or “high” risk.
  2. Whether this is the case depends on how one defines AI; it would be true under many but not all of the definitions surveyed, for instance, in Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2010, at 2. PAI considers more expansive definitions, which include any automation of analysis and decision-making by humans, to be most helpful.
  3. In California, the recently enacted California Bail Reform Act (S.B. 10) mandates the implementation of risk assessment tools while eliminating money bail in the state, though implementation of the law has been put on hold as a result of a 2020 ballot measure funded by the bail bonds industry to repeal it; see https://ballotpedia.org/California_Replace_Cash_Bail_with_Risk_Assessments_Referendum_(2020); Robert Salonga, Law ending cash bail in California halted after referendum qualifies for 2020 ballot, San Jose Mercury News (Jan. 17, 2019), https://www.mercurynews.com/2019/01/17/law-ending-cash-bail-in-california-halted-after-referendum-qualifies-for-2020-ballot/. In addition, a new federal law, the First Step Act of 2018 (S. 3649), requires the Attorney General to review existing risk assessment tools and develop recommendations for “evidence-based recidivism reduction programs” and to “develop and release” a new risk- and needs- assessment system by July 2019 for use in managing the federal prison population. The bill allows the Attorney General to use currently-existing risk and needs assessment tools, as appropriate, in the development of this system.
  4. In addition, many of our civil society partners have taken a clear public stance to this effect, and some go further in suggesting that only individual-level decision-making will be adequate for this application regardless of the robustness and validity of risk assessment instruments. See The Use of Pretrial ‘Risk Assessment’ Instruments: A Shared Statement of Civil Rights Concerns, http://civilrightsdocs.info/pdf/criminal-justice/Pretrial-Risk-Assessment-Full.pdf (shared statement of 115 civil rights and technology policy organizations, arguing that all pretrial detention should follow from evidentiary hearings rather than machine learning determinations, on both procedural and accuracy grounds); see also Comments of Upturn; The Leadership Conference on Civil and Human Rights; The Leadership Conference Education Fund; NYU Law’s Center on Race, Inequality, and the Law; The AI Now Institute; Color Of Change; and Media Mobilizing Project on Proposed California Rules of Court 4.10 and 4.40, https://www.upturn.org/static/files/2018-12-14_Final-Coalition-Comment-on-SB10-Proposed-Rules.pdf (“Finding that the defendant shares characteristics with a collectively higher risk group is the most specific observation that risk assessment instruments can make about any person. Such a finding does not answer, or even address, the question of whether detention is the only way to reasonably assure that person’s reappearance or the preservation of public safety. That question must be asked specifically about the individual whose liberty is at stake — and it must be answered in the affirmative in order for detention to be constitutionally justifiable.”) PAI notes that the requirement for an individualized hearing before detention implicitly includes a need for timeliness. Many jurisdictions across the US have detention limits at 24 or 48 hours without hearings. 
Aspects of this stance are shared by some risk assessment tool makers; see, Arnold Ventures’ Statement of Principles on Pretrial Justice and Use of Pretrial Risk Assessment, https://craftmediabucket.s3.amazonaws.com/uploads/AV-Statement-of-Principles-on-Pretrial-Justice.pdf.
  5. See Ecological Fallacy section and Baseline D for further discussion of this topic.
  6. Quantitatively, accuracy is usually defined as the fraction of correct answers the model produces among all the answers it gives. So a model that answers correctly in 4 out of 5 cases would have an accuracy of 80%. Interestingly, models which predict rare phenomena (like violent criminality) can be incredibly accurate without being useful for their prediction tasks. For example, if only 1% of individuals will commit a violent crime, a model that predicts that no one will commit a violent crime will have 99% accuracy even though it does not correctly identify any of the cases where someone actually commits a violent crime. For this reason and others, evaluation of machine learning models is a complicated and subtle topic which is the subject of active research. In particular, note that inaccuracy can and should be subdivided into errors of “Type I” (false positive) and “Type II” (false negative) – one of which may be more acceptable than the other, depending on the context.
  7. Calibration is a property of models such that, among the group they predict a 50% risk for, 50% of cases recidivate. Note that this says nothing about the accuracy of the prediction: a coin toss would be calibrated in that sense. All risk assessment tools should be calibrated, but there are more specific desirable properties, such as calibration within groups (discussed in Requirement 2 below), that not all tools will or should satisfy completely.
  8. Sarah L. Desmarais, Evan M. Lowder, Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys, MacArthur Safety and Justice Challenge (Feb 2019). The issue of cross-comparison applies not only to geography but to time. It may be valuable to use comparisons over time to assist in measuring the validity of tools, though such evaluations must be corrected for the fact that crime in the United States is presently a rapidly changing (and still on the whole rapidly declining) phenomenon.
  9. As a technical matter, a model can be biased for subpopulations while being unbiased on average for the population as a whole.
  10. Note here that the phenomenon of societal bias—the existence of beliefs, expectations, institutions, or even self-propagating patterns of behavior that lead to unjust outcomes for some groups—is not always the same as, or reflected in statistical bias, and vice versa. One can instead think of these as an overlapping Venn diagram with a large intersection. Most of the concerns about risk assessment tools are about biases that are simultaneously statistical and societal, though there are some that are about purely societal bias. For instance, if non-uniform access to transportation (which is a societal bias) causes higher rates of failure to appear for court dates in some communities, the problem is a societal bias, but not a statistical one. The inclusion of demographic parity measurements as part of model bias measurement (see Requirement 2) may be a way to measure this, though really the best solutions involve distinct policy responses (for instance, providing transportation assistance for court dates or finding ways to improve transit to underserved communities).
  11. For instance, Eckhouse et al. propose a 3-level taxonomy of biases. Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook, and Julie Ciccolini, Layers of Bias: A Unified Approach for Understanding Problems with Risk Assessment, Criminal Justice and Behavior, (Nov 2018).
  12. Some of the experts within the Partnership oppose the use of risk assessment tools specifically because of their pessimism that sufficient data exists or could practically be collected to meet purposes (a) and (b).
  13. Moreover, defining recidivism is difficult in the pretrial context. Usually, recidivism variables are defined using a set time period, e.g., whether someone is arrested within 1 year of their initial arrest or whether someone is arrested within 3 years of their release from prison. In the pretrial context, recidivism is defined as whether the individual is arrested during the time after their arrest (or pretrial detention) and before the individual’s trial. That period of time, however, can vary significantly from case to case, so it is necessary to ensure that each risk assessment tool predicts an appropriately defined measure of recidivism or public safety risk.
  14. See, e.g., Report: The War on Marijuana in Black and White, ACLU (2013), https://www.aclu.org/report/report-war-marijuana-black-and-white; ACLU submission to Inter-American Commission on Human Rights, Hearing on Reports of Racism in the Justice System of the United States, https://www.aclu.org/sites/default/files/assets/141027_iachr_racial_disparities_aclu_submission_0.pdf, (Oct 2017); Samuel Gross, Maurice Possley, Klara Stephens, Race and Wrongful Convictions in the United States, National Registry of Exonerations, https://www.law.umich.edu/special/exoneration/Documents/Race_and_Wrongful_Convictions.pdf; but see Jennifer L. Skeem and Christopher Lowenkamp, Risk, Race & Recidivism: Predictive Bias and Disparate Impact, Criminology 54 (2016), 690, https://risk-resilience.berkeley.edu/sites/default/files/journal-articles/files/criminology_proofs_archive.pdf (For some categories of crime in some jurisdictions, victimization and self-reporting surveys imply crime rates are comparable to arrest rates across demographic groups; an explicit and transparent reweighting process is procedurally appropriate even in cases where the correction it results in is small).
  15. See David Robinson and John Logan Koepke, Stuck in a Pattern: Early evidence on ‘predictive policing’ and civil rights, (Aug. 2016). https://www.upturn.org/reports/2016/stuck-in-a-pattern/ (“Criminologists have long emphasized that crime reports, and other statistics gathered by the police, are not an accurate record of the crime that happens in a community. In short, the numbers are greatly influenced by what crimes citizens choose to report, the places police are sent on patrol, and how police decide to respond to the situations they encounter. The National Crime Victimization Survey (conducted by the Department of Justice) found that from 2006-2010, 52 percent of violent crime victimizations went unreported to police and 60 percent of household property crime victimizations went unreported. Historically, the National Crime Victimization Survey ‘has shown that police are not notified of about half of all rapes, robberies and aggravated assaults.’”) See also Kristian Lum and William Isaac, To predict and serve? (2016): 14-19.
  16. Carl B. Klockars, Some Really Cheap Ways of Measuring What Really Matters, in Measuring What Matters: Proceedings From the Policing Research Meetings, 195, 195-201 (1999), https://www.ncjrs.gov/pdffiles1/nij/170610.pdf. [https://perma.cc/BRP3-6Z79] (“If I had to select a single type of crime for which its true level—the level at which it is reported—and the police statistics that record it were virtually identical, it would be bank robbery. Those figures are likely to be identical because banks are geared in all sorts of ways…to aid in the reporting and recording of robberies and the identification of robbers. And, because mostly everyone takes bank robbery seriously, both Federal and local police are highly motivated to record such events.”)
  17. ACLU, The War on Marijuana in Black and White: Billions of Dollars Wasted on Racially Biased Arrests, (2013), available at https://www.aclu.org/files/assets/aclu-thewaronmarijuana-rel2.pdf.
  18. Lisa Stoltenberg & Stewart J. D’Alessio, Sex Differences in the Likelihood of Arrest, J. Crim. Justice 32 (5), 2004, 443-454; Lisa Stoltenberg, David Eitle & Stewart J. D’Alessio, Race and the Probability of Arrest, Social Forces 81(4) 2003 1381-1387; Tia Stevens & Merry Morash, Racial/Ethnic Disparities in Boys’ Probability of Arrest and Court Actions in 1980 and 2000: The Disproportionate Impact of ‘‘Getting Tough’’ on Crime, Youth and Juvenile Justice 13(1), (2014).
  19. Delbert S. Elliott, Lies, Damn Lies, and Arrest Statistics, (1995), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.182.9427&rep=rep1&type=pdf, 11.
  20. Simply reminding people to appear improves appearance rates. Pretrial Justice Center for Courts, Use of Court Date Reminder Notices to Improve Court Appearance Rates, (Sept. 2017).
  21. There are a number of obstacles that risk assessment toolmakers have identified towards better predictions on this front. Firstly, there is a lack of consistent data and definitions to help disentangle willful flight from justice from failures to appear for reasons that are either unintentional or not indicative of public safety risk. Policymakers may need to take the lead in defining and collecting data on these reasons, as well as identifying interventions besides incarceration that may be most appropriate for responding to them.
  22. This is known in the algorithmic fairness literature as “fairness through unawareness”; see Moritz Hardt, Eric Price, & Nathan Srebro, Equality of Opportunity in Supervised Learning, Proc. NeurIPS 2016, https://arxiv.org/pdf/1610.02413.pdf, first publishing the term and citing earlier literature for proofs of its ineffectiveness, particularly Pedreshi, Ruggieri, & Turini, Discrimination-aware data mining, Knowledge Discovery & Data Mining, Proc. SIGKDD (2008), http://eprints.adm.unipi.it/2192/1/TR-07-19.pdf.gz. In other fields, blindness is the more common term for the idea of achieving fairness by ignoring protected class variables (e.g., “race-blind admissions” or “gender-blind hiring”).
  23. Another way of conceiving omitted variable bias is as follows: data-related biases as discussed in Requirement 1 are problems with the rows in a database or spreadsheet: the rows may contain asymmetrical errors, or not be a representative sample of events as they occur in the world. Omitted variable bias, in contrast, is a problem with not having enough or the right columns in a dataset.
  24. These specific examples are from the Equivant/Northpoint COMPAS risk assessment; see sample questionnaire at https://assets.documentcloud.org/documents/2702103/Sample-Risk-Assessment-COMPAS-CORE.pdf
  25. This list is by no means exhaustive. Another approach involves attempting to de-bias datasets by removing all information regarding the protected class variables. See, e.g., James E. Johndrow & Kristian Lum, An algorithm for removing sensitive information: application to race-independent recidivism prediction, (Mar. 15, 2017), https://arxiv.org/pdf/1703.04957.pdf. Not only would the protected class variable itself be removed but also variation in other variables that is correlated with the protected class variable. This would yield predictions that are independent of the protected class variables, but could have negative implications for accuracy. This method formalizes the notion of fairness known as “demographic parity,” and has the advantage of minimizing disparate impact, such that outcomes should be proportional across demographics. Similar to affirmative action, however, this approach would raise additional fairness questions given different baselines across demographics.
  26. See Moritz Hardt, Eric Price, & Nathan Srebro, Equality of Opportunity in Supervised Learning, Proc. NeurIPS 2016, https://arxiv.org/pdf/1610.02413.pdf.
  27. This is due to different baseline rates of recidivism for different demographic groups in U.S. criminal justice data. See J. Kleinberg, S. Mullainathan, M. Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. Proc. ITCS, (2017), https://arxiv.org/abs/1609.05807 and A. Chouldechova, Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Proc. FAT/ML 2016, https://arxiv.org/abs/1610.07524. Another caveat is that such a correction can reduce overall utility, as measured as a function of the number of individuals improperly detained or released. See, e.g., Sam Corbett-Davies et al., Algorithmic Decision-Making and the Cost of Fairness, (2017), https://arxiv.org/pdf/1701.08230.pdf.
  28. As long as the training data show higher arrest rates among minorities, statistically accurate scores must of mathematical necessity have a higher false positive rate for minorities. For a paper that outlines how equalizing FPRs (a measure of unfair treatment) requires creating some disparity in predictive accuracy across protected categories, see J. Kleinberg, S. Mullainathan, M. Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. Proc. ITCS, (2017), https://arxiv.org/abs/1609.05807; for arguments about the limitations of FPRs as a sole and sufficient metric, see e.g. Sam Corbett-Davies and Sharad Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, working paper, https://arxiv.org/abs/1808.00023.
  29. Geoff Pleiss et al., On Fairness and Calibration, (2017) (describing the challenges of using this approach when baselines are different), https://arxiv.org/pdf/1709.02012.pdf.
  30. The stance that unequal false positive rates represent material unfairness was popularized in a study by Julia Angwin et al., Machine Bias, ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, (2016), and confirmed in further detail in, e.g., Julia Dressel and Hany Farid, The accuracy, fairness and limits of predicting recidivism, Science Advances, 4(1), (2018), http://advances.sciencemag.org/content/advances/4/1/eaao5580.full.pdf. Whether or not FPRs are the right measure of fairness is disputed within the statistics literature.
  31. See, e.g., Alexandra Chouldechova, Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, Big Data 5(2), https://www.liebertpub.com/doi/full/10.1089/big.2016.0047, (2017).
  32. See, e.g., Niki Kilbertus et al., Avoiding Discrimination Through Causal Reasoning, (2018), https://arxiv.org/pdf/1706.02744.pdf.
  33. Formally, the toolmaker must distinguish “resolved” and “unresolved” discrimination. Unresolved discrimination results from a direct causal path between the protected class and predictor that is not blocked by a “resolving variable.” A resolving variable is one that is influenced by the protected class variable in a manner that we accept as nondiscriminatory. For example, if women are more likely to apply for graduate school in the humanities and men are more likely to apply for graduate school in STEM fields, and if humanities departments have lower acceptance rates, then women might exhibit lower acceptance rates overall even if conditional on department they have higher acceptance rates. In this case, the department variable can be considered a resolving variable if our main concern is discriminatory admissions practices. See, e.g., Niki Kilbertus et al., Avoiding Discrimination Through Causal Reasoning, (2018), https://arxiv.org/pdf/1706.02744.pdf.
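The admissions example in the note can be made concrete with a small numeric illustration (all figures below are invented for demonstration): within each department women are admitted at a higher rate, yet the aggregate rate for women is lower because more women apply to the department with the lower overall acceptance rate.

```python
# Illustrative numbers only.
# dept: (women_applicants, women_admitted, men_applicants, men_admitted)
depts = {
    "humanities": (800, 200, 200, 40),   # women 25% vs men 20% admitted
    "stem":       (200, 120, 800, 400),  # women 60% vs men 50% admitted
}

def rate(apps, adm):
    return adm / apps

w_apps = sum(d[0] for d in depts.values()); w_adm = sum(d[1] for d in depts.values())
m_apps = sum(d[2] for d in depts.values()); m_adm = sum(d[3] for d in depts.values())

# Within every department, women are admitted at a higher rate...
assert all(rate(d[0], d[1]) > rate(d[2], d[3]) for d in depts.values())
# ...yet in aggregate women's rate (32%) is below men's (44%), because the
# department variable "resolves" the apparent discrimination.
print(rate(w_apps, w_adm), rate(m_apps, m_adm))  # → 0.32 0.44
```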
  34. In addition to the trade-offs highlighted in this section, it should be noted that these methods require a precise taxonomy of protected classes. Although it is common in the United States to use simple taxonomies defined by the Office of Management and Budget (OMB) and the US Census Bureau, such taxonomies cannot capture the complex reality of race and ethnicity. See Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity, 62 Fed. Reg. 210 (Oct 1997), https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf. Nonetheless, algorithms for bias correction have been proposed that detect groups of decision subjects with similar circumstances automatically. For an example of such an algorithm, see Tatsunori Hashimoto et al., Fairness Without Demographics in Repeated Loss Minimization, Proc. ICML 2018, http://proceedings.mlr.press/v80/hashimoto18a/hashimoto18a.pdf. Algorithms have also been developed to detect groups of people that are spatially or socially segregated. See, e.g., Sebastian Benthall & Bruce D. Haynes, Racial categories in machine learning, Proc. FAT* 2019, https://dl.acm.org/authorize.cfm?key=N675470. Further experimentation with these methods is warranted. For one evaluation, see Jon Kleinberg, An Impossibility Theorem for Clustering, Advances in Neural Information Processing Systems 15, NeurIPS 2002.
  35. The best way to do this deserves further research on human-computer interaction. For instance, if judges are shown multiple predictions labelled “zero disparate impact for those who will not reoffend”, “most accurate prediction,” “demographic parity,” etc, will they understand and respond appropriately? If not, decisions about what bias corrections to use might be better made at the level of policymakers or technical government experts evaluating these tools.
  36. Cost-benefit models require explicit trade-off choices to be made among different objectives, including liberty, safety, and fair treatment of different categories of defendants. These choices must be made transparently and accountably by policymakers. For a macroscopic example of such a calculation, see David Roodman, The Impacts of Incarceration on Crime, Open Philanthropy Project report, September 2017, p. 131, at https://www.openphilanthropy.org/files/Focus_Areas/Criminal_Justice_Reform/The_impacts_of_incarceration_on_crime_10.pdf.
  37. Sandra G. Mayson, Dangerous Defendants, 127 Yale L.J. 490, 509-510 (2018).
  38. Id., at 510. (“The two risks are different in kind, are best predicted by different variables, and are most effectively managed in different ways.”)
  39. For instance, needing childcare increases the risk of failure to appear (see Brian H. Bornstein, Alan J. Tomkins & Elizabeth N. Neeley, Reducing Courts’ Failure to Appear Rate: A Procedural Justice Approach, U.S. DOJ report 234370, available at https://www.ncjrs.gov/pdffiles1/nij/grants/234370.pdf) but is less likely to increase the risk of recidivism.
  40. For example, if the goal of a risk assessment tool is to advance the twin public policy goals of reducing incarceration and ensuring defendants appear for their court dates, then the tool should not conflate a defendant’s risk of knowingly fleeing justice with their risk of unintentionally failing to appear, since the latter can be mitigated by interventions besides incarceration (e.g. giving the defendant the opportunity to sign up for phone calls or SMS-based reminders about their court date, or ensuring the defendant has transportation to court on the day they are to appear).
  41. Notably, part of the holding in Loomis, mandated a disclosure in any Presentence Investigation Report that COMPAS risk assessment information “was not developed for use at sentencing, but was intended for use by the Department of Corrections in making determinations regarding treatment, supervision, and parole,” Wisconsin v. Loomis (881 N.W.2d 749).
  42. M.L. Cummings, Automation Bias in Intelligent Time Critical Decision Support Systems, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2634&rep=rep1&type=pdf.
  43. It is important to note, however, that there is also evidence of the opposite phenomenon, whereby users might simply ignore the risk assessment tools’ predictions. In Christin’s ethnography of risk assessment users, she notes that professionals often “buffer” their professional judgment from the influence of automated tools. She quotes a former prosecutor as saying of risk assessment, “When I was a prosecutor I didn’t put much stock in it, I’d prefer to look at actual behaviors. I just didn’t know how these tests were administered, in which circumstances, with what kind of data.” From Christin, A., 2017, Algorithms in practice: Comparing web journalism and criminal justice, Big Data & Society, 4(2).
  44. See Wisconsin v. Loomis (881 N.W.2d 749).
  45. “Specifically, any PSI containing a COMPAS risk assessment must inform the sentencing court about the following cautions regarding a COMPAS risk assessment’s accuracy: (1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; and (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.” Wisconsin v. Loomis (881 N.W.2d 749).
  46. Computer interfaces, even for simple tasks, can be highly confusing to users. For example, one study found that users failed to notice anomalies on a screen designed to show them choices they had previously selected for confirmation over 50% of the time, even after carefully redesigning the confirmation screen to maximize the visibility of anomalies. See Campbell, B. A., & Byrne, M. D. (2009). Now do voters notice review screen anomalies? A look at voting system usability, Proceedings of the 2009 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE ’09).
  47. This point depends on the number of input variables used for prediction. With a model that has a large number of features (such as COMPAS), it might be appropriate to use a method like gradient-boosted decision trees or random forests, and then provide the interpretation using an approximation. See Zach Lipton, The Mythos of Model Interpretability, Proc. ICML 2016, available at https://arxiv.org/pdf/1606.03490.pdf, §4.1. For examples of methods for providing explanations of complex models, see, e.g., Gilles Louppe et al., Understanding the variable importances in forests of randomized trees, Proc. NIPS 2013, available at https://papers.nips.cc/paper/4928-understanding-variable-importances-in-forests-of-randomized-trees.pdf; Marco Ribeiro et al., LIME – Local Interpretable Model-Agnostic Explanations.
  48. Laurel Eckhouse et al., Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment, 46(2) Criminal Justice and Behavior 185–209 (2018), https://doi.org/10.1177/0093854818811379
  49. See id.
  50. See id.
  51. The lowest risk category for the Colorado Pretrial Assessment Tool (CPAT) included scores 0-17, while the highest risk category included a much broader range of scores: 51-82. In addition, the highest risk category corresponded to a Public Safety Rate of 58% and a Court Appearance Rate of 51%. Pretrial Justice Institute, (2013). Colorado Pretrial Assessment Tool (CPAT): Administration, scoring, and reporting manual, Version 1. Pretrial Justice Institute. Retrieved from http://capscolorado.org/yahoo_site_admin/assets/docs/CPAT_Manual_v1_-_PJI_2013.279135658.pdf
  52. User and usability studies such as those from the human-computer interaction field can be employed to study the question of how much deference judges give to pretrial or pre-sentencing investigations. For example, a study could examine how error bands affect judges’ inclination to follow predictions or (when they have other instincts) overrule them.
  53. As noted in Requirement 4, these mappings of probabilities to scores or risk categories are not necessarily intuitive, i.e. they are often not linear or might differ for different groups.
  54. In a simple machine learning prediction model, the tool might simply produce an output like “35% chance of recidivism.” A bootstrapped tool uses many resampled versions of the training dataset to make different predictions, allowing an output like, “It is 80% likely that this individual’s chance of recidivating is in the 20% – 50% range.” Of course, these error bars are still relative to the training data, including any sampling or omitted variable biases it may reflect.
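A minimal sketch of this bootstrapping idea follows, using a plain resampled base rate among comparable individuals rather than any particular tool's model; the data and interval level are illustrative assumptions.

```python
import numpy as np

def bootstrap_rate_interval(outcomes, n_boot=1000, level=0.80, seed=0):
    """Bootstrap an interval for the underlying recidivism rate given
    binary outcomes of comparable past individuals. A simplified sketch
    of the resampling idea described above, not any specific tool's method."""
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes)
    n = len(outcomes)
    # Each bootstrap replicate resamples the data with replacement and
    # recomputes the point estimate (here, the observed rate).
    rates = outcomes[rng.integers(0, n, size=(n_boot, n))].mean(axis=1)
    alpha = (1 - level) / 2
    lo, hi = np.quantile(rates, [alpha, 1 - alpha])
    return lo, hi

# 35% observed rate among 100 comparable individuals (hypothetical data)
data = [1] * 35 + [0] * 65
lo, hi = bootstrap_rate_interval(data)
print(f"It is ~80% likely the rate lies in [{lo:.0%}, {hi:.0%}]")
```

The interval's width shrinks as the pool of comparable individuals grows, which is exactly the uncertainty the note argues tools should surface.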
  55. The specific definition of fairness would depend on the fairness correction used.
  56. Humans are not naturally good at understanding probabilities or confidence estimates, though some training materials and games exist that can teach these skills; see, e.g., https://acritch.com/credence-game/.
  57. To inform this future research, DeMichele et al.’s study, based on interviews with judges using the PSA tool, can provide useful context for how judges understand and interpret these tools. DeMichele, Matthew, Megan Comfort, Shilpi Misra, Kelle Barrick, and Peter Baumgartner, The Intuitive-Override Model: Nudging Judges Toward Pretrial Risk Assessment Instruments, (April 25, 2018). Available at SSRN: https://ssrn.com/abstract=3168500 or http://dx.doi.org/10.2139/ssrn.3168500.
  58. See the University of Washington’s Tech Policy Lab’s Diverse Voices methodology for a structured approach to inclusive requirements gathering. Magassa, Lassana, Meg Young, and Batya Friedman, Diverse Voices, (2017), http://techpolicylab.org/diversevoicesguide/.
  59. Such disclosures support public trust by revealing the existence and scope of a system, and by enabling challenges to the system’s role in government. See Pasquale, Frank. The black box society: The secret algorithms that control money and information. Harvard University Press, (2015). Certain legal requirements on government use of computers demand such disclosures. At the federal level, the Privacy Act of 1974 requires agencies to publish notices of the existence of any “system of records” and provides individuals access to their records. Similar data protection rules exist in many states and in Europe under the General Data Protection Regulation (GDPR).
  60. Reisman, Dillon, Jason Schultz, Kate Crawford, Meredith Whittaker, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, (2018).
  61. See Cal. Crim. Code §§ 1320.24 (e) (7), 1320.25 (a), effective Oct 2020.
  62. First Step Act, H.R.5682 — 115th Congress (2017-2018).
  63. For further discussion of the social justice concerns related to using trade secret law to prevent the disclosure of the data and algorithms behind risk assessment tools, see Taylor R. Moore, Trade Secrets and Algorithms as Barriers to Social Justice, Center for Democracy and Technology (August 2017), https://cdt.org/files/2017/08/2017-07-31-Trade-Secret-Algorithms-as-Barriers-to-Social-Justice.pdf.
  64. Several countries already publish the details of their risk assessment models. See, e.g., Tollenaar, Nikolaj, et al. StatRec-Performance, validation and preservability of a static risk prediction instrument, Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique 129.1 (2016): 25-44 (in relation to the Netherlands); A Compendium of Research and Analysis on the Offender Assessment System (OaSys) (Robin Moore ed., Ministry of Justice Analytical Series, 2015) (in relation to the United Kingdom). Recent legislation also attempts to mandate transparency safeguards, see Idaho Legislature, House Bill No.118 (2019).
  65. See, e.g., Jeff Larson et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 23, 2016), https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. For a sample of the research that became possible as a result of ProPublica’s data, see https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=propublica+fairness+broward. Data provided by Kentucky’s Administrative Office of the Courts has also enabled scholars to examine the impact of the implementation of the PSA tool in that state. Stevenson, Megan, Assessing Risk Assessment in Action (June 14, 2018), Minn. L. Rev. 103, Forthcoming; available at https://ssrn.com/abstract=3016088.
  66. For an example of how a data analysis competition dealt with privacy concerns when releasing a dataset with highly sensitive information about individuals, see Ian Lundberg et al., Privacy, ethics, and data access: A case study of the Fragile Families Challenge (Sept. 1, 2018), https://arxiv.org/pdf/1809.00103.pdf.
  67. See Arvind Narayanan et al., A Precautionary Approach to Big Data Privacy (Mar. 19, 2015), http://randomwalker.info/publications/precautionary.pdf.
  68. See id. at p. 20 and 21 (describing how some sensitive datasets are only shared after the recipient completes a data use course, provides information about the recipient, and physically signs a data use agreement).
  69. For a discussion of the due process concerns that arise when information is withheld in the context of automated decision-making, see Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249 (2007), https://ssrn.com/abstract=1012360. See also, Paul Schwartz, Data Processing and Government Administration: The Failure of the American Legal Response to the Computer, 43 Hastings L. J. 1321 (1992).
  70. Additionally, the ability to reconstitute decisions evidences procedural regularity in critical decision processes and allows individuals to trust the integrity of automated systems even when they remain partially non-disclosed. See Joshua A. Kroll et al., Accountable algorithms, 165 U. Pa. L. Rev. 633 (2016).
  71. The ability to contest scores is not only important for defendant’s rights to adversarially challenge adverse information, but also for the ability of judges and other professionals to engage with the validity of the risk assessment outputs and develop trust in the technology. See Daniel Kluttz et al., Contestability and Professionals: From Explanations to Engagement with Algorithmic Systems (January 2019), https://dx.doi.org/10.2139/ssrn.3311894
  72. “Criteria tinkering” occurs when court clerks manipulate input values to obtain the score they think is correct for a particular defendant. See Hannah-Moffat, Kelly, Paula Maurutto, and Sarah Turnbull, Negotiated risk: Actuarial illusions and discretion in probation, 24.3 Canada J. of L. & Society/La Revue Canadienne Droit et Société 391 (2009). See also Angele Christin, Comparing Web Journalism and Criminal Justice, 4.2 Big Data & Society 1.
  73. For further guidance on how such audits and evaluations might be structured, see AI Now Institute, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, https://ainowinstitute.org/aiareport2018.pdf; Christian Sandvig et al., Auditing algorithms: Research methods for detecting discrimination on internet platforms, (2014).
  74. See John Logan Koepke and David G. Robinson, Danger Ahead: Risk Assessment and the Future of Bail Reform, 93 Wash. L. Rev. 1725 (2018).
  75. For a discussion, see Latanya Sweeney & Ji Su Yoo, De-anonymizing South Korean Resident Registration Numbers Shared in Prescription Data, Technology Science, (Sept. 29, 2015), https://techscience.org/a/2015092901. Techniques also exist that can provably bound the risk of re-identification; see the literature on methods for provable privacy, notably differential privacy. A good introduction is Kobbi Nissim, Thomas Steinke, Alexandra Wood, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan, Differential Privacy: A Primer for a Non-technical Audience, http://privacytools.seas.harvard.edu/files/privacytools/files/pedagogical-document-dp_0.pdf.
  76. Brandon Buskey and Andrea Woods, Making Sense of Pretrial Risk Assessments, National Association of Criminal Defense Lawyers, (June 2018), https://www.nacdl.org/PretrialRiskAssessment. Human Rights Watch proposes a clear alternative: “The best way to reduce pretrial incarceration is to respect the presumption of innocence and stop jailing people who have not been convicted of a crime absent concrete evidence that they pose a serious and specific threat to others if they are released. Human Rights Watch recommends having strict rules requiring police to issue citations with orders to appear in court to people accused of misdemeanor and low-level, non-violent felonies, instead of arresting and jailing them. For people accused of more serious crimes, Human Rights Watch recommends that the release, detain, or bail decision be made following an adversarial hearing, with right to counsel, rules of evidence, an opportunity for both sides to present mitigating and aggravating evidence, a requirement that the prosecutor show sufficient evidence that the accused actually committed the crime, and high standards for showing specific, known danger if the accused is released, as opposed to relying on a statistical likelihood.” Human Rights Watch, Q & A: Profile Based Risk Assessment for US Pretrial Incarceration, Release Decisions, (June 1, 2018), https://www.hrw.org/news/2018/06/01/q-profile-based-risk-assessment-us-pretrial-incarceration-release-decisions.