ABOUT ML Reference Document

Last Updated

To share your ideas, suggestions, and other feedback related to this evolving document, please reach out to Christine Custis, Head of ABOUT ML and Fairness, Transparency, and Accountability. Learn more about the origins of ABOUT ML and contributors to the project here.

Section 0: How to Use This Document

This ABOUT ML Reference Document is a reference and foundational resource. Future iterations of the ABOUT ML work will include a PLAYBOOK of specifications, guides, recommendations, templates, and other meaningful artifacts to support ML documentation work by individuals in any and all of the roles listed below. Use cases made up of various artifacts from the PLAYBOOK, along with other implementation instructions, will be packaged as PILOTS for PAI Partners to try out in their organizations. Feedback from their use of these cases will further mature the artifacts in the PLAYBOOK and will support the ABOUT ML team’s continued, rigorous, scientific investigation of relevant research questions in the ML documentation space.

Recommended Reading Plan

Based on the role a reader plays in their organization and/or the community of stakeholders they belong to, there are several different approaches for reading and using the information in this ABOUT ML Reference Document:

ML system developers/deployers are encouraged to do a deep dive exploration of Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will most benefit from further participation in the ABOUT ML effort by engaging with the community in the forthcoming online forum and by testing the efficacy and applicability of templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed based on use cases as an opportunity to implement ML documentation processes within an organization.

ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to get ideas about what concepts to include as requirements for models and data in future requests for proposals relevant to ML systems. Additionally, they could use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with business owners and requirements writers to further elicit detailed key performance indicators and measures of success for any procured ML systems.

Users of ML system APIs and/or experienced end users of ML systems might skim the document and review all of the coral-colored Quick Guides to get a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML Systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.

Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).

External auditors could skim Appendix A: Compiled List of Documentation Questions and familiarize themselves with high-level concepts as well as tactically operationalized tenets to look for in their determination of whether or not an ML system is well-documented.

Lay users of ML systems and/or members of low-income communities might skim the document and review all of the blue-colored How We Define boxes in order to get an overarching understanding of the text’s contents. These users are encouraged to continue learning ABOUT ML systems by exploring how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of this Reference Document.

Quick Guides

Example

More information about a topic. Oftentimes, this will be a high-level and less academic expression of a term or concept.

Throughout this ABOUT ML Reference Document, we will use coral callout boxes with text to further explain a concept. This is a readability enhancement tactic recommended by our Diverse Voices panel and is meant to make the content more accessible and consumable to lay users of machine learning systems.

How We Define

Example Term

We’ll use this space to give background definitions of terms and phrases and, in some cases, to call out existing work related to the ABOUT ML effort.

Throughout this ABOUT ML Reference Document, we will use the blue callout boxes with text to showcase our accepted (near-consensus) definition of a term or phrase. This is meant to give foundational background information to viewers of the document and also provides a baseline of understanding for any artifacts that may be derived from this work. Additional terms can be found in the glossary section. Future versions of this reference and/or artifacts in the forthcoming PLAYBOOK will explore audio/video offerings to support the consumption of this information by verbal/visual learners.

Contact for Support

If you have any questions or would like to learn more about this effort, please reach out to us by visiting our ABOUT ML page to make contributions to the work.

ABOUT ML Reference Document

Section 0: How to Use this Document

Recommended Reading Plan

Quick Guides

How We Define

Contact for Support

Section 1: Project Overview

1.1 Statement of Importance for ABOUT ML Project

1.1.0 Importance of Transparency: Why a Company Motivated by the Bottom Line Should Adopt ABOUT ML Recommendations

1.1.1 About This Document and Version Numbering

1.1.2 ABOUT ML Goals and Plan

1.1.3 ABOUT ML Project Process and Timeline Overview

1.1.4 Who Is This Project For?

1.1.4.1 Audiences for the ABOUT ML Resources

1.1.4.2 Stakeholders That Should Be Consulted While Putting Together ABOUT ML Resources

1.1.4.3 Audiences for ABOUT ML Documentation Artifacts

1.1.4.4 Whose Voices Are Currently Reflected in ABOUT ML?

1.1.4.5 Origin Story

Section 2: Literature Review (Current Recommendations on Documentation for Transparency in the ML Lifecycle)

2.1 Demand for Transparency and AI Ethics in ML Systems 

2.2 Documentation to Operationalize AI Ethics Goals

2.2.1 Documentation as a Process in the ML Lifecycle

2.2.2 Key Process Considerations for Documentation

2.3 Research Themes on Documentation for Transparency 

2.3.1 System Design and Set Up

2.3.2 System Development

2.3.3 System Deployment

Section 3: Preliminary Synthesized Documentation Suggestions

3.4.1 Suggested Documentation Sections for Datasets

3.4.1.1 Data Specification

3.4.1.1.1 Motivation

3.4.1.2 Data Curation 

3.4.1.2.1 Collection

3.4.1.2.2 Processing

3.4.1.2.3 Composition

3.4.1.2.4 Types and Sources of Judgement Calls

3.4.1.3 Data Integration

3.4.1.3.1 Use

3.4.1.3.2 Distribution

3.4.1.4 Maintenance

3.4.2 Suggested Documentation Sections for Models

3.4.2.1 Model Specifications

3.4.2.2 Model Training

3.4.2.3 Evaluation

3.4.2.4 Model Integration

3.4.2.5 Maintenance

Section 4: Current Challenges of Implementing Documentation

Section 5: Conclusions

Version 0

Version 1

Appendix A: Compiled List of Documentation Questions 

Fact Sheets (Arnold et al. 2018)

Data Sheets (Gebru et al. 2018)

Model Cards (Mitchell et al. 2018)

A “Nutrition Label” for Privacy (Kelley et al. 2009)

The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards (Holland et al. 2019)

Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science (Bender and Friedman 2018)

Appendix B: Diverse Voices Process and Artifacts

Procurement Recruitment Email

Procurement Confirmation Email 

Appendix C: Glossary

Sources Cited

  1. Holstein, K., Vaughan, J.W., Daumé, H., Dudík, M., & Wallach, H.M. (2018). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? CHI.
  2. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  3. World Wide Web Consortium Process Document (W3C) process outlined here: https://www.w3.org/2019/Process-20190301/
  4. Internet Engineering Task Force (IETF) process outlined here: https://www.ietf.org/standards/process/
  5. The Web Hypertext Application Technology Working Group (WHATWG) process outlined here: https://whatwg.org/faq#process
  6. Oever, N., Moriarty, K. The Tao of IETF: A novice's guide to the Internet Engineering Task Force. https://www.ietf.org/about/participate/tao/.
  7. Young, M., Magassa, L. and Friedman, B. (2019) Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology 21(2), 89-103.
  8. Friedman, B, Kahn, Peter H., and Borning, A., (2008) Value sensitive design and information systems. In Kenneth Einar Himma and Herman T. Tavani (Eds.) The Handbook of Information and Computer Ethics., (pp. 70-100) John Wiley & Sons, Inc. http://jgustilo.pbworks.com/f/the-handbook-of-information-and-computer-ethics.pdf#page=104; Davis, J., and P. Nathan, L. (2015). Value sensitive design: applications, adaptations, and critiques. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains. (pp. 11-40) DOI: 10.1007/978-94-007-6970-0_3. https://www.researchgate.net/publication/283744306_Value_Sensitive_Design_Applications_Adaptations_and_Critiques; Borning, A. and Muller, M. (2012). Next steps for value sensitive design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). (pp 1125-1134) DOI: https://doi.org/10.1145/2207676.2208560 https://dl.acm.org/citation.cfm?id=2208560
  9. Pichai, S., (2018). AI at Google: our principles. The Keyword. https://www.blog.google/technology/ai/ai-principles/; IBM’s Principles for Trust and Transparency. IBM Policy. https://www.ibm.com/blogs/policy/trust-principles/; Microsoft AI principles. Microsoft. https://www.microsoft.com/en-us/ai/our-approach-to-ai; Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  10. Zeng, Y., Lu, E., and Huangfu, C. (2018) Linking artificial intelligence principles. CoRR https://arxiv.org/abs/1812.04814.
  11. Jessica Fjeld, Hannah Hilligoss, Nele Achten, Maia Levy Daniel, Sally Kagay, and Joshua Feldman (2018). Principled artificial intelligence - a map of ethical and rights based approaches, Berkman Center for Internet and Society, https://ai-hr.cyber.harvard.edu/primp-viz.html
  12. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  13. Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. https://arxiv.org/pdf/1906.11668.pdf
  14. Ananny, M., and Kate Crawford (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society 20 (3): 973-989.
  15. Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019, January). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA (pp. 27-28). http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_188.pdf; Mittelstadt, B. (2019). AI Ethics–Too Principled to Fail? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293
  16. Greene, D., Hoffmann, A. L., & Stark, L. (2019, January). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/handle/10125/59651
  17. Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In AAAI/ACM Conf. on AI Ethics and Society (Vol. 1). https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/
  18. Algorithmic Impact Assessment (2019) Government of Canada https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html
  19. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. arXiv preprint arXiv:1903.12262. https://arxiv.org/abs/1903.12262; Responsible AI Licenses v0.1. RAIL: Responsible AI Licenses. https://www.licenses.ai/ai-licenses
  20. See Citation 5
  21. Safe Face Pledge. https://www.safefacepledge.org/; Montreal Declaration on Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/; The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. (2018). Amnesty International and Access Now. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf; Dagstuhl Declaration on the application of machine learning and artificial intelligence for social good. https://www.dagstuhl.de/fileadmin/redaktion/Programm/Seminar/19082/Declaration/Declaration.pdf
  22. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  23. Wagstaff, K. (2012). Machine learning that matters. https://arxiv.org/pdf/1206.4656.pdf ; Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Early engagement and new technologies: Opening up the laboratory (pp. 55-95). Springer, Dordrecht. https://vsdesign.org/publications/pdf/non-scan-vsd-and-information-systems.pdf
  24. Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. https://arxiv.org/pdf/1807.00553.pdf
  25. Safe Face Pledge. https://www.safefacepledge.org/
  26. Montreal Declaration on Responsible AI. Universite de Montreal. https://www.montrealdeclaration-responsibleai.com/
  27. Diverse Voices How To Guide. Tech Policy Lab, University of Washington. https://techpolicylab.uw.edu/project/diverse-voices/
  28. Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  29. Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
  30. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. https://arxiv.org/abs/1803.09010; Hazard Communication Standard: Safety Data Sheets. Occupational Safety and Health Administration, US Department of Labor. https://www.osha.gov/Publications/OSHA3514.html
  31. Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. https://arxiv.org/abs/1805.03677; Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A nutrition label for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security (p. 4). ACM. http://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf
  32. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229). ACM. https://arxiv.org/abs/1810.03993
  33. Hind, M., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., Olteanu, A., & Varshney, K. R. (2018). Increasing Trust in AI Services through Supplier's Declarations of Conformity. https://arxiv.org/abs/1808.07261
  34. Veale M., Van Kleek M., & Binns R. (2018) ‘Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making’ in Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI 2018. https://arxiv.org/abs/1802.01029.
  35. Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. https://arxiv.org/abs/1903.12262
  36. Cooper, D. M. (2013, April). A Licensing Approach to Regulation of Open Robotics. In Paper for presentation for We Robot: Getting down to business conference, Stanford Law School.
  37. Responsible AI Practices. Google AI. https://ai.google/education/responsible-ai-practices
  38. Everyday Ethics for Artificial Intelligence. (2019). IBM. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
  39. Federal Trade Commission. (2012). Best Practices for Common Uses of Facial Recognition Technologies (Staff Report). Federal Trade Commission, 30. https://www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf
  40. Microsoft (2018). Responsible bots: 10 guidelines for developers of conversational AI. https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf
  41. Tramer, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., ... & Lin, H. (2017, April). FairTest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 401-416). IEEE. https://github.com/columbia/fairtest, https://www.mhumbert.com/publications/eurosp17.pdf
  42. Kishore Durg (2018). Testing AI: Teach and Test to raise responsible AI. Accenture Technology Blog. https://www.accenture.com/us-en/insights/technology/testing-AI
  43. Kush R. Varshney (2018). Introducing AI Fairness 360. IBM Research Blog. https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/
  44. Dave Gershgorn (2018). Facebook says it has a tool to detect bias in its artificial intelligence. Quartz. https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence/
  45. James Wexler. (2018) The What-If Tool: Code-Free Probing of Machine Learning Models. Google AI Blog. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html
  46. Miro Dudík, John Langford, Hanna Wallach, and Alekh Agarwal (2018). Machine Learning for fair decisions. Microsoft Research Blog. https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/
  47. Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Phil. Trans. R. Soc. A, 376, 20180083. https://doi.org/10/gfc63m
  48. Floridi, L. (2010, February). Information: A Very Short Introduction.
  49. Data Information Specialists Committee UK, 2007. http://www.disc-uk.org/qanda.html.
  50. Harwell, Drew. “Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use.” The Washington Post, WP Company, 21 Dec. 2019, www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
  51. Hildebrandt, M. (2019) ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’, Theoretical Inquiries in Law, 20(1) 83–121.
  52. D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., ... & Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.
  53. Selinger, E. (2019). ‘Why You Can’t Really Consent to Facebook’s Facial Recognition’, One Zero. https://onezero.medium.com/why-you-cant-really-consent-to-facebook-s-facial-recognition-6bb94ea1dc8f
  54. Lum, K., & Isaac, W. (2016). To predict and serve?. Significance, 13(5), 14-19. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
  55. LabelInsight (2016). “Drive Long-Term Trust & Loyalty Through Transparency”. https://www.labelinsight.com/Transparency-ROI-Study
  56. Crawford and Paglen, https://www.excavating.ai/
  57. Geva, Mor & Goldberg, Yoav & Berant, Jonathan. (2019). Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. https://arxiv.org/pdf/1908.07898.pdf
  58. Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604.
  59. Desmond U. Patton et al (2017).
  60. See Cynthia Dwork et al.,
  61. Katta Spiel, Oliver L. Haimson, and Danielle Lottridge. (2019). How to do better with gender on surveys: a guide for HCI researchers. Interactions. 26, 4 (June 2019), 62-65. DOI: https://doi.org/10.1145/3338283
  62. A. Doan, A. Y. Halevy, and Z. G. Ives. Principles of Data Integration. Morgan Kaufmann, 2012
  63. Momin M. Malik. (2019). Can algorithms themselves be biased? Medium. https://medium.com/berkman-klein-center/can-algorithms-themselves-be-biased-cffecbf2302c
  64. Fire, Michael, and Carlos Guestrin (2019). “Over-Optimization of Academic Publishing Metrics: Observing Goodhart’s Law in Action.” GigaScience 8 (giz053). https://doi.org/10.1093/gigascience/giz053.
  65. Vogelsang, A., & Borg, M. (2019, September). Requirements engineering for machine learning: Perspectives from data scientists. In 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW) (pp. 245-251). IEEE
  66. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.
  67. Partnership on AI. Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, Requirement 5.
  68. Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.https://arxiv.org/abs/1901.00064
  69. If it is not, there is likely a bug in the code. Checking a predictive model's performance on the training set cannot distinguish irreducible error (which comes from intrinsic variance of the system) from error introduced by bias and variance in the estimator; this is universal, and has nothing to do with different settings or
  70. Selbst, Andrew D. and Boyd, Danah and Friedler, Sorelle and Venkatasubramanian, Suresh and Vertesi, Janet (2018). “Fairness and Abstraction in Sociotechnical Systems”, ACM Conference on Fairness, Accountability, and Transparency (FAT*). https://ssrn.com/abstract=3265913
  71. Tools that can be used to explore and audit the predictive model fairness include FairML, Lime, IBM AI Fairness 360, SHAP, Google What-If Tool, and many others
  72. Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656. https://arxiv.org/abs/1206.4656

Responsible Sourcing of Data Enrichment Services

PAI Staff

As AI becomes increasingly pervasive, there has been growing and warranted concern over the effects of this technology on society. To fully understand these effects, however, one must closely examine the AI development process itself, which impacts society both directly and through the models it creates. This white paper, “Responsible Sourcing of Data Enrichment Services,” addresses an often overlooked aspect of the development process and what AI practitioners can do to help improve it: the working conditions of data enrichment professionals, without whom the value being generated by AI would be impossible. This paper’s recommendations will be an integral part of the shared prosperity targets being developed by Partnership on AI (PAI) as outlined in the AI and Shared Prosperity Initiative’s Agenda.

High-precision AI models are dependent on clean and labeled datasets. While obtaining and enriching data so it can be used to train models is sometimes perceived as a simple means to an end, this process is highly labor-intensive and often requires data enrichment workers to review, classify, and otherwise manage massive amounts of data. Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face. This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind, which can have deleterious consequences for those being ignored.

There is, however, an opportunity to make a difference. The decisions AI developers make while procuring enriched data have a meaningful impact on the working conditions of data enrichment professionals. This paper focuses on how these sourcing decisions impact workers and proposes avenues for AI developers to meaningfully improve their working conditions, outlining key worker-oriented considerations that practitioners can use as a starting point to raise conversations with internal teams and vendors. Specifically, this paper covers worker-centric considerations for AI companies making decisions in:

  • selecting data enrichment providers,
  • running pilots,
  • designing data enrichment tasks and writing instructions,
  • assigning tasks,
  • defining payment terms and pricing,
  • establishing a communication cadence with workers,
  • conducting quality assurance,
  • and offboarding workers from a project.

This paper draws heavily on insights and input gathered during semi-structured interviews with members of the AI enrichment ecosystem conducted throughout 2020, as well as during a five-part workshop series held in the fall of 2020. The workshop series brought together more than 30 professionals from different areas of the data enrichment ecosystem, including representatives from data enrichment providers, researchers and product managers at AI companies, and leaders of civil society and labor organizations. We’d like to thank all of them for their engaged participation and valuable feedback. We’d also like to thank Elonnai Hickok for serving as the lead researcher on the project and Heather Gadonniex for her committed support and championship. Finally, this work would not have been possible without the invaluable guidance, expertise, and generosity of Mary Gray.

Our intention with this paper is to aid the industry in accounting for wellbeing when making decisions about data enrichment and to set the stage for further conversations within and across AI organizations. Additional work is needed to ensure industry practices recognize, appreciate, and fairly compensate the workers conducting data enrichment. To that end, we want to use this paper as an opportunity to increase awareness amongst practitioners and launch a series of conversations. We recognize that there is a lot of variance in practices across the industry and hope to start a productive dialogue with organizations across the spectrum who are working through these questions. If you work at a company involved in building AI and want to host a conversation with your colleagues around data enrichment practices, we would love to join and help facilitate the conversation. If you are interested, please get in touch here.

To read “Responsible Sourcing of Data Enrichment Services” in full, click here.


Redesigning AI for Shared Prosperity: an Agenda

PAI Staff

Artificial intelligence is expected to contribute trillions of dollars to the global GDP over the coming decade, but these gains may not occur equitably or be shared widely. Today, many communities around the world face persistent underemployment, driven in part by technological advances that have divided workers into cohorts of haves and have nots. If AI advancement continues on its current trajectory, it could accelerate this economic exclusion.

This is not the only trajectory AI could be on: switching the emphasis from automating human tasks to genuinely complementing human workers can help raise these workers’ productivity while making jobs safer, more stable and rewarding, and less physically exhausting. Redesigning AI for Shared Prosperity: an Agenda is a foundational document of the AI and Shared Prosperity Initiative outlining practical questions stakeholders need to collectively find answers to in order to successfully steer AI toward expanding access to good jobs—and away from eliminating them. We are sharing this living Agenda with the community to inform aligned efforts and invite all interested stakeholders to partake in the work. (Read our press release on the Agenda here.)

The Agenda, developed under the close guidance of the Initiative’s Steering Committee and based on their deliberations, calls for the creation of shared prosperity targets: verifiable criteria the AI industry must meet to support the future of workers. These targets would consist of commitments by AI companies to create (and not destroy) good jobs—well-paying, stable, honored, and empowered ones—across the globe. The commitments could be adopted by the AI industry players either voluntarily or with regulatory encouragement.

To date, no metrics have been developed to assess the impacts of AI on job availability, wages, and quality. Additionally, no targets have been set to ensure new products do not harm workers, either in aggregate or by category of potential vulnerability. Without clear metrics and commitments, efforts to steer AI in directions that benefit workers and society are susceptible to unbacked claims of human complementarity or human augmentation. Currently, such claims are frequently made by organizations that, in reality, produce job-displacing technology or employ worker-exploiting tactics (such as invasive surveillance) to produce productivity gains. We expect that organizations genuinely seeking to complement and benefit workers with their technology would be most interested in measuring and disclosing their impact on the availability of good jobs, helping to differentiate themselves from industry actors seeking to sell technologies that enable worker exploitation under the guise of “worker-augmenting AI.”

The success of the targets to be developed relies on their support by critical stakeholders in the AI development and implementation ecosystem: workers, private sector stakeholders, governments, and international organizations. Support within and across multiple stakeholder categories is particularly important given the diffuse nature of AI’s development and deployment: technologies are often created in separate companies and separate geographies than where they are implemented. Directing AI in service of expanding access to good jobs offers opportunities as well as complex challenges for each set of stakeholders. The Agenda outlines questions that need to be resolved in order to align the incentives, interests, and relative powers of key stakeholders in pursuit of a shared prosperity-advancing path for AI.

As an immediate next step, the Initiative is working to conduct thorough research on workers’ experiences of AI in the workplace. The research aims to identify key categories of impact on job quality to be included in the shared prosperity targets, as well as the most effective ways to empower workers throughout the AI development and deployment process. If you are an employer or worker organizing group who would potentially be interested in participating in this research, please get in touch to learn more about our research and how you can contribute.

It is our hope that this Agenda will catalyze research and debate around automation, the future of work, and the equitable distribution of the economic gains of AI, and specifically around steering AI’s progress to reduce inequality and support sustainable economic and social development. PAI also enthusiastically invites collaboration on the design of shared prosperity targets. For more information on the Initiative and how to get involved, please visit the AI and Shared Prosperity Initiative page.

To read the Agenda’s Executive Summary, click here. To read “Redesigning AI for Shared Prosperity: an Agenda” in full, click here.


Managing the Risks of AI Research: Six Recommendations for Responsible Publication

PAI Staff

Once a niche research interest, artificial intelligence (AI) has quickly become a pervasive aspect of society with increasing influence over our lives. In turn, open questions about this technology have, in recent years, transformed into urgent ethical considerations. The Partnership on AI’s (PAI) new white paper, “Managing the Risks of AI Research: Six Recommendations for Responsible Publication,” addresses one such question: Given AI’s potential for misuse, how can AI research be disseminated responsibly?

Many research communities, such as biosecurity and cybersecurity, routinely work with information that could be used to cause harm, either maliciously or accidentally. These fields have thus established their own norms and procedures for publishing high-risk research. Thanks to breakthrough advances, AI technology has progressed rapidly in the past decade, giving the AI community less time to develop similar practices.

Recent pilots, such as OpenAI’s “staged release” of GPT-2 and the “broader impact statement” requirement at the 2020 NeurIPS conference, demonstrate a growing interest in responsible AI publication norms. Effectively anticipating and mitigating the potential negative impacts of AI research, however, will require a community-wide effort. As a first step towards developing responsible publication practices, this white paper provides recommendations for three key groups in the AI research ecosystem:

  • Individual researchers, who should disclose and report additional information in their papers and normalize discussion about the downstream consequences of research.
  • Research leadership, which should review potential downstream consequences earlier in the research pipeline and commend researchers who identify negative downstream consequences.
  • Conferences and journals, which should expand peer review criteria to include engagement with potential downstream consequences and establish separate review processes to evaluate papers based on risk and downstream consequences.

Additionally, this white paper includes an appendix which seeks to disambiguate a variety of terms related to responsible research which are often conflated: “research integrity,” “research ethics,” “research culture,” “downstream consequences,” and “broader impacts.”

This document represents an artifact that can be used as a basis for further discussion, and we seek feedback on it to inform future iterations of the recommendations it contains. Our aim is to help build our capacity as a field to anticipate downstream consequences and mitigate potential risks.

To read “Managing the Risks of AI Research: Six Recommendations for Responsible Publication” in full, click here.


Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

PAI Staff

Executive Summary

The Partnership on AI’s “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace.

As AI technologies become increasingly prevalent in the workplace, our goal is to place workforce well-being at the center of this technological change and the resulting metamorphosis in work, well-being, and society, and to provide a starting point to discuss and create pragmatic solutions.

The paper categorizes the aspects of workforce well-being that should be prioritized and protected throughout AI integration into six pillars. Human rights is the first pillar and supports all aspects of workforce well-being. The five additional pillars are physical, financial, intellectual, and emotional well-being, as well as purpose and meaning. The Framework presents a set of recommendations that organizations can use to guide organizational thinking about promoting well-being throughout the integration of AI in the workplace.

The Framework is designed to initiate and inform discussions on the impact of AI that strengthen the reciprocal obligations between workers and employers, while grounding that discourse in six pillars of worker well-being.

We recognize that the impacts of AI are still emerging and often difficult to distinguish from the impact of broader digital transformation, leading to organizations being challenged to address the unknown and potentially fundamental changes that AI may bring to the workplace. We strongly advise that management collaborate with workers directly or with worker representatives in the development, integration, and use of AI systems in the workplace, as well as in the discussion and implementation of this Framework.

We acknowledge that the contexts for having a dialogue on worker well-being may differ. For instance, in some countries there are formal structures in place, such as workers’ councils, that facilitate the dialogue between employers and workers. In other cases, countries or sectors have neither these institutions in place nor a tradition of dialogue between the two parties. In all cases, the aim of this Framework is to be a useful tool for all parties to collaboratively ensure that the introduction of AI technologies goes hand in hand with a commitment to worker well-being. The importance of making such a commitment in earnest has been highlighted by the COVID-19 public health and economic crises, which exposed and exacerbated long-standing inequities in the treatment of workers. Making sure those inequities are not perpetuated further with the introduction of AI systems into the workplace requires deliberate effort and will not happen automatically.

Recommendations

This section articulates a set of recommendations to guide organizational approaches and thinking on what to promote, what to be cognizant of, and what to protect against, in terms of worker and workforce well-being while integrating AI into the workplace. These recommendations are organized along the six well-being pillars identified above, and are meant to serve as a starting place for organizations seeking to apply the present Framework to promote workforce well-being throughout the process of AI integration. Ideally, these can be recognized formally as organizational commitments at the board level and subsequently discussed openly and regularly with the entire organization.

The “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” is a product of the Partnership on AI’s AI, Labor, and the Economy (AILE) Expert Group, formed through a collaborative process of research, scoping, and iteration. In August 2019, at a workshop called “Workforce Well-being in the AI-Integrated Workplace” co-hosted by PAI and the Ford Foundation, this work received additional input from experts, academics, industry, labor unions, and civil society. Though this document reflects the inputs of many PAI Partner organizations, it should not under any circumstances be read as representing the views of any particular organization or individual within this Expert Group, or any specific PAI Partner.

Acknowledgements

The Partnership on AI is deeply grateful for the input of many colleagues and partners, especially Elonnai Hickok, Ann Skeet, Christina Colclough, Richard Zuroff, Jonnie Penn as well as the participants of the August 2019 workshop co-hosted by PAI and the Ford Foundation. We thank Arindrajit Basu, Pranav Bidaire, and Saumyaa Naidu for the research support.


The Ethics of AI and Emotional Intelligence

PAI Staff

About the Paper

2019 seemed to mark a turning point in the deployment and public awareness of artificial intelligence designed to recognize emotions and expressions of emotion. The experimental use of AI spread across sectors and moved beyond the internet into the physical world. Stores used AI perceptions of shoppers’ moods and interest to display personalized public ads. Schools used AI to quantify student joy and engagement in the classroom. Employers used AI to evaluate job applicants’ moods and emotional reactions in automated video interviews and to monitor employees’ facial expressions in customer service positions.

It was a year notable for increasing criticism and governance of AI related to emotion and affect. A widely cited review of the literature by Barrett and colleagues questioned the underlying science for the universality of facial expressions and concluded there are insurmountable difficulties in inferring specific emotions reliably from pictures of faces. [1] The affective computing conference ACII added its first panel on the misuses of the technology with the aim of increasing discussions within the technical community on how to improve how their research was impacting society. [2] Surveys on public attitudes in the U.S. [3] and the U.K. [4] found that almost all of those polled found some current advertising and hiring uses of mood detection unacceptable. Some U.S. cities and states started to regulate private [5] and government [6] use of AI related to affect and emotions, including restrictions on them in some data protection legislation and face recognition moratoria.

For example, the California Consumer Privacy Act (CCPA), which went into effect January 1, 2020, gives Californians the right to notification about what kinds of data a business is collecting about them and how it is being used and the right to demand that businesses delete their biometric information. [7] Biometric information, as defined in the CCPA, includes many kinds of data that are used to make inferences about emotion or affective state, including imagery of the iris, retina, and face, voice recordings, and keystroke and gait patterns and rhythms. [8]

All of this is happening against a backdrop of increasing global discussions, reports, principles, white papers, and government action on responsible, ethical, and trustworthy AI. The OECD’s AI Principles, adopted in May 2019 and supported by more than 40 countries, aimed to ensure AI systems would be designed to be robust, safe, fair, and trustworthy. [9] In February 2020, the European Commission released a white paper, “On Artificial Intelligence – A European approach to excellence and trust”, setting out policy options for the twin objectives of promoting the uptake of AI and addressing the risks associated with certain uses of AI. [10] In June 2020, the G7 nations and eight other countries launched the Global Partnership on AI, a coalition aimed at ensuring that artificial intelligence is used responsibly, and respects human rights and democratic values. [11]

At its best, if artificial intelligence is able to help individuals better understand and control their own emotional and affective states, including fear, happiness, loneliness, anger, interest and alertness, there is enormous potential for good. It could greatly improve quality of life and help individuals meet long term goals. It could save many lives now lost to suicide, homicide, disease, and accident. It might help us get through the global pandemic and economic crisis.

At its worst, if artificial intelligence can automate the ability to read or control others’ emotions, it has substantial implications for economic and political power and individuals’ rights.

Governments are thinking hard about AI strategy, policy, and ethics. Now is the time for a broader public debate about the ethics of artificial intelligence and emotional intelligence, while those policies are being written, and while the use of AI for emotions and affect is not yet well entrenched in society. Applications are broad, across many sectors, but most are still in early stages of use.

Sources Cited

  1. Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Corrigendum: Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(3), 165–166. https://doi.org/10.1177/1529100619889954
  2. Valstar, M., Gratch, J., Tao, J., Greene, G., & Picard, P. (2019, September 4). Affective computing and the misuse of “our” technology/science [Panel]. 8th International Conference on Affective Computing & Intelligent Interaction, Cambridge, United Kingdom.
  3. Only 15% of Americans polled said it was acceptable for advertisers to use facial recognition technology to see how people respond to public advertising displays. It is unclear whether the 54% of respondents who said it was not acceptable were objecting to the use of facial analysis to detect emotional reaction to ads or the association of identification of an individual through facial recognition with some method of detecting emotional response. See Smith, A. (2019, September 5). More than half of U.S. adults trust law enforcement to use facial recognition responsibly. Pew Research Center. https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-sadults-trust-law-enforcement-to-use-facial-recognition-responsibly/
  4. Only 4% of those polled in the U.K. approved of analysing faces (using “facial recognition technologies”, which the report defined as including detecting affect) to monitor personality traits and mood of candidates when hiring. Ada Lovelace Institute (2019, September). Beyond face value: public attitudes to facial recognition technology [Report], 11. Retrieved from https://www.adalovelaceinstitute.org/wp-content/uploads/2019/09/Public-attitudes-to-facial-recognition-technology_v.FINAL_.pdf
  5. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121 See also proposed housing bills, No Biometrics Barriers to Housing Act. https://drive.google.com/file/d/1w4ee-poGkDJUkcEMTEAVqHNunplvR087/view [proposed U.S. federal] and Senate bill S5687 [proposed New York state] https://legislation.nysenate.gov/pdf/bills/2019/S5687
  6. See Bill S.1385 [MA face recognition bill in process, as of June 23, 2020]. https://malegislature.gov/Bills/191/S1385/Bills/Joint and AB-1215 Body Camera Accountability Act [Bill enacted in CA] https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201920200AB1215.
  7. The CCPA gives rights to California residents against a corporation or other legal entity operating for the financial benefit of its owners doing business in California that meets a certain revenue or data volume threshold. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121
  8. California Consumer Privacy Act of 2018, AB-375 (2018).
  9. Forty-two countries adopt new OECD Principles on Artificial Intelligence. OECD. Retrieved March 22, 2019, from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.html
  10. European Commission. White paper On artificial intelligence – A European approach to excellence and trust, 1. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
  11. Joint statement from founding members of the global partnership on artificial intelligence. Government of Canada. Retrieved July 23, 2020, from https://www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-foundingmembers-of-the-global-partnership-on-artificial-intelligence.html

On the Legal Compatibility of Fairness Definitions

PAI Staff

Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law.
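To make concrete the kind of statistical fairness definition whose legal framing the paper scrutinizes, the sketch below is an illustration added for this summary, not an excerpt from the paper. It computes per-group selection rates and a disparate impact ratio, a quantity practitioners often compare against the 80% threshold associated with the EEOC’s four-fifths rule; the function names and toy data are hypothetical.

```python
import numpy as np

def selection_rates(y_pred, group):
    """Fraction of positive decisions for each demographic group."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

def disparate_impact_ratio(y_pred, group):
    """Lowest group selection rate divided by the highest.

    Values below 0.8 are often flagged by analogy to the EEOC
    "four-fifths rule," although, as the paper argues, such numeric
    rules of thumb do not map cleanly onto legal doctrine.
    """
    rates = selection_rates(y_pred, group)
    return min(rates.values()) / max(rates.values())

# Toy example: the model selects 50% of group "a" but only ~33% of group "b".
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"])
print(disparate_impact_ratio(y_pred, group))  # ~0.67, below the 0.8 rule of thumb
```

As the paper cautions, passing or failing such a numeric test is not the same as satisfying or violating U.S. anti-discrimination law.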

Read the blog post and the full paper to learn more.


Bringing Facial Recognition Systems To Light

PAI Staff

An Introduction to PAI’s Facial Recognition Systems Project

Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?

These questions and others led the Partnership on AI (PAI) to begin the facial recognition systems project. During a series of workshops with our partners, we discovered it was first necessary to grasp how these systems work. The result was PAI’s paper “Understanding Facial Recognition Systems,” which defines the technology used in systems that attempt to verify who someone says they are or identify who someone is.

A productive discussion about the roles of these systems in society starts when we speak the same language, and also understand the importance and meaning of technical terms such as “training the system,” “enrollment database,” and “match thresholds.”

Let’s begin — keeping in mind that the graphics below do not represent any specific system, and are meant only to illustrate how the technology works.

How Facial Recognition Systems Work

Understanding how facial recognition systems work is essential to being able to examine the technical, social & cultural implications of these systems.

Let’s describe how a facial recognition system works. First, the system detects whether an image contains a face. If so, it then tries to recognize the face in one of two ways:

During facial verification: The system attempts to verify the identity of the face. It does so by determining whether the face in the image matches a specific face previously stored in the system.

During facial identification: The system attempts to predict the identity of the face. It does so by determining whether the face in the image potentially matches any of the faces previously stored in the system.

Let’s look at these steps in greater detail

A facial recognition system first needs to be trained, with two main factors influencing how the system performs: first, the quality of the images (such as the angle, lighting, and resolution) and, second, the diversity of the faces in the dataset used to train the system.

An enrollment database consisting of faces and names is also created. The faces can also be stored in the form of templates.

The first step in using any facial recognition system is when a probe image, derived from either a photo or a video, is submitted to the system. The system then detects the face in the image and creates a template.

 

There are two paths that can be taken

The template derived from the probe image can be compared to a single template in the enrollment database. This “1:1” process is called facial verification.

Alternatively, the template derived from the probe image can be compared to all templates in the enrollment database. This “1:MANY” process is called facial identification.

 

[Interactive graphic omitted: it illustrates how the choice of match threshold determines whether a comparison score is reported as a match.]
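The minimal sketch below is an illustration of how the pieces described above fit together, not code from any PAI system or specific product. It assumes a hypothetical face-embedding function (embed_face), an enrollment database mapping names to stored templates, a cosine-similarity comparison, and an illustrative match threshold that decides whether a comparison counts as a match.

```python
import numpy as np

def embed_face(image):
    """Placeholder for a face-embedding model: a real system would return a
    fixed-length template vector computed from the detected face."""
    raise NotImplementedError("hypothetical embedding model")

def similarity(a, b):
    """Cosine similarity between two templates (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_template, enrolled_template, threshold=0.6):
    """1:1 facial verification: does the probe match one specific enrolled face?"""
    return similarity(probe_template, enrolled_template) >= threshold

def identify(probe_template, enrollment_db, threshold=0.6):
    """1:MANY facial identification: find the best-scoring enrolled face, if any.

    enrollment_db maps names to stored templates. The match threshold decides
    whether the best candidate is reported: raising it reduces false matches
    but increases missed matches.
    """
    best_name, best_score = None, -1.0
    for name, template in enrollment_db.items():
        score = similarity(probe_template, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```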



Beyond facial recognition

Sometimes facial recognition systems are described as including facial characterization (also called facial analysis) systems, which detect facial attributes in an image, and then sort the faces by categories such as gender, race, or age. These systems are not part of facial recognition systems because they are not used to verify or predict an identity.


Explainable Machine Learning in Deployment

PAI Staff

Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used. We find that in its current state, XAI best serves as an internal resource for engineers and developers, rather than for providing explanations to end users. Additional improvements to XAI techniques are necessary in order for them to truly work as intended, and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
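As a hedged illustration of the kind of internal, engineer-facing explainability the paper describes, the sketch below applies scikit-learn’s permutation importance to an illustrative model. The dataset and model are placeholders chosen for convenience, and a real deployment would pair this kind of global summary with local explanation techniques and careful validation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative model and data: any fitted estimator could be examined this way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does held-out performance drop when a
# single feature's values are shuffled? It is a global, model-agnostic summary
# that engineers can use to sanity-check what a model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in top[:5]:
    print(f"{name}: {score:.3f}")
```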

Read the blog post and the full paper to learn more.
