Making AI Inclusive: 4 Guiding Principles for Ethical Engagement

Tina Park

Introduction

While the concept of “human-centered design” is hardly new to the technology sector, recent years have seen growing efforts to build inclusive artificial intelligence (AI) and machine learning (ML) products. Broadly, inclusive AI/ML refers to algorithmic systems that are created with the active engagement of and input from people who are not on AI/ML development teams. This includes both end users of the systems and non-users who are impacted by them. (“Impacted non-user” refers to a person who is affected by the deployment of an AI/ML system but is not its direct user or customer. For example, when an algorithm determined the A-level grades of students in the United Kingdom in 2020, the “user” of the algorithmic system was Ofqual, the official exam regulator in England, and the students were “impacted non-users.”) To collect this input, practitioners are increasingly turning to engagement practices like user experience (UX) research and participatory design.

Amid rising awareness of structural inequalities in our society, embracing inclusive research and design principles helps signal a commitment to equitable practices. As many proponents have pointed out, it also makes for good business: Understanding the needs of a more diverse set of people expands the market for a given product or service. Once engaged, these people can then further improve an AI/ML product, identifying issues like bias in algorithmic systems.

Despite these benefits, significant challenges to broader adoption of inclusive development remain in the AI/ML field, along with important opportunities. For AI practitioners, AI ethics researchers, and others interested in learning more about responsible AI, this Partnership on AI (PAI) white paper provides guidance for understanding and overcoming the challenges of engaging stakeholders in AI/ML development.

Ambiguities around the meaning and goals of “inclusion” present one of the central challenges to AI/ML inclusion efforts. To make the changes needed for a more inclusive AI that centers equity, the field must first agree on foundational premises regarding inclusion. To that end, this white paper offers four guiding principles for ethical engagement, grounded in best practices:

  1. All participation is a form of labor that should be recognized
  2. Stakeholder engagement must address inherent power asymmetries
  3. Inclusion and participation can be integrated across all stages of the development lifecycle
  4. Inclusion and participation must be integrated into the application of other responsible AI principles

To realize ethical participatory engagement in practice, this white paper also offers three recommendations aligned with these principles for building inclusive AI:

  1. Allocate time and resources to promote inclusive development
  2. Adopt inclusive development strategies before development begins
  3. Train towards an integrated understanding of ethics

This white paper’s insights are derived from the research study “Towards An Inclusive AI: Challenges and Opportunities for Public Engagement in AI Development.” That study drew upon discussions with industry experts, a multidisciplinary review of existing research on stakeholder and public engagement, and nearly 70 interviews with AI practitioners and researchers (including data scientists, UX researchers, and technologists working on AI and ML projects), over a third of whom were based outside the US, EU, UK, and Canada. Supplemental interviews with social equity and Diversity, Equity, and Inclusion (DEI) advocates informed the recommendations for individual practitioners, business team leaders, and the AI/ML field more broadly.

This white paper does not provide a step-by-step guide for implementing specific participatory practices. Rather, it is intended to renew discussions on how to integrate a wider range of insights and experiences into AI/ML technologies, including those of both users and the people impacted (directly or indirectly) by these technologies. Such conversations — between individuals, inside teams, and within organizations — are necessary to spur the changes needed to develop truly inclusive AI.

Table of Contents

Introduction

Guiding Principles for Ethical Participatory Engagement

Principle 1: All Participation Is a Form of Labor That Should Be Recognized

Principle 2: Stakeholder Engagement Must Address Inherent Power Asymmetries

Principle 3: Inclusion and Participation Can Be Integrated Across All Stages of the Development Lifecycle

Principle 4: Inclusion and Participation Must Be Integrated Into the Application of Other Responsible AI Principles

Recommendations for Ethical Engagement in Practice

Recommendation 1: Allocate Time and Resources to Promote Inclusive Development

Recommendation 2: Adopt Inclusive Development Strategies Before Development Begins

Recommendation 3: Train Towards an Integrated Understanding of Ethics

Conclusion

Acknowledgements

Sources Cited

  1. Jean-Baptiste, A. (2020). Building for Everyone: Expand Your Market with Design Practices from Google’s Product Inclusion Team. John Wiley and Sons, Inc.
  2. Romao, M. (2019, June 27). “A vision for AI: Innovative, Trusted and Inclusive.” Policy@Intel. https://community.intel.com/t5/Blogs/Intel/Policy-Intel/A-vision-for-AI-Innovative-Trusted-and-Inclusive/post/1333103
  3. Zhou, A., Madras, D., Raji, D., Milli, S., Kulynych, B. and Zemel, R. (2020, July 17). “Participatory Approaches to Machine Learning.” (Workshop). International Conference on Machine Learning 2020.
  4. Lewis, J. E., Abdilla, A., Arista, N., Baker, K., Benesiinaabandan, S., Brown, M., ... and Whaanga, H. (2020). Indigenous protocol and artificial intelligence position paper. Indigenous AI. https://www.indigenous-ai.net/position-paper
  5. Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
  6. Hamraie, A., and Fritsch, K. (2019). “Crip technoscience manifesto.” Catalyst: Feminism, Theory, Technoscience, 5(1), 1-33. https://catalystjournal.org/index.php/catalyst/article/view/29607
  7. Taylor, L. (2017). “What is data justice? The case for connecting digital rights and freedoms globally.” Big Data and Society, 4(2), 2053951717736335. https://doi.org/10.1177/2053951717736335
  8. Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology
  9. Hanna, A., Denton, E., Smart, A., and Smith-Loud, J. (2020). “Towards a critical race methodology in algorithmic fairness.” In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 501-512). https://arxiv.org/abs/1912.03593
  10. Sloane, M., Moss, E., Awomolo, O. and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv. https://arxiv.org/abs/2007.02423
  11. Cifor, M., Garcia, P., Cowan, T.L., Rault, J., Sutherland, T., Chan, A., Rode, J., Hoffmann, A.L., Salehi, N. and Nakamura, L. (2019). “Feminist Data Manifest-No.” Feminist Data Manifest-No. Retrieved October 1, 2020 from https://www.manifestno.com/home
  12. Harrington, C., Erete, S. and Piper, A.M. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” In Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1–25. https://doi.org/10.1145/3359318
  13. Freimuth, V.S., Quinn, S.C., Thomas, S.B., Cole, G., Zook, E. and Duncan, T. (2001). “African Americans’ Views on Research and the Tuskegee Syphilis Study.” Social Science and Medicine 52(5):797–808. https://doi.org/10.1016/S0277-9536(00)00178-7
  14. George, S., Duran, N. and Norris, K. (2014). “A Systematic Review of Barriers and Facilitators to Minority Research Participation Among African Americans, Latinos, Asian Americans, and Pacific Islanders.” American Journal of Public Health 104(2):e16–31. https://doi.org/10.2105/AJPH.2013.301706
  15. Barabas, C., Doyle, C., Rubinovitz, J.B., and Dinakar, K. (2020). “Studying Up: Reorienting the Study of Algorithmic Fairness around Issues of Power.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 167-176).
  16. Harrington, C., Erete, S. and Piper, A.M. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” In Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1–25. https://dl.acm.org/doi/10.1145/3359318
  17. Chan, A., Okolo, C. T., Terner, Z., and Wang, A. (2021). “The Limits of Global Inclusion in AI Development.” arXiv. https://arxiv.org/abs/2102.01265
  18. Sanders, E. B. N. (2002). “From user-centered to participatory design approaches.” In Design and the social sciences (pp. 18-25). CRC Press. https://www.taylorfrancis.com/chapters/edit/10.1201/9780203301302-8/user-centered-participatory-design-approaches-elizabeth-sanders
  19. Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., ... and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf
  20. Zdanowska, S., and Taylor, A. S. (2022). “A study of UX practitioners roles in designing real-world, enterprise ML systems.” In CHI Conference on Human Factors in Computing Systems (pp. 1-15). https://dl.acm.org/doi/abs/10.1145/3491102.3517607
  21. Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., ... and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf
  22. Saulnier, L., Karamcheti, S., Laurençon, H., Tronchon, L., Wang, T., Sanh, V., Singh, A., Pistilli, G., Luccioni, S., Jernite, Y., Mitchell, M. and Kiela, D. (2022). “Putting Ethical Principles at the Core of the Research Lifecycle.” Hugging Face Blog. Retrieved from https://huggingface.co/blog/ethical-charter-multimodal
  23. Ada Lovelace Institute. (2021). “Participatory data stewardship: A framework for involving people in the use of data.” Ada Lovelace Institute. https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/
  24. Delgado, F., Yang, S., Madaio, M., and Yang, Q. (2021). “Stakeholder Participation in AI: Beyond ‘Add Diverse Stakeholders and Stir.’” arXiv. https://arxiv.org/pdf/2111.01122.pdf
  25. Sloane, M., Moss, E., Awomolo, O. and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv. https://arxiv.org/abs/2007.02423