Guiding Principles for Ethical Participatory Engagement


There is strong consensus that the inclusion of a diverse body of end-users and other stakeholders in the creation of new technology both improves the quality and usability of a product or service and mitigates possible emergent harms (Jean-Baptiste, A. (2020). Building for Everyone: Expand Your Market with Design Practices from Google’s Product Inclusion Team. John Wiley and Sons, Inc.; Romao, M. (2019, June 27). “A Vision for AI: Innovative, Trusted and Inclusive.” Policy@Intel. https://community.intel.com/t5/Blogs/Intel/Policy-Intel/A-vision-for-AI-Innovative-Trusted-and-Inclusive/post/1333103; Zhou, A., Madras, D., Raji, D., Milli, S., Kulynych, B. and Zemel, R. (2020, July 17). “Participatory Approaches to Machine Learning.” Workshop at the International Conference on Machine Learning 2020). However, even when AI/ML practitioners make efforts to increase inclusion or participation, unclear definitions of these concepts can create a mismatch between the desires of inclusion advocates and the outcomes of these efforts.

For many, the purpose of engaging users and other non-technical audiences isn’t to use AI/ML technology to emancipate and provide restitution to oppressed communities. In most cases, the intent of incorporating participatory practices is more modest: to expand the circle of people who can use and benefit from a product or service and to avoid some of the more obvious harms related to algorithmic bias. Nevertheless, if we are to genuinely make space for those without explicit expertise in machine learning and AI to contribute to the overall success of AI, we must reconsider our foundational assumptions.

To make the changes needed for a more inclusive AI, the field must first agree on some foundational premises regarding inclusion. Below are four guiding principles that practitioners should adopt as their operating assumptions to align their approach to engagement with ethical best practices. These principles build upon the work of many thought leaders in the fields of Indigenous AI (Lewis, J. E., Abdilla, A., Arista, N., Baker, K., Benesiinaabandan, S., Brown, M., … and Whaanga, H. (2020). Indigenous Protocol and Artificial Intelligence Position Paper. Indigenous AI. https://www.indigenous-ai.net/position-paper), feminist HCI (Costanza-Chock, S. (2020). Design Justice: Community-Led Practices to Build the Worlds We Need. The MIT Press.), crip technoscience (Hamraie, A., and Fritsch, K. (2019). “Crip Technoscience Manifesto.” Catalyst: Feminism, Theory, Technoscience, 5(1), 1-33. https://catalystjournal.org/index.php/catalyst/article/view/29607), data justice (Taylor, L. (2017). “What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally.” Big Data and Society, 4(2), 2053951717736335. https://doi.org/10.1177/2053951717736335), and critical race theory (Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity. https://www.ruhabenjamin.com/race-after-technology; Hanna, A., Denton, E., Smart, A., and Smith-Loud, J. (2020). “Towards a Critical Race Methodology in Algorithmic Fairness.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 501-512). https://arxiv.org/abs/1912.03593), whose publications discuss the importance of these dimensions in far greater depth. Crip technoscience refers to the field of practice and scholarship outlined by Hamraie and Fritsch, which centers disabled people as “experts and designers of everyday life” and harnesses technoscience for political action. “Crip” serves as a reclamation of a derogatory term used against people with disabilities to describe the “non-compliant, anti-assimilationist position that disability is a desirable part of the world,” while “technoscience” refers to the “co-production of science, technology, and political life.”


Principle 1: All Participation Is a Form of Labor That Should Be Recognized


To ensure that the material benefits of AI/ML systems are experienced by all, we must first recognize that the value of these technologies depends on the participation of users and the broader public.

Given the need for large amounts of data to build algorithmic systems, AI/ML technology already relies heavily on the participation of the public. As such, it is necessary to define “participation” as any direct or indirect contribution to the creation, development, deployment, and sustainment of an AI/ML system.

Recognizing all participation as work makes active consent (in which users are asked for consent, given the ability to opt out, and offered compensation if they choose to participate) a necessity (Sloane, M., Moss, E., Awomolo, O. and Forlano, L. (2020). “Participation Is Not a Design Fix for Machine Learning.” arXiv. https://arxiv.org/abs/2007.02423). It also empowers participants to withdraw from projects or AI/ML systems they find harmful or otherwise unappealing (Cifor, M., Garcia, P., Cowan, T.L., Rault, J., Sutherland, T., Chan, A., Rode, J., Hoffmann, A.L., Salehi, N. and Nakamura, L. (2019). “Feminist Data Manifest-No.” Retrieved October 1, 2020 from https://www.manifestno.com/home). Differences in the passivity, purpose, and expertise involved in this participation can help determine what constitutes suitable recognition and compensation.
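To make the shape of such a consent workflow concrete, the following minimal Python sketch models consent, withdrawal, and compensation as explicit states. The ParticipationRecord class, its field names, and its methods are hypothetical illustrations chosen for demonstration, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ConsentStatus(Enum):
    """Possible states of a participant's consent."""
    OPTED_OUT = "opted_out"   # default: no affirmative consent given
    OPTED_IN = "opted_in"
    WITHDRAWN = "withdrawn"   # consent revoked after an initial opt-in


@dataclass
class ParticipationRecord:
    """Tracks one participant's consent, contribution, and compensation."""
    participant_id: str
    contribution: str  # e.g., "labeled 200 images"
    status: ConsentStatus = ConsentStatus.OPTED_OUT
    compensation_owed: float = 0.0
    history: list = field(default_factory=list)

    def _log(self, event: str) -> None:
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

    def opt_in(self, compensation: float) -> None:
        """Record affirmative consent and the agreed-upon compensation."""
        self.status = ConsentStatus.OPTED_IN
        self.compensation_owed = compensation
        self._log(f"opted in; compensation agreed: {compensation}")

    def withdraw(self) -> None:
        """Honor the participant's right to withdraw; downstream systems
        should treat this as a signal to stop using (or delete) their data."""
        self.status = ConsentStatus.WITHDRAWN
        self._log("withdrew consent")


# Example: a contributor opts in to a labeling task, then later withdraws.
record = ParticipationRecord("p-001", "labeled 200 images")
record.opt_in(compensation=25.0)
record.withdraw()
```

A real system would need identity verification, data-deletion pipelines, and payment infrastructure behind these calls; the point of the sketch is simply that consent, withdrawal, and compensation become explicit, auditable states rather than implicit assumptions.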


Principle 2: Stakeholder Engagement Must Address Inherent Power Asymmetries


Many members of the public, especially those from marginalized and historically exploited communities, are wary of contributing to participatory efforts led by companies or other entities (Harrington, C., Erete, S. and Piper, A.M. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1-25. https://doi.org/10.1145/3359318; Freimuth, V.S., Quinn, S.C., Thomas, S.B., Cole, G., Zook, E. and Duncan, T. (2001). “African Americans’ Views on Research and the Tuskegee Syphilis Study.” Social Science and Medicine 52(5):797-808. https://doi.org/10.1016/S0277-9536(00)00178-7; George, S., Duran, N. and Norris, K. (2014). “A Systematic Review of Barriers and Facilitators to Minority Research Participation Among African Americans, Latinos, Asian Americans, and Pacific Islanders.” American Journal of Public Health 104(2):e16-31. https://doi.org/10.2105/AJPH.2013.301706). Historic and contemporary experiences of giving valuable data to others who are able to profit from it have created an environment of mistrust. Historically oppressed communities, such as the Black community in the US, are often very familiar with the use of their bodies and labor for the financial profit of others, especially the dominant class of White Americans.

The relationship between developer and user is often presented as neutral, but several factors position users and the public as subordinate to AI/ML developers and researchers (Barabas, C., Doyle, C., Rubinovitz, J.B., and Dinakar, K. (2020). “Studying Up: Reorienting the Study of Algorithmic Fairness around Issues of Power.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 167-176); Harrington, C., Erete, S. and Piper, A.M. (2019). “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1-25. https://dl.acm.org/doi/10.1145/3359318). Users and members of the public rarely have the power to access proprietary information or drive major decisions at AI/ML organizations. Access to information and decision-making authority is mediated by developers (who may not have the final say in granting that access themselves), with limited recourse to obtain it.

Furthermore, structural power asymmetries exist in these relationships due to histories of colonization, discrimination, and other forms of social, economic, and political exclusion. These structural inequalities, which privilege and empower some over others, contribute to apprehension about engaging in participatory processes: community members have reason to doubt that their opinions will be taken seriously and that their participation will result in meaningful impact. Even the best practitioners will have to grapple with the repercussions of bad faith, of actors engaged in extractive practices, and of the structural dynamics of inequality.

Thus, even if interpersonal relations are established on more equitable footing, societal dynamics such as anti-Black racism or misogyny may (and likely will) affect the ongoing relationship. Since structural inequality is intersectional, it is also not enough to find parity across one specific dimension of difference, such as race or gender.

Recognizing these dynamics — and putting policies and practices into place to mitigate differences — is integral to establishing respectful and mutually beneficial relationships between developers and community members (Chan, A., Okolo, C. T., Terner, Z., and Wang, A. (2021). “The Limits of Global Inclusion in AI Development.” arXiv. https://arxiv.org/abs/2102.01265). It paves the way for the shift from AI/ML practitioners “building for” to “building with” users and the public (Sanders, E. B. N. (2002). “From User-Centered to Participatory Design Approaches.” In Design and the Social Sciences (pp. 18-25). CRC Press. https://www.taylorfrancis.com/chapters/edit/10.1201/9780203301302-8/user-centered-participatory-design-approaches-elizabeth-sanders). It is also necessary to avoid deepening existing harm and inequality through participatory engagement.


Principle 3: Inclusion and Participation Can Be Integrated Across All Stages of the Development Lifecycle


While significant strides are being made (Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., … and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf), it remains all too common to see participatory practices implemented at the end of the AI/ML development lifecycle instead of being fully integrated throughout the process (Zdanowska, S., and Taylor, A. S. (2022). “A Study of UX Practitioners’ Roles in Designing Real-World, Enterprise ML Systems.” In CHI Conference on Human Factors in Computing Systems (pp. 1-15). https://dl.acm.org/doi/abs/10.1145/3491102.3517607).

Often this late-stage engagement is delegated to UX researchers, who are far more likely to be trained to identify possible end-users and to consider who might be directly and indirectly impacted by the deployment of the AI/ML system. UX researchers, however, may not have detailed knowledge of the specifics of the algorithmic model, nor are they always given the ability to improve it.

As many equity and inclusion advocates have pointed out, every stage of the development process can be shaped and directed by users and impacted community members. The deepest and longest-term inclusive participatory practices establish relationships with community stakeholders and give them the space to direct the purpose and intention of an AI/ML project. While most practitioners, especially those working on commercial products and services, are unlikely to engage in co-development at this depth, they can still be mindful of the stages where input is not being gathered and address the issues that might arise from that absence.
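One lightweight way to act on this principle is to audit a project plan for lifecycle stages that lack any participatory input. The Python sketch below is a hypothetical illustration; the stage names, the listed activities, and the report_participation_gaps helper are assumptions chosen for demonstration, not a standard lifecycle taxonomy:

```python
# Hypothetical mapping of AI/ML lifecycle stages to planned participatory
# checkpoints. Stage names and activities are illustrative only.
LIFECYCLE_PARTICIPATION = {
    "problem_framing":   ["community input on project purpose and scope"],
    "data_collection":   ["active consent from data contributors"],
    "model_development": [],  # gap: no participatory input planned here
    "evaluation":        ["review of error analysis with affected groups"],
    "deployment":        ["usability testing with diverse end-users"],
    "monitoring":        ["feedback and redress channels for reported harms"],
}


def report_participation_gaps(plan: dict) -> list:
    """Return the lifecycle stages with no planned participatory input,
    so a team can address issues that may arise from that absence."""
    return [stage for stage, activities in plan.items() if not activities]


print(report_participation_gaps(LIFECYCLE_PARTICIPATION))
# -> ['model_development']
```

Even a simple audit like this makes the absence of input at a given stage a visible, discussable decision rather than an accident of process.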


Principle 4: Inclusion and Participation Must Be Integrated into the Application of Other Responsible AI Principles


Responsible AI frameworks often discuss foundational principles individually rather than as integrated values that intersect and build upon each other. The capacity to access and benefit from the development and use of the technology should be considered in conjunction with other responsible AI principles (Leslie, D., Katell, M., Aitken, M., Singh, J., Briggs, M., Powell, R., … and Burr, C. (2022). “Data Justice in Practice: A Guide for Developers.” arXiv. https://arxiv.org/ftp/arxiv/papers/2205/2205.01037.pdf).

In addition to inclusion, these principles commonly include transparency, accountability, security and privacy, reliability, and fairness. Transparency of algorithmic models, for instance, cannot support responsible development if it is practiced through documentation that is incomprehensible to non-technical audiences, or if there are no mechanisms for holding developers accountable for harms done to different communities.
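As a hypothetical sketch of what that integration might look like in practice, the example below pairs technical model documentation with a plain-language summary and an accountability channel. Every field name and value is invented for illustration, loosely inspired by model-card-style documentation rather than drawn from any particular framework:

```python
# Hypothetical documentation bundle for a single model. The structure pairs
# technical detail with a plain-language summary and an accountability
# channel; all names and values here are invented for illustration.
model_documentation = {
    "technical": {
        "architecture": "gradient-boosted decision trees",
        "evaluation_auc": 0.87,
        "training_data": "loan applications, 2018-2022",
    },
    "plain_language": {
        "what_it_does": "Estimates how likely an applicant is to repay a loan.",
        "what_it_cannot_do": "It does not verify income or detect fraud.",
        "known_limitations": "Less reliable for short credit histories.",
    },
    "accountability": {
        "contact": "responsible-ai@example.org",  # channel for reporting harms
        "appeal_process": "Applicants may request human review of a decision.",
    },
}
```

The design choice worth noting is that the plain-language summary and the redress channel sit alongside, not after, the technical documentation, so transparency and accountability are produced together rather than bolted on.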