Recommendations for Ethical Engagement in Practice
To center inclusion as a guiding principle in AI/ML development, practitioners must overcome several challenges: securing organizational support throughout the development lifecycle, grappling with histories of stakeholder exclusion, and addressing many practitioners’ incomplete understanding of why inclusion matters.
Practitioners we interviewed who worked in organizations where user or public input was not highly prioritized often reported having to find creative ways to build in opportunities for feedback from outside the project team. Some expressed frustration at the additional time spent convincing project managers and other decision-makers within their organizations of the value of public, user, or stakeholder input, and of AI ethics more broadly.
In line with the principles offered above, the three recommendations below speak to the challenges facing individual practitioners. Because these challenges are deeply rooted and broadly distributed, the recommendations require buy-in not only from practitioners themselves, but also from the organizations and teams they work within.
Recommendation 1: Allocate Time and Resources to Promote Inclusive Development
The majority of interviewees in PAI’s study of practitioners noted that their interest in inclusive participatory practices emerged from a personal commitment to social equity. However, these personal commitments translated into impact only when they were girded by material institutional support — something often lacking in for-profit companies.
Managing relationships with user and community-based contributors, strategizing ways to engage diverse audiences, and synthesizing feedback across different stages of the development lifecycle require a different set of skills than developing algorithmic models does, as well as the time and resources to execute them properly.
While technical team members should take part in inclusive participatory practices, teams and organizations should not offload responsibility for those practices onto individuals without broader organizational support. Beyond a stated commitment to responsible and inclusive development, committing organizational resources demonstrates a deeper level of dedication to inclusively and responsibly developed technology.
- Build teams with explicit roles to support community-based relationships and focus on inclusive development, as well as other responsible AI practices.
- Draw on expertise from outside computer science and machine learning, such as anthropology, community organizing, disability studies, ethnic studies, gender studies, the humanities, and sociology.
- Plan sprint cycles that allow time to collect insights from users and/or impacted communities, and to synthesize those findings for incorporation.
Recommendation 2: Adopt Inclusive Development Strategies Before Development Begins
Much like AI ethics should not be treated as an afterthought (Saulnier, L., Karamcheti, S., Laurençon, H., Tronchon, L., Wang, T., Sanh, V., Singh, A., Pistilli, G., Luccioni, S., Jernite, Y., Mitchell, M., and Kiela, D. (2022). “Putting Ethical Principles at the Core of the Research Lifecycle.” Hugging Face Blog. https://huggingface.co/blog/ethical-charter-multimodal), inclusive participation strategies should not be created after much of the AI/ML development lifecycle has passed. Discussing inclusive development and participation goals at the very beginning of a project not only situates the values of diversity, equity, and inclusion at its core, but also provides an opportunity to initiate important conversations about the responsible development of AI more broadly (Ada Lovelace Institute. (2021). “Participatory data stewardship: A framework for involving people in the use of data.” https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/).
Given the high degree of variability between projects, an organization may draft general guidelines to help each team identify its own approach, but it may be more effective to develop project-specific inclusion strategies alongside project work plans. It is important to align the participatory objective with the appropriate mechanisms (Delgado, F., Yang, S., Madaio, M., and Yang, Q. (2021). “Stakeholder Participation in AI: Beyond ‘Add Diverse Stakeholders and Stir.’” arXiv. https://arxiv.org/pdf/2111.01122.pdf), given the social context in which the system is being developed (Sloane, M., Moss, E., Awomolo, O., and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv. https://arxiv.org/abs/2007.02423).
- Identify marginalized stakeholders. Who are the people who may use or be impacted by the use of the AI/ML product or service, but are not typically consulted?
- Understand dynamics of power. What are the power dynamics between the organization (developers) and members of the public (users / impacted communities), both specifically (interpersonal) and structurally (societal)?
- Identify resources needed. What is needed to build and sustain relationships with key (marginalized) stakeholders throughout / at different points of the AI/ML development lifecycle?
- Identify integration points. At what stage(s) of development should key stakeholders be engaged? Can stakeholders change these integration points?
- Recognize contributions of participants. What is the compensation policy for stakeholders who participate in the development of the AI/ML product or service? How are passive participants (e.g., people who contribute important data points to a training dataset) compensated? How will they be credited? Are there opportunities to redistribute future success with participants (e.g., profit sharing)? Can participants withdraw their contributions or support (including any data collected)?
- Build accountability mechanisms. What processes or mechanisms exist for participants or future users/members of the public to hold the organization or company accountable for any harm experienced due to use of the algorithmic model?
Recommendation 3: Train Towards an Integrated Understanding of Ethics
The relevance and value of incorporating inclusive practices into AI/ML development may not be readily apparent to some practitioners. This presents an opportunity not only to discuss the important role of users and impacted communities, but also, more generally, the need for the responsible and ethical development of AI.
When PAI put out an open call for interviewees who self-identified as incorporating participatory practices into AI/ML systems, individuals in a wide range of roles (from engineers to UX researchers to change management consultants) responded. Even among practitioners working at the same organization, there can be substantial differences in both their knowledge about how AI/ML systems were created or will be used and their ability to incorporate inclusive practices.
Creating a body of practitioners conversant in both equity and responsible AI issues will go a long way toward shifting principles into practice, as colleagues who share common definitions and understandings can collaborate more robustly and thoughtfully.
- Develop and implement trainings and regular workshops for all staff members on responsible AI principles and best practices, including inclusive practices, that cover:
- How “inclusion” works with and alongside other principles of responsible AI
- The aims and implications of various participatory frameworks and approaches, to convey that participation is not a singular, “one-size-fits-all” concept