Recommendations for Ethical Engagement in Practice

To center inclusion as a guiding principle in the development of AI/ML technology, practitioners must overcome several challenges. These challenges include the need for organizational support throughout the development lifecycle, grappling with histories of stakeholder exclusion, and an incomplete understanding by many of inclusion’s importance.

Practitioners we interviewed who worked in organizations where user or public input was not a high priority often reported having to find creative ways to build in opportunities for feedback from outside the project team. Some expressed frustration at the additional time spent convincing project managers and other decision-makers within their organizations of the value of public, user, or stakeholder input and of AI ethics more broadly.

In line with the principles offered above, the three recommendations below speak to the challenges facing individual practitioners. Because these challenges are deeply rooted and broadly distributed, the recommendations require buy-in not only from practitioners themselves but also from the organizations and teams they work within.

Recommendation 1: Allocate Time and Resources to Promote Inclusive Development

The majority of interviewees in PAI’s study of practitioners noted that their interest in inclusive participatory practices emerged from a personal commitment to social equity. However, these personal ambitions were only impactful when they were girded by material institutional support — something often lacking in for-profit companies.

Managing relationships with user and community-based contributors, strategizing ways to engage diverse audiences, and synthesizing feedback across different stages of the development lifecycle require a different set of skills, and different time and resource commitments, than developing algorithmic models does.

While technical team members should be part of inclusive participatory practices, teams and organizations should not shift the responsibility for these practices onto individuals without broader organizational support. Beyond a stated commitment to responsible and inclusive development, dedicating organizational resources demonstrates a deeper level of commitment to pursuing inclusively and responsibly developed technology.

  • Build teams with explicit roles to support community-based relationships and focus on inclusive development, as well as other responsible AI practices.
  • Draw on expertise from outside computer science and machine learning, such as anthropology, community organizing, disability studies, ethnic studies, gender studies, the humanities, and sociology.
  • Plan for sprint cycles that permit time for the collection of insights from users and/or impacted communities, as well as the synthesis of those findings for incorporation.
If teams are unable to act on the input of community stakeholders, additional resources for public engagement and more diverse, multidisciplinary staff may not substantially shift how the work is completed or grow an organization’s capacity to mitigate future harms. In the absence of community members being directly involved in decision-making and direction-setting, the staff members who serve as liaisons with the public should be empowered and have the authority to act in the community’s interests. Without this, any public engagement activities are likely to be read as “participant washing” (minimal public or user engagement spun and exaggerated to present a company or organization as more inclusive and civic-minded than it actually is), as public input will be perceived to have little to no impact on the final product or service.

Recommendation 2: Adopt Inclusive Development Strategies Before Development Begins

Much like AI ethics should not be treated as an afterthought (Saulnier, L., Karamcheti, S., Laurençon, H., Tronchon, L., Wang, T., Sanh, V., Singh, A., Pistilli, G., Luccioni, S., Jernite, Y., Mitchell, M., and Kiela, D. (2022). “Putting Ethical Principles at the Core of the Research Lifecycle.” Hugging Face Blog.), inclusive participation strategies should not be created after much of the AI/ML development lifecycle has passed. Having early discussions about inclusive development and participation goals at the very beginning of a project not only situates the values of diversity, equity, and inclusion at its core, but also provides an opportunity to initiate broader conversations about the responsible development of AI (Ada Lovelace Institute. (2021). “Participatory data stewardship: A framework for involving people in the use of data.”).

While an organization may consider drafting more general guidelines to help each team identify its own approach, given the high degree of variability between projects, it may be more effective to develop project-specific inclusion strategies alongside project work plans. It is important to align the participatory objective with the appropriate mechanisms (Delgado, F., Yang, S., Madaio, M., and Yang, Q. (2021). “Stakeholder Participation in AI: Beyond ‘Add Diverse Stakeholders and Stir.’” arXiv.), given the social context in which the system is being developed (Sloane, M., Moss, E., Awomolo, O., and Forlano, L. (2020). “Participation is not a design fix for machine learning.” arXiv.).

  • Identify marginalized stakeholders. Who are the people who may use or be impacted by the use of the AI/ML product or service, but are not typically consulted?
  • Understand dynamics of power. What are the power dynamics between the organization (developers) and members of the public (users / impacted communities), both specifically (interpersonal) and structurally (societal)?
  • Identify resources needed. What is needed to build and sustain relationships with key (marginalized) stakeholders throughout / at different points of the AI/ML development lifecycle?
  • Identify integration points. At what stage(s) of development should key stakeholders be engaged? Can stakeholders change these integration points?
  • Recognize contributions of participants. What is the compensation policy for stakeholders who participate in the development of the AI/ML product or service? How are passive participants (e.g., people who contribute important data points for a training dataset) compensated? How will they be credited? Are there opportunities to redistribute future success with participants (e.g., profit sharing)? Can participants withdraw their contributions or support (including any data collected)?
  • Build accountability mechanisms. What processes or mechanisms exist for participants or future users/members of the public to hold the organization or company accountable for any harm experienced due to use of the algorithmic model?
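The planning questions above could, for example, be tracked as a lightweight checklist attached to a project work plan, so that unanswered items stay visible as development proceeds. The sketch below is purely illustrative: the `InclusionPlan` structure and its field names are assumptions, not an established template from the report.

```python
from dataclasses import dataclass, field

@dataclass
class InclusionPlan:
    """Hypothetical project-level inclusion strategy mirroring the planning questions above."""
    marginalized_stakeholders: list = field(default_factory=list)  # who may be impacted but is rarely consulted
    power_dynamics: str = ""                                       # interpersonal and structural dynamics
    resources_needed: list = field(default_factory=list)           # what sustains stakeholder relationships
    integration_points: list = field(default_factory=list)         # lifecycle stages for engagement
    compensation_policy: str = ""                                  # how participants are compensated/credited
    accountability_mechanisms: list = field(default_factory=list)  # how the public can seek redress

    def open_questions(self):
        """Return the planning items still left unanswered."""
        return [name for name, value in vars(self).items() if not value]

# Example: a plan drafted early, with several questions still open
plan = InclusionPlan(
    marginalized_stakeholders=["gig workers"],
    power_dynamics="platform operator vs. individual worker",
)
print(plan.open_questions())
# ['resources_needed', 'integration_points', 'compensation_policy', 'accountability_mechanisms']
```

A structure like this makes gaps explicit at planning time rather than leaving them to surface late in the development lifecycle.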
Drafting the best set of guidelines and policies for an organization or project will likely be an iterative process requiring resources that smaller organizations or leaner teams may not have. Currently, there are no “off-the-shelf” guidelines or best practices that organizations and practitioners can draw upon to support their efforts and ensure that at least some guidance and direction is provided (rather than none at all). Additionally, having a plan does not mean that all team members will understand or implement it; incentives to encourage adoption are key.

Recommendation 3: Train Towards an Integrated Understanding of Ethics

The relevance and value of incorporating inclusive practices into AI/ML development may not be readily apparent to some practitioners. This is an opportunity to discuss not only the important role of users and impacted communities, but also, more generally, the need for the responsible and ethical development of AI.

When PAI put out an open call for interviewees who self-identified as incorporating participatory practices into AI/ML systems, individuals in a wide range of roles (from engineers to UX researchers to change management consultants) responded. Even among practitioners working at the same organization, there can be substantial differences in both their knowledge of how AI/ML systems are created or will be used and their ability to incorporate inclusive practices.

Creating a body of practitioners who are conversant in both equity and responsible AI issues will significantly help shift principles into practice by building a cohort of thoughtful colleagues who share common definitions and understandings.

  • Develop and implement trainings and regular workshops on responsible AI principles and best practices, including inclusive practices, for all staff members, covering:
    • How “inclusion” works with and alongside other principles of responsible AI
    • The aims and implications of various participatory frameworks and approaches, so that participation is understood as neither a “one-size-fits-all” nor a singular concept
As with any non-mandatory employee training or supplemental professional development, those who are disinclined to engage in inclusive or responsible AI practices cannot be forced to learn and engage. Having leadership throughout the organization, including senior leadership, prioritize and highlight these trainings (and the practices themselves) as core to the success of the organization’s work can go a long way toward improving adoption and commitment among employees. Also, many AI/ML practitioners operate outside of formal organizational spaces. Without structured professional development available through a workplace, it is important to provide both free and paid learning opportunities for independent or start-up practitioners.