Conclusion
The growing diversity of actors and circumstances involved in AI/ML deployment makes establishing a set of ethical participatory practices especially difficult for this field. Even among practitioners working at the same organization, there can be substantial differences both in their knowledge of how AI/ML systems were created or will be used and in their ability to incorporate inclusive practices. Additionally, the greater availability of AI development platforms, including “no-code” platforms, means that algorithmic models can be deployed without deep expertise, expanding the range of circumstances in which automated systems are deployed. The result is many more instances where automated systems are deployed without consideration of how the algorithms were developed, the provenance of the datasets on which they were trained and tested and the biases those datasets carry, or the ethical implications of their development and deployment.
When AI/ML projects have small development teams, fast-approaching deadlines, and limited budgets, it can be easy to deprioritize the inclusion of users and other stakeholders. End-users, impacted non-users, and the public, however, are integral to the ethical development of AI/ML systems. Engaging them can help identify which problems are well suited to algorithmic solutions, lead to products and services that are useful to and accessible by many, and inform policies for creating and deploying AI.
On their own, technical understandings of (and solutions to) ethical issues related to AI/ML-enabled systems are insufficient. To implement stakeholder engagement practices both meaningfully and ethically, it is necessary to draw together understandings of structures of power and social inequality and apply them to the development and deployment of digital technology. Ignoring the asymmetries of power between those who develop AI/ML technology and those who are impacted by it can result in greater harm. Given the long history of marginalized communities being asked to freely give their time and labor, extractive stakeholder engagements have the potential to deepen the very social inequality many ethically oriented practitioners are trying to mitigate.
While AI/ML practitioners should not be expected to be both proficient developers and experts in social inequality, they do need a shared language and shared concepts with those who are. To develop truly inclusive AI/ML technology, practitioners need additional resources, including training to build expertise, funding to support community engagement, and time to incorporate stakeholders’ feedback. Beyond an alternative framework for understanding how to develop technology more responsibly and inclusively, support structures are needed to advance these efforts. Individual advocates cannot be expected to instigate the change necessary in the field: organizations, and the field more broadly, must make deep changes to how work is conducted if we are to address the social inequality embedded in our AI/ML products and systems.