Supporting Corporate Responsibilities with Emerging AI Technologies: Insights From the PAI Board
Private sector companies are developing, designing, and deploying AI systems with an ever-growing number of applications in an ever-expanding variety of domains. As AI enters these largely unregulated spaces, the industry must align its business interests with social responsibility. Leading technology companies have a critical role to play here: researching AI ethics, advancing understanding internally, and deploying responsible, equitable, and accountable AI systems in real-world settings.
At an afternoon session just prior to the Partnership on AI (PAI)’s June 2022 Board Meeting, we invited Directors to share and discuss their experiences setting up and sustaining ethical AI teams. The Board includes companies with very different business models, focused on diverse deployment scenarios such as recommender systems, devices, and enterprise AI. The session offered an opportunity for them to discuss how they prioritize different (though often overlapping) issues and adopt different AI ethics governance, policy, and process solutions in their operations.
Each experience created an opportunity for participants to discuss how to build teams with the influence and power to drive positive change within their organizations. The goals of this session were to share learnings, discuss challenges, and identify connections and insights relevant to current and potential PAI programs.
The following is a summary of the insights and critical questions presented by Board members representing Partners in the AI industry. We offer them for consideration by others as they develop systems to incorporate ethical AI into their organizations.
Insights and Critical Questions
The individuals presenting identified some common beliefs about how to strengthen ethical AI research. Most critically, each presentation acknowledged that most issues related to ethical approaches to AI require multi-stakeholder involvement from groups with diverse expertise.
In particular, discussion focused on how large systems are complex, incorporating inputs from multiple stakeholders through various interactions with differing degrees of oversight. Increasingly, no single person knows exactly how an entire system behaves, which makes the emergent effects of AI difficult to address: individual decisions often do not fully explain system-level effects.
These and other complexities can open gaps between endorsing AI ethics and fully operationalizing them. Taken together, these starting premises mean that a focus on organizational structure, process, and policy is essential for implementing ethical AI.
When integrating an ethical AI process into existing systems, there are also a number of key questions to consider, including:
- What is new about the problems in this space? For example, is AI introducing a new issue?
- What existing structures and specific tools exist in the organization to support ethical AI?
- Where does ethical AI fit into the current product development lifecycle?
- How are internal value tensions (e.g., profit motives vs. fairness) resolved?
- How can organizations meaningfully measure progress in ethical AI initiatives?
- Can the organization evolve its ethical AI approach as the technology evolves?
Solutions and Processes
Individuals reported implementing a number of different solutions and processes for researching and developing ethical AI.
Key among them was fostering internal understanding of the implications of AI by actively engaging engineers and cross-functional teams. Through these engagements, teams could explore issues requiring response and correction and motivate problem-solving approaches. Cutting across all product teams, such efforts could improve outcomes and, in turn, yield operational and strategic improvements.
One organization found success with an institutional approach to governance that embedded decisions in the product development pipeline. Built over time, this approach supported “go” and “no-go” delivery decisions. Instead of a single ethics team, the organization settled on a company-wide ethics review process, including developer toolkits, playbooks, education for all employees, training for engineers in design thinking and other areas, and other tactics to make ethics a routine part of the development process. Open reporting of sensitive uses led to a decision point on whether a project should advance. This approach also created a framework for explaining to stakeholders and investors why a decision was made, such as why a project was discontinued and how that decision benefits society in the long term.
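The gating mechanism described above can be sketched in code. This is only an illustrative model, not any company’s actual process: the class names, the reporting structure, and the rule that an unreviewed sensitive-use report blocks delivery are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class SensitiveUseReport:
    """A concern raised by any employee about a potentially sensitive use of AI."""
    project: str
    concern: str

@dataclass
class EthicsReviewGate:
    """Hypothetical go/no-go gate embedded in a product delivery pipeline.

    Illustrative rule: a project with sensitive-use reports cannot ship
    until a review records an explicit decision, together with a rationale
    that can later be shared with stakeholders and investors.
    """
    reports: list = field(default_factory=list)
    decisions: dict = field(default_factory=dict)  # project -> (go, rationale)

    def file_report(self, report: SensitiveUseReport) -> None:
        # Open reporting: anyone can flag a sensitive use.
        self.reports.append(report)

    def record_decision(self, project: str, go: bool, rationale: str) -> None:
        # The review body records a "go" or "no-go" decision with its reasoning.
        self.decisions[project] = (go, rationale)

    def can_ship(self, project: str) -> bool:
        open_reports = [r for r in self.reports if r.project == project]
        if not open_reports:
            return True  # no sensitive uses reported, nothing blocks delivery
        decision = self.decisions.get(project)
        return decision is not None and decision[0]

gate = EthicsReviewGate()
gate.file_report(SensitiveUseReport("face-match", "possible exclusionary outcomes"))
assert not gate.can_ship("face-match")   # blocked until reviewed
gate.record_decision("face-match", False, "risk outweighs benefit; discontinue")
assert not gate.can_ship("face-match")   # explicit no-go, with a recorded rationale
assert gate.can_ship("chat-summarizer")  # no reports filed
```

The key design point mirrored from the discussion is that the rationale is stored alongside the decision, so a discontinued project leaves an explainable record rather than a silent cancellation.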
Other key steps were company-wide education (including who is responsible for different outcomes) and the addition of more non-technical actors at decision points within the development process. Such educational efforts make everyone a potential reporter, empowering them to spot and raise awareness of potential issues. A compliance team was also essential to creating a culture where ethical AI is considered throughout the product development lifecycle.
The presenters indicated an ongoing need for additional studies to identify instances where technologies produce exclusionary outcomes for customers and stakeholders, in order to update and evaluate current measures of success. Relatedly, identifying and addressing questions of cultural context is key to an inclusive approach at the development stage.
While the session identified different approaches within the AI industry for operationalizing ethical AI in practice, themes emerged. They included adopting a multi-disciplinary development approach, ongoing education for all involved with the design and deployment of the AI, solid and sustainable academic and industry partnerships, and a comprehensive and clear review process.
As this brief summary reveals, organizations must be thoughtful and intentional about integrating ethical AI at all stages of the AI lifecycle and at all levels of the organization. Insights from this session are relevant across PAI’s portfolio of programs, perhaps most directly in our recommended approach to fair, transparent, and accountable documentation as a forcing mechanism to interrogate and challenge biases and assumptions from the design through the deployment to the retirement of an AI system.
The Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles (ABOUT ML) project works toward a new industry emphasis on transparent machine learning (ML) systems in order to establish new norms of transparency. This work has identified best practices for documenting and characterizing key components and phases throughout the ML system lifecycle, from design to deployment, including annotations of data, algorithms, performance, and maintenance requirements. For more information, please visit the ABOUT ML Resources Library.
Further Resources
Amazon:
- Responsible use of artificial intelligence and machine learning website, which includes the Responsible AI Guide.
- Prem Natarajan and Michael Kearns’s presentation at re:MARS 2022, “Frontiers of fair and accessible AI (MLR201-L)”.
IBM:
- Overview of AI Ethics at IBM
- AI ethics in action
- First IBM paper on trusting AI (2016)
- WEF white paper about the IBM approach to AI ethics (2021)
- IBM-FPF paper on the responsible advancement of neurotechnologies (2021)
- IBM paper on addressing neuroethics issues in practice: Lessons learnt by tech companies in AI ethics (Neuron, 2022)
IBM trustworthy AI toolkits:
Microsoft:
- Microsoft’s framework for building AI systems responsibly
- Overview of Responsible AI at Microsoft
- Resources and tools