How Should Human Rights Advocacy Balance the Opportunities & Risks of Artificial Intelligence?
How might artificial intelligence (AI) technology best be leveraged for human rights causes while also mitigating the risks? Can resource-constrained human rights organizations tap into this technology, and are there particular applications where AI will create the most impact? PAI Partner Intel, along with other distinguished voices from Arizona State University, Numina, and Benetech, explored these questions in a PAI-convened panel at the recent AAAS Science, Technology and Human Rights Conference.
The conference marked the 10th anniversary of the AAAS Science and Human Rights Coalition and brought together 200 attendees in Washington DC. Attendees included scientists and engineers, human rights experts, and members of impacted communities from around the world who are applying scientific evidence, methods, and tools to protect human rights. The PAI panel was moderated by Samir Goswami, PAI’s Chief Operating Officer, and was live-streamed to more than 850 people.
The conversation offered a multi-perspective look at AI for human rights causes, and several practical examples and suggestions emerged over its course.
Risks
According to the panelists, violation of privacy is among the most salient human rights risks that data and AI systems pose. Some companies tackle privacy challenges reactively; others, like Numina, build privacy protection into the design of their products rather than treating it as an afterthought. Numina is a New York-based analytics company that delivers real-time insights from street-level data collected through its sensors. The data informs urban planners and policymakers, and all of the data collected from the company’s cameras and sensors is anonymized. Furthermore, all data is held on location rather than uploaded to a central server. Jennifer Ding, Solutions Engineer at Numina, said they “view unnecessary data as a liability, because it poses a privacy risk.”
Numina is explicit about their privacy philosophy and policies, summarizing their approach as Intelligence without Surveillance. Even in the absence of an overarching US privacy law, Ding stated that there are market-based incentives for such privacy-forward systems, which “will profit in the long run, because cities trust them more.” Other incentives include the rising cost of streaming and storing continuous video data, as well as regulation of certain kinds of algorithms by a growing number of cities, such as San Francisco’s ban on facial recognition technology, all of which push companies to be more cognizant of privacy.
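To make the “Intelligence without Surveillance” idea concrete, here is a minimal sketch of what a privacy-by-design edge pipeline can look like. It illustrates the general pattern only, not Numina’s actual implementation: the detector is a stub and the class labels are invented. The key property is that raw frames are reduced to anonymous counts on the device and are never stored or transmitted.

```python
# Illustrative sketch of a privacy-by-design edge pipeline; this is
# not Numina's implementation. Raw frames are reduced to anonymous
# aggregate counts on the device and the pixels are discarded, so no
# identifying imagery is ever stored or uploaded to a central server.
from collections import Counter


class EdgeSensor:
    def __init__(self):
        # Aggregate counts are the only state the sensor retains.
        self.counts = Counter()

    def detect_objects(self, frame):
        # Placeholder for an on-device detector (e.g., a small vision
        # model); canned labels are returned so the sketch runs.
        return ["pedestrian", "pedestrian", "cyclist"]

    def process_frame(self, frame):
        # Keep only class-level counts; the frame is never written to
        # disk or sent over the network.
        self.counts.update(self.detect_objects(frame))
        del frame  # drop the reference to the raw pixels immediately

    def report(self):
        # The only data that leaves the device: anonymous aggregates
        # suitable for urban planners and policymakers.
        return dict(self.counts)


sensor = EdgeSensor()
sensor.process_frame(frame=object())  # stand-in for a camera frame
print(sensor.report())  # {'pedestrian': 2, 'cyclist': 1}
```

Treating aggregate counts as the system’s only durable output is one way to operationalize Ding’s point that unnecessary data is a liability.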
Data ownership, responsibility, and security were raised by the panel as another human rights concern. Shabnam Mojtahedi, Senior Program Manager at Benetech, pointed out that these issues are especially problematic in conflict-affected areas. Some of Benetech’s tools help human rights defenders uncover evidence of war crimes using images and videos from regions of conflict. Mojtahedi explained why data responsibility is so fraught in this context: “There is no mechanism to get the informed consent of the victims who appear in videos posted on social media; the idea of opting out does not exist.” As such, Benetech builds their systems with digital security concerns at the forefront: “What we do is ensure that specific people or information about people cannot be identified through the models we create, and we will not store or save any videos we process.” Mojtahedi stressed the need to develop best practices and policies in this area by getting more input from on-the-ground partners, users, and experts.
Opportunities
While AI offers tremendous potential to benefit human rights causes, practitioners should be cautious and realistic about applying AI solutions, especially in the high-stakes context of human rights. The panel offered strong examples of use cases where AI technologies benefit human rights work while remaining thoughtful about potential risks.
Dr. Nadya Bliss, Executive Director of the Global Security Initiative at Arizona State University, talked about the work ASU Professor Dominique Roe-Sepowitz is doing to combat human trafficking. Roe-Sepowitz’s center is collecting and analyzing anonymized data from disparate sources of information on human trafficking crimes to identify trends and characteristics of perpetrators. AI could be particularly helpful in this context for identifying actors and pathways and enabling predictive decision-making.
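As a purely hypothetical sketch of that kind of analysis, the snippet below pools anonymized incident records from disparate sources and surfaces simple frequency trends. The sources and record fields are invented for illustration; the panel did not describe ASU’s actual pipeline.

```python
# Hypothetical sketch: pooling anonymized records from disparate
# sources and surfacing first-pass trends. Field names are invented.
from collections import Counter


def merge_sources(*sources):
    # Combine anonymized records (dicts of non-identifying attributes)
    # from multiple feeds into a single list.
    return [record for source in sources for record in source]


def trend(records, attribute):
    # Frequency of an attribute's values: a crude trend signal that a
    # real pipeline would refine with statistical and ML methods.
    return Counter(r[attribute] for r in records if attribute in r)


police_reports = [{"year": 2018, "recruitment": "online"}]
hotline_tips = [{"year": 2018, "recruitment": "online"},
                {"year": 2017, "recruitment": "in-person"}]

print(trend(merge_sources(police_reports, hotline_tips), "recruitment"))
# Counter({'online': 2, 'in-person': 1})
```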
Another use case that demonstrates the social impact of machine learning technologies comes from Benetech, which is working with human rights groups to help turn conflict data into actionable evidence to promote justice. The deduplication tool they are building helps investigators gain insights from the millions of videos collected from Syria since the start of the conflict. Mojtahedi explained that data security is critical for human rights groups: the technology Benetech is building allows organizations to collaborate and understand the broader data landscape without compromising their confidentiality and security protocols.
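Benetech’s tool itself is not public, but a common technique for deduplicating large video collections is perceptual hashing: each video is reduced to compact frame fingerprints that can be compared across archives without retaining or sharing the footage itself. The sketch below uses a classic average-hash on grayscale frames as a stand-in; a production system would sample frames from real video and compare hashes by Hamming distance rather than exact overlap.

```python
# Sketch of near-duplicate detection via perceptual hashing
# (illustrative only; not Benetech's actual tool). Only compact
# hashes are retained for matching -- the videos themselves can be
# discarded after processing, consistent with not storing footage.
import numpy as np


def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """Classic average-hash: downscale, threshold at the mean, pack bits."""
    h, w = frame.shape
    cropped = frame[: h - h % size, : w - w % size]
    # Block-average down to a size x size grid.
    grid = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (grid > grid.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def fingerprint(frames):
    """Hash a sample of frames; the frames can then be discarded."""
    return {average_hash(f) for f in frames}


def likely_duplicates(fp_a, fp_b, threshold=0.5):
    """Flag two videos if enough of their frame hashes coincide.

    Real systems compare hashes by Hamming distance to tolerate
    re-encoding and cropping; exact overlap keeps the sketch short.
    """
    overlap = len(fp_a & fp_b) / max(1, min(len(fp_a), len(fp_b)))
    return overlap >= threshold


rng = np.random.default_rng(0)
video = [rng.integers(0, 255, (64, 64)).astype(float) for _ in range(5)]
print(likely_duplicates(fingerprint(video), fingerprint(video)))  # True
```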
AI-enabled systems help generate important insights that can benefit a variety of human rights causes. Samir Goswami, COO at the Partnership on AI, stressed the importance of acting upon these insights, which is “not a challenge of the technology, but a matter of political will and prioritization.” In other words, while AI can help us understand a situation better and pinpoint a specific problem area, we still have to act.
Balancing Opportunities and Risks
The panel revealed important practices that can help us balance opportunities and risks when using AI technologies for human rights causes:
1. Applying the right technology to the right problem. When thinking about the potential for AI systems to benefit human rights, it’s critical to understand that technology alone cannot solve every human rights problem. With that in mind, it’s important to identify “mature technologies” for which governance structures are in place and which can be leveraged for specific use cases.
2. Ensuring diversity. Tackling bias in AI systems is critical from a human rights perspective. When applied to diverse populations, current AI systems are frequently biased against underrepresented and vulnerable groups, and failing to account for this carries tremendous human rights risks. (One simple way such disparities can be surfaced is sketched after this list.)
Underlining the importance of diversity, the panelists explained how diversity efforts are helping their organizations do better. Chloe Autio, Privacy Manager at Intel, said, “It’s not only important to have diversity because diversity is important and the right thing to do. Diversity helps us build better products.” She reinforced that including diverse backgrounds and experiences at every stage of a project or product’s development leads to more innovative, versatile technology, and explained how multidisciplinary AI teams at Intel are helping make the technology less biased. The panel reminded us that diversity is something the technology industry needs to continue to reckon with.
3. Engaging with stakeholders early and continuously. The panel stressed the importance of engaging stakeholders early and continuously to better understand needs and risks. Both Benetech and Arizona State University, mentioned above, are integrating this into their processes: to mitigate the risks their technology tools might create, they engage closely with domain experts and communities such as survivor alliances and human rights defenders. The panelists also stressed the need to be mindful of resource and digital-literacy challenges in the human rights sector.
4. Designing for vulnerability vs. capability. The panelists talked about the importance of taking a vulnerability-centric rather than a capability-centric approach when developing technologies. The criterion for developing or investing in a new technology should not only be whether it is technically possible, but also what use cases it can enable and which populations it might positively affect. According to Dr. Bliss, “Understanding the specific vulnerabilities of the domain is absolutely necessary at all stages of the development of advanced technology.”
In explaining how Benetech identifies the right use cases, Mojtahedi said they focus on the need as well as the potential risks: “We try to identify where we can have the most impact and minimize the risks. If we come to the conclusion that we can’t deploy a technology ethically, then the answer is no.” This was a great reminder that just because we can, doesn’t mean we should.
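As a minimal, deliberately simplified illustration of the bias point in item 2, the snippet below breaks a model’s accuracy down by demographic group. The data and group labels are fabricated for the example; the point is only that an aggregate metric can hide exactly the disparities the panel warned about.

```python
# Toy subgroup bias audit (fabricated data, illustrative only).
# Aggregate accuracy can mask large gaps between groups.
from collections import defaultdict


def per_group_accuracy(y_true, y_pred, groups):
    # Accuracy broken down by demographic group.
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}


y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(per_group_accuracy(y_true, y_pred, groups))
# {'a': 0.75, 'b': 0.5} -- overall accuracy (0.625) hides the gap
```

Disaggregated checks like this are a starting point; the panel’s deeper point is that diverse teams are more likely to know which groups and failure modes to check in the first place.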
At the Partnership, we agree with these practices and look to bring them forward in our work. In particular, we agree with the need to focus on diversity in the cultivation of best practices for artificial intelligence, which is why we are creating a fellowship on diversity and inclusion.
What risks and opportunities do you see in using artificial intelligence for human rights causes? Give the Partnership a tweet (@PartnershipAI) with your thoughts, along with the panel’s speakers: @ChloeAutio, @nadyabliss, @jen_gineered, and @SMojtahedi.