Our Blog

Partnership on AI Assembles Global Community To Build a Healthy Information Ecosystem

In 2020, we witnessed how misleading and harmful online content presents challenges for democracy, public health, and even personal well-being. While not a panacea for such challenges, AI techniques can help ensure high-quality public discourse online, whether through triaging mis/disinformation for human review, recommending credible content, or even providing opportunities for creative expression. Ensuring that AI is deployed properly and responsibly requires the participation of key stakeholders in the global information ecosystem.

Today, the Partnership on AI announced that it is welcoming new Partners from civil society, academia, and industry to its AI and Media Integrity Program. These global Partners are devoted to mitigating misleading and harmful content online, and helping define what content we should bolster, and how. Adobe, Code for Africa, Duke Reporters’ Lab, Full Fact, and Meedan join as members of the Partnership on AI to support a healthy information ecosystem in the AI age.

Adobe joins the Partnership as it advances the work of the Content Authenticity Initiative (“CAI”). The CAI is building a system to provide provenance and history for digital media, giving creators a tool to claim authorship and empowering consumers to evaluate whether what they are seeing is trustworthy.

“Adobe’s heritage is built on providing trustworthy and innovative digital solutions and the values and mission of the Partnership on AI align perfectly with our commitment to Digital Citizenship. Through Adobe’s work with the CAI and on AI Ethics, we are focused on the responsible use of technology for the good of creatives, our customers and society. We are proud to join the Partnership on AI to further its collaborative, cross-industry approach to address the challenges of misleading and harmful content online.”


Dana Rao, Executive Vice President, General Counsel and Corporate Secretary at Adobe

Code for Africa (CfA), the continent’s largest network of civic tech and open data labs, is particularly sensitive to the unintended consequences when western information strategies are applied in the global south. CfA’s perspective will help the Partnership understand how we communicate this work to African audiences across borders, and ensure that policy is shaped with a more inclusive participation of a continent often overlooked in decision-making.

“Context matters. Machine learning is reshaping the way that authorities make decisions, impacting millions of lives. It also shapes how human rights defenders fight online hate and abuse, and even what is considered to be truth. But, in Africa, many of these decisions are based on data and algorithms that have no relevance to our reality. This new coalition will help change this.”

Justin Arenstein, CfA CEO

The Duke Reporters’ Lab, a center for journalism research and an incubator of new technologies, has a core focus on fact-checking that will deepen the Partnership’s expertise in using artificial intelligence to help journalists do their work. The Reporters’ Lab will provide insights from its efforts to create tools that allow platforms and independent fact-checkers to communicate rapidly and efficiently.


“The world’s fact-checkers are providing a vital service in the effort to combat the avalanche of misinformation, but too often their work is not used at scale. Automated systems allow fact-checking to be applied more quickly and even-handedly across multiple platforms, and working with the Partnership will help to ensure those systems are handled with care.”

Bill Adair, Lab Director

Full Fact is developing world-leading technology and research to spot repeated claims and to understand how bad information can be tackled at a global scale. Full Fact’s team of independent fact checkers and campaigners will help the Partnership understand how fact-checking can integrate with platforms and industry.

“Bad information ruins lives. We need to tackle it fast to protect people, and that means equipping fact checkers and others around the world with the right technology to track and challenge harmful claims as they emerge. Full Fact’s claim detection tools have shown how innovative use of AI can help slow the spread of misinformation, lending crucial support to our team of fact checkers throughout the pandemic. This partnership will help reach even more people with the right information when it matters most,” said Andy Dudfield, Head of Automated Fact Checking at Full Fact. “Fact checking is difficult and the stakes have never been higher. Together we can push for accountability, so that technology and information systems that impact public debate are kept open to public scrutiny.”

Andy Dudfield, Full Fact Head of Automated Fact Checking

Meedan is a technology non-profit that builds software and initiatives to strengthen global journalism, digital literacy, and accessibility of information for the world. Turning to Meedan’s experience in building programs with newsrooms, civil society, NGOs, and academic institutions, Meedan will help advance information integrity efforts with the Partnership.

“Tackling misinformation and other harmful content at scale requires a thoughtful combination of human and machine intelligence, social and computational science, and industry and academic expertise. We are excited to help bridge these divides and work with the Partnership to understand how approaches, data, and knowledge can be shared securely and ethically.”

Dr Scott A. Hale, Meedan Director of Research

These new members of the Partnership agree: artificial intelligence is here to stay. Yet the AI age brings with it questions about how to ensure that people’s online interactions are grounded in truth. The Partnership’s work on the AI and Media Integrity Program will bring together these voices toward the shared goal of combating the threats to public discourse that AI brings.

“Adobe, Duke Reporters’ Lab, Full Fact, Code for Africa, and Meedan add exceptional, varied expertise in content integrity, fact-checking, and mis/disinformation to the PAI community,” said Claire Leibowicz, who leads the AI and Media Integrity Program at Partnership on AI. “We are additionally gratified to be expanding PAI’s global reach — any meaningful solutions in this space require deep cultural awareness and nuance. We are eager to learn from this cohort.”