
Building a Robust AI Assurance Ecosystem: PAI’s Recommendations for the G7 2025 Summit


This week, G7 Leaders are gathering at the 2025 Summit in Kananaskis, Alberta, from June 15 to 17. As at previous G7 Summits since at least 2018, Innovation, Digital and/or Technology Ministers will meet to discuss how to manage rapidly evolving technologies, including AI. A consistent theme in this group’s ministerial declarations is how to ensure that AI is designed and deployed in a trustworthy way.

Building trust in AI is a key aim of AI assurance – the process of measuring, evaluating and communicating the trustworthiness of AI systems. Assurance mechanisms include testing, evaluation, validation and verification (TEVV) as well as audits and certifications. With the appropriate infrastructure in place, these tools can measure different AI attributes – including reliability, safety, security, privacy and respect for other human rights – against relevant standards, giving people the confidence to use AI for a range of beneficial applications. Different inputs feed into AI assurance tools, including transparency and incident reports.

Ahead of the upcoming G7 Summit, Partnership on AI (PAI) sees two clear opportunities for action by G7 members in the field of AI assurance, building on the progress achieved by the Group so far:

  1. working to define and harmonize AI assurance tools, as well as to better understand the roles of different actors, particularly through the development of international standards;
  2. supporting the design and use of AI assurance tools in line with sustainable development needs and goals.

AI assurance featured prominently in the Hiroshima Process International Guiding Principles and the International Code of Conduct for Organizations Developing Advanced AI Systems, adopted by G7 leaders in 2023. More recently, with the support of the OECD, the G7 launched a voluntary reporting framework to encourage transparency and accountability among organizations developing or deploying advanced AI systems. Currently, 20 organizations – including some of PAI’s partners – have submitted transparency reports. These outline, among other things, the extent to which submitting organizations have used various assurance mechanisms, including evaluations and testing, as well as how they have produced relevant inputs, such as transparency and incident reports.

Gaps and Opportunities for the G7 Ahead

While these are all steps in the right direction, we need continued support for and implementation of AI assurance mechanisms within the G7 and partner institutions. That is why we announced, at the 2025 AI Action Summit in France, the launch of a project that seeks to articulate what a vibrant and effective AI assurance ecosystem looks like domestically and globally. Through this work, critical gaps have become apparent. The next G7 Summit is a unique opportunity to address some of those gaps, keeping the momentum going towards developing a robust global AI assurance ecosystem.

More work needs to be done to define and harmonize different assurance mechanisms and to better understand the roles of different actors, as several inconsistencies have emerged across the voluntary reports submitted under the Hiroshima AI Process Reporting Framework. The UK’s Department for Science, Innovation & Technology (DSIT) has carried out some foundational work in this area, while the US’s National Institute of Standards and Technology (NIST) is now ramping up related work within its AI Standards “Zero Drafts” Pilot Project.

But standards cannot be developed in isolation, and the G7 is uniquely placed to support this work. This is why, in their 2024 Declaration, the G7 Industry, Technology and Digital Ministers recognized the continued importance of G7 collaboration on technical standards for digital technologies and reaffirmed their support for international standards development processes based on inclusive multistakeholder engagement. The G7 should continue to support the development of technical and socio-technical international standards for AI in order to drive forward this assurance alignment and interoperability. This includes standards on the AI attributes against which assurance is measured as well as standards for the assurance mechanisms themselves.

As noted in our previous research, standards – voluntary guidelines for industry – can be an important tool to foster interoperability of AI governance practices within and across domestic jurisdictions. They can lower compliance burdens and avoid a race to the bottom, promoting AI innovation and adoption while anchoring democratic values within the G7 and beyond. We welcome the new US Center for AI Standards and Innovation (CAISI)’s drive to assist industry in developing standards and conducting AI evaluations and assessments, while cooperating with international allies. We encourage all G7 member States to come together in this endeavor to strengthen the global AI assurance ecosystem, in particular by leveraging and expanding the International Network of AI Safety Institutes.

This work becomes especially important as AI systems become more widespread and have the potential to significantly impact the lives of workers and the economy more generally, as noted in the G7 Labour and Employment Ministerial Declaration and the G7 Action Plan for a human-centered development and use of safe, secure and trustworthy AI in the World of Work. Likewise, with the rise of increasingly autonomous AI systems that can take action in physical or virtual environments – such as AI agents – the stakes of using AI become even higher. Assurance is key to testing the reliability and other attributes of such systems.

Fostering Sustainable AI Development

Building on the 2024 G7 Industry, Technology and Digital Ministers’ Declaration, PAI also recommends strengthening the G7’s commitment to ensuring that AI is developed and deployed for inclusive and sustainable development. We welcome the Italian 2025 Presidency’s efforts in this regard, in particular the establishment, in partnership with the United Nations Development Programme (UNDP), of the AI Hub for Sustainable Development, which focuses on supporting AI industry growth in Africa. But continued efforts are needed to ensure that AI fosters, rather than hinders, inclusive and sustainable development around the world. One particular research gap we are looking to fill is how to develop assurance tools that take into account the needs of Global South developers and deployers while measuring compliance with the Sustainable Development Goals. We believe the G7 can help fill this and other gaps in AI for sustainable development.

In particular, G7 countries – individually or together – should follow in the footsteps of the Japanese and Italian presidencies and support AI for sustainable development projects in other regions of the Global South, including Latin America and Southeast Asia. Likewise, we commend the recognition, in the 2024 G7 Industry, Technology and Digital Ministers’ Declaration, of ongoing international discussions on digital public infrastructure (DPI), including in the context of the G20, and of its potential to foster government resilience and promote more inclusive and sustainable economic growth. As the G20 South African presidency gears up to advance action on DPI and AI for the Sustainable Development Goals, the G7 becomes a crucial partner. As with the Current AI Foundation, launched at the 2025 AI Action Summit in France, more G7 countries should join forces with G20 countries to fund the development of AI for the public interest and the necessary DPI. Without DPI, it will be extremely difficult for Global South countries to support AI development and adoption in line with the Sustainable Development Goals, given the scarcity of private investment in digital infrastructure.

Looking Ahead

As G7 leaders prepare to meet this week, we encourage them to take the opportunity to build on this momentum and make tangible progress. To realize a robust AI assurance ecosystem, we must develop international standards and invest in sustainable development through digital public infrastructure. By focusing on these priorities, the G7, with the help of multistakeholder partnership organizations such as PAI, can foster the development of safe, equitable, and trustworthy AI for people and society. To stay up to date with our work in this space, sign up for our newsletter.