Prioritizing AI Assurance and Civil Society Engagement Following India’s AI Impact Summit

This week’s AI Impact Summit in New Delhi marked a pivotal moment in global AI governance. As the first international AI summit hosted in the Global South, it signaled that AI’s future cannot be determined solely by a handful of companies and countries in the Global North, but must reflect the needs and values of humanity as a whole.

As AI drives change across economies and societies, policymakers are recognizing its capacity to promote growth and solve pressing challenges. We have seen an increasing policy shift toward promoting AI adoption, reflected in the US AI Action Plan, the EU AI Continent Action Plan, and the G7 AI Adoption Roadmap.

This Summit’s emphasis on impact is also a call to move beyond high-level principles and aspirational commitments toward implementation and measurable outcomes, both in expanding beneficial AI adoption and in mitigating risks and harms.

Yet the Summit’s discussions this week revealed two critical priorities for the future of global AI governance:

1. Ensuring meaningful participation by people and civil society in AI governance
2. Strengthening the AI assurance ecosystem

First, the Summit’s “Inclusion for Social Empowerment” Chakra rightly highlights the need for AI systems that are locally relevant, culturally respectful, and inclusive of the communities they affect. But achieving this goal requires more than inclusive product design and evaluations. It also demands meaningful participation by people and civil society in wider AI governance and assurance at local, national, regional, and global levels.

This starts with the AI summits themselves. Civil society must have not just a spot on panel discussions but a seat at the table where official outcomes, including those relevant to AI assurance, are negotiated and drafted. Without this deeper engagement, we risk perpetuating the very power imbalances these summits aim to redress.

Second, the “Safe and Trusted AI” Chakra focuses on democratizing access to technology-enabled governance tools, empowering all nations with the technical capabilities to govern AI effectively. This is necessary, but we need to go further. Democratizing AI trust and safety requires more than narrow technical tools; it demands ecosystem-level reform that accounts for social, institutional, legal, and political realities.

To address these gaps and truly bridge the Global AI Divide, global efforts must focus on strengthening the AI assurance ecosystem. This theme resonated across several Summit side events this week, including panels hosted by EY, techUK, OECD and GPAI, JPMorganChase, and Fathom. Crucially, bridging the divide also means addressing persistent inequalities between the Global North and South and between governments and the people affected by AI systems — a priority underscored by PAI in its Summit side event.

Building Trust Through AI Assurance

Across industries, assurance can help build trust across the value chain and in the end product by communicating how trustworthy product components, processes, and professionals are. When buying a newly constructed home, for example, trust that the house is safe and livable is the result of a system of assurance: building codes, professional credentials such as licenses for contractors and crane operators, and requirements for ‘construction grade’ materials.

Similarly, AI assurance is the process of assessing and communicating whether a given AI model, system, or any of its components is trustworthy, across dimensions such as performance, safety, security, fairness, robustness, and transparency.

Key parts of AI assurance include:

    • Frameworks and standards that set the criteria against which AI models and systems are assessed.
    • Processes, tools, and metrics used to identify AI risks or impacts, or to assess AI against those criteria.
    • Experts with the skills and resources needed to conduct evaluations, audits, and other relevant activities, whether market-based entities supplying services for a fee or independent researchers from academia, civil society, and other sectors.

In our latest paper, Strengthening the AI Assurance Ecosystem, we identify priority areas for policy action, including:

    • Investing at both the ecosystem level and the assurance-component level to advance trust and innovation.
    • Establishing assurance as part of the AI lifecycle to make it an iterative process and not just a final step in product development.
    • Improving access to models, systems, and documentation by external assurers and funding spaces for collaboration across industry, civil society, and academia.
    • Strengthening governance and global standards, and empowering AI safety institutes.

Unfortunately, the AI assurance ecosystem today is fragmented and incomplete, marked by two primary challenges identified in our research. First, there is a distinct lack of professionalization: clear professional pathways, codes of conduct, accreditation frameworks, and oversight bodies are largely absent. Second, market demand for independent assurance is insufficient, and persistent barriers limit access to key assurance inputs (models, systems, other AI components, and documentation), compounded by weak incentives for AI actors to grant such access.

In the Global South, many countries face acute challenges in building domestic AI assurance ecosystems. In our newly published policy brief, Closing the AI Assurance Divide: Policy Strategies for Developing Economies, we note several of these hurdles, including the vast diversity within and across countries, spanning languages, cultural values, AI risk profiles and tolerance levels, and the legal and policy frameworks applicable to AI. We also recognize capacity constraints stemming from the resource-intensive nature of AI assurance, including the time, infrastructure, and highly specialized multidisciplinary skills it requires. These issues are further exacerbated by legal and political hurdles, including regulatory gaps, insufficient market incentives, limited representation in international forums, and concerns about diplomatic backlash.

Overcoming these challenges requires concerted multistakeholder action by governments, AI developers and deployers, and assurance practitioners across both developing and developed economies. There is no one-size-fits-all approach to AI assurance: developing countries need flexible, locally grounded strategies that balance technical rigor, socio-technical sensitivity, and cost-effectiveness, built through participatory processes with experts and communities on the ground.

The Opportunity in India and Beyond

The India AI Impact Summit offers a unique opportunity for governments, industry, and civil society to engage on these challenges with stronger participation from Global South stakeholders. In that sense, it represents an important first step toward closing the Global AI Divide.

But achieving this ambitious goal requires that the discussions and key points raised on assurance don’t end in New Delhi. It requires strengthening the full AI assurance ecosystem through sustained multistakeholder collaboration and, in particular, ensuring that civil society and affected communities have a meaningful voice not just in conversations, but in the decisions that shape AI’s global future.

PAI’s AI assurance workstream, launched in February 2025 during the AI Action Summit in Paris, provides resources to guide holistic policymaking and to identify the key needs for advancing the AI assurance ecosystem at regional, national, and global levels. Building trust in AI is essential: unlocking the technology’s full potential requires confidence that these systems are safe, reliable, and fair.