
Shaping AI Transparency Processes with NIST

Informing Documentation Standards with PAI’s Enterprise Steering Committee


We are living through a pivotal moment in AI governance. 2026 marks the year that transparency obligations stop being aspirational and start being enforceable.

The EU AI Act’s transparency provisions, expected to take effect later this year, will require providers of AI systems to supply downstream deployers with documentation sufficient to meet their own compliance obligations. Providers of generative AI systems specifically must also mark AI-generated content in machine-readable formats, which is covered in the recently published Draft Code of Practice on marking and labelling of AI-generated content. This draft addresses broader labelling and disclosure requirements for generative AI providers and deployers, including disclosing when users are interacting with AI and labelling deepfakes. The final code is expected by June 2026. At the same time, in the United States, a patchwork of state-level AI regulations in Colorado, California, and Texas has introduced its own documentation, disclosure, and impact assessment requirements, even as the federal government pushes back on oversight.

This regulatory moment is arriving precisely when AI systems are becoming dramatically more capable and pervasive. According to a McKinsey report on the state of AI, sectors such as technology, media, telecommunications, and healthcare are showing the highest rates of adoption. 2025 was the year attention shifted from large language models to AI agents, and 2026 is the year organizations must grapple with what it means to integrate these systems responsibly.

With AI embedded in enterprise workflows, we must now ensure the infrastructure to govern it (standards, documentation, and accountability mechanisms) can keep pace. At PAI, we believe that documentation is not a bureaucratic box to check off. Rather, it is foundational to developing trust in responsible AI adoption.

Despite accelerating adoption, major barriers to responsible AI deployment persist. In a recent report, we identified three interconnected challenges: responsible AI adoption at scale, evaluation and compliance across the model lifecycle, and trust and collaboration across the AI value chain. We noted that for enterprises, these challenges are not abstract; responsible AI adoption correlates directly with return on investment, from reputational standing with customers to measurable financial outcomes. The breadth of AI use cases continues to expand, yet industry benchmarks remain aimed largely at academia, and at the model layer alone. This leaves a significant gap for enterprise teams trying to measure their AI systems' performance on real-world use cases, to upskill and build confidence in their teams, and to create and implement governance policies and legislation that won't quickly become outdated.

Documentation is one of the most concrete tools available to close this trust gap. Shared standards and documentation artifacts can give stakeholders — such as model providers, enterprise deployers, regulators, and end users — a common language for evaluating and governing AI systems.

“With AI embedded in enterprise workflows, we must now ensure the infrastructure to govern it can keep pace.”

Insights from a Listening Session with NIST

Agreeing on the need for documentation standards is the easy part. Developing the actual content of those standards, in a way that is technically rigorous, practically adoptable, and inclusive across sectors, requires sustained multistakeholder engagement.

Earlier this year, we held a listening session with our Enterprise Steering Committee and NIST, the National Institute of Standards and Technology, regarding their documentation on system and data characteristics for transparency among AI actors. The session surfaced three key insights we believe are essential to getting documentation frameworks right:

  1. We need a balance between prescriptiveness and adaptability. One way to balance prescriptiveness with adaptability is through “profiles,” a concept introduced by NIST. Universal templates would define shared fields and authoritative definitions, while domain-specific profiles allow sectors like finance, healthcare, or the creative industries to specify how those fields apply in their context. This approach addresses a genuine tension: the level of detail useful in documentation varies enormously depending on the regulatory environment, model complexity, and risk profile. Importantly, the profile approach also decentralizes standard-setting, creating space for multiple stakeholders to shape documentation practices that are actually relevant to their work.
  2. Plain language and accessibility are essential for greater transparency across the value chain. AI documentation must be legible to non-technical audiences, including policymakers, internal business teams, and the public. Without accessible documentation, transparency becomes performative: information technically disclosed but difficult to interpret. Concrete suggestions included adding plain-language examples and mock templates that demonstrate best practices. This would enable informed decision-making across communities.
  3. Iterative development requires a multistakeholder audience. As NIST’s work moves from zero-draft to a final framework, piloting with enterprises beyond the technology sector is critical for validating usability and identifying gaps that would otherwise remain invisible. Frameworks need to be built through an inclusive stakeholder approach so that they can be useful beyond traditional tech audiences. This iterative and inclusive process will build standards that hold up in practice.

Documentation as an Ongoing Commitment

Regulatory momentum is accelerating, but regulation alone cannot build the ecosystem of trust that responsible AI adoption requires. And unlike regulation, multistakeholder documentation efforts can move at the speed of the technology, adapting as new systems emerge, new risks materialize, and the field learns what works.

Effective governance must be: integrated across enterprise functions, built through collaboration on shared standards, dynamic enough to keep pace with evolving capabilities, and grounded in human impact. Shared language makes governance integration possible. Common reference points enable meaningful collaboration. A record of decisions and changes is what allows governance to evolve as capabilities do. And centering human impact is only possible when you can trace how a system has actually been used, and what it has produced.

Documentation is one of the few mechanisms that travels with the technology itself, embedded in the artifacts that follow a model from development to deployment to post-deployment impact. Standardizing documentation is a step toward good governance, making responsible AI adoption possible at scale. At PAI, we'll continue to support efforts to improve standards and practices, making documentation practical, accessible, and actionable, so that the impacts and risks of AI models can be understood and managed across the value chain. To stay up to date on our work in this area, sign up for our newsletter.