
Developing General Purpose AI Guidelines: What the EU Can Learn from PAI’s Model Deployment Guidance


Foundation models, also known as general purpose AI, are large-scale models trained on vast amounts of data that serve as starting points, or a ‘foundation,’ for developing AI systems across domains. These models can power applications ranging from content generation to interactive systems capable of performing complex digital tasks autonomously. Despite their immense utility, however, these general purpose AI systems can pose significant risks if not developed, deployed, and used responsibly.

To ensure the safety of these systems, industry leaders, governments, and civil society must work together to develop practical guidance and regulation that can address these risks. A year ago, the UK government hosted the first global summit on AI safety. Ahead of the summit, PAI released the Guidance for Safe Foundation Model Deployment, laying the foundation for governance of general purpose AI. The Guidance offers customized sets of responsible practices for model providers to safely develop and deploy models, specific to the capabilities of the model they are developing and their release approach.

Over the past year, the EU AI Act has set a global precedent for comprehensive AI regulation. The newly established European AI Office is now developing a specific Code of Practice for general purpose AI models, taking a similar multistakeholder approach as PAI, inviting model providers, downstream developers, civil society organizations, and academic experts to weigh in on the development of the Code of Practice. The first draft of the Code, prepared by independent experts, was released earlier this month, and drafting will continue into April 2025.

Based on lessons learned from our multistakeholder process developing the Model Deployment Guidance in 2023 and our recent expanded guidelines addressing the full AI value chain, we offer three key considerations for the European AI Office and other policymakers crafting foundation model guidelines:

  1. Create guidance that can be iterated upon:
    Policymakers should be prepared to revisit and iterate on guidance. Our October 2023 version of the Model Deployment Guidance, for example, launched a public comment period, and guidelines should likewise be refined and updated to reflect the now widespread adoption of foundation models. Feedback from that public comment period highlighted that as these models become more widely available and adaptable, responsibility for safe development and deployment extends beyond model providers alone. As a result, we were able to issue expanded guidance specific to open foundation models.
  2. Tailor guidance to specific model and release types:
    AI is not a monolith: different models require tailored approaches, and not every foundation model warrants the same level of oversight. Guidance should be customizable and reflective of these nuances, for example, by providing distinct recommendations for frontier models (paradigm-shifting general purpose models that advance the current state of the art). Research releases demonstrating new techniques or concepts do not require the same extensive guardrails as large-scale deployments that could impact millions of users. A model’s capabilities and how it is released can significantly influence its potential societal impact. Our guidance reflects this by providing tailored recommendations across different scenarios, from frontier model releases requiring extensive safeguards to closed deployments where models are directly integrated into products without public release. This latter scenario may become increasingly common as some companies, following patterns seen with recommender systems, opt for internal deployment. That is why we developed a custom guidance generator. To make this even more accessible, we have now published Guidance Checklists for three scenarios that warrant distinct governance approaches:

    • Frontier x restricted: Comprehensive guidelines for paradigm-shifting foundation models requiring extensive safety measures
    • Advanced x open: Decentralized approaches emphasizing collaborative value chain governance
    • Frontier x closed: Focused guidelines for internal deployments where models are directly integrated into products without public release
  3. Expand governance beyond model providers:
    Following the public release of our initial guidance, we published expanded recommendations in 2024 that examine the roles and responsibilities of key actors across the open foundation model value chain. While model providers play a crucial role, effective governance must also address model adapters who customize these models, hosting services that make them accessible, and application developers who build end-user products. Our value chain analysis demonstrates how each of these actors contributes to and shares responsibility for safe AI development and deployment.

A multistakeholder process isn’t just beneficial – it’s essential for developing robust governance frameworks that can effectively shape responsible AI policy.

More than 40 global institutions, including model providers, civil society organizations, and academic institutions, participated in developing PAI’s Model Deployment Guidance. Our work continues, with efforts now underway to reconcile and make sense of the emerging policy landscape on foundation models. Our recent report on Policy Alignment on AI Transparency provides a detailed analysis of eight leading policy frameworks for foundation models, with a particular focus on documentation requirements, and offers recommendations to promote interoperability.

Looking ahead, our focus on agentic AI – systems capable of acting autonomously on behalf of users – will mark a significant next step in our work. As these systems grow more sophisticated, we must develop governance frameworks that ensure their trustworthy and ethical deployment. Understanding and addressing the distinct challenges posed by agentic AI will be crucial for the future of human-AI interaction. To stay up to date on our work in this space, sign up for our newsletter.