While developing OPT-175B, a large language model with 175 billion parameters, AI researchers at Meta, a Partnership on AI (PAI) Partner, knew they wanted to ground the project in principles of transparency, reproducibility, and responsibility. And when it came time to put those principles into practice, Meta’s researchers turned to PAI’s resources for guidance.
“In a rapidly advancing field like AI, it can be difficult for researchers to follow best practices because often those practices haven’t been established yet,” said Susan Zhang, one of Meta’s lead researchers on OPT-175B. “While seeking to responsibly publish our research, we were fortunate to have PAI’s thoughtful recommendations.”
Large language models are predictive AI systems trained on large amounts of text, enabling some of today’s most powerful text generation tools. OPT-175B’s developers believe that the best way to understand the downstream consequences of models like these is by opening them up to wider scrutiny. “OPT” stands for “Open Pretrained Transformer,” and OPT-175B is the first model of this size to be released under a noncommercial license with all of the code necessary to train it. But what information is appropriate to disclose, and how can this be done responsibly? To help answer these questions, researchers at Meta consulted the PAI white paper “Managing the Risks of AI Research,” citing it as a key source of guidance.
Published in May 2021, the white paper provides six specific recommendations for fostering a responsible AI research community. These recommendations synthesize key themes that emerged from PAI’s research, convenings, and consultations with a diverse set of stakeholders, including Meta and other PAI Partners.
Among other recommendations, the paper highlights the need for researchers to disclose additional information in their papers, such as computational resources used during development. In the blog post announcing the new model, Meta researchers noted that they were “releasing all our notes documenting the development process” as well as “how much compute was used to train OPT-175B and the human overhead required.”
“It’s our hope that the publication of OPT-175B will help provide a model for transparency that others in AI can follow,” said Joelle Pineau, managing director at Meta AI. “We need to include many voices if we want to have a responsible AI community. It’s clear that PAI’s work is bringing together that diversity of voices.”
PAI is committed to building a world where all people share in the benefits of AI, working with Partners and others to create change in practice. By developing actionable guidance and encouraging its adoption, we look forward to a future where responsible AI practices are the norm.