|  |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- |
| Pre-deployment / on deployment | Publicly report model impacts<br>“Key ingredient list”: including details of evaluations, limitations, risks, compute, parameters, architecture, training-data approach, and model documentation<br>Disclose performance benchmarks, intended use, risks and mitigations, testing and evaluation methodologies, and environmental and labor impacts<br>Downstream-use documentation: including appropriate uses, limitations, mitigations, and safe development practices<br>Share red-teaming findings | Technical documentation: including information about training, testing, and evaluations<br>Documentation for downstream developers: including information about capabilities and limitations, and to aid downstream compliance<br>Public summary of training data | Report red-teaming results to the Department of Commerce | Multiple guidelines for documentation, including of:<br>- Risks and potential impacts<br>- Knowledge limits<br>- TEVV (test, evaluation, verification, and validation) considerations and tools<br>- Measures of trustworthiness<br>- Residual risks after mitigations<br>- Model details<br>- Data-curation policies<br>- Environmental impacts | Technical documentation<br>Transparency reports with “meaningful information”<br>Instructions for use<br>Technical documentation<br>Documentation to include details of evaluations; capabilities and limitations with respect to domains of use; risks to safety and society; and red-teaming results |
| Across lifecycle | Iteration of model development<br>Provide documentation to government as required<br>Environmental impacts<br>Severe labor-market risks<br>Human rights impact assessments | N/A | N/A | Multiple guidelines to document processes and management systems | “Work towards” information sharing and incident reporting, including on:<br>- Evaluation reports<br>- Safety and security risks<br>- “ensuring appropriate and relevant documentation and transparency across the AI lifecycle”<br>Document datasets, processes, and decisions during development<br>Regularly update technical documentation |