3.4.2 Suggested Documentation Sections for Models


Machine learning models use statistical techniques to make predictions based on known inputs (Momin M. Malik (2019). Can algorithms themselves be biased? Medium. https://medium.com/berkman-klein-center/can-algorithms-themselves-be-biased-cffecbf2302c). They are incorporated into many real-world systems and business processes where prediction and estimation are valuable.

Model transparency is important because ML models are used in making decisions, and a society can be accountable and fair only if the decision-making within it is understandable, accountable, and fair. Clarifying the basis of a recommendation helps achieve these objectives. When people know that the models they are designing will be held accountable and made understandable, they have strong reasons to aim for fairness.

Model documentation becomes even more important as machine learning gets incorporated into systems making high-stakes decisions. For example, some states in the US are implementing ML-based risk assessment tools in the criminal justice system. From a societal perspective, it is important that any products with so much potential impact on individual well-being are accountable to the people they impact, so it is particularly untenable for these products to remain wholly “black boxes.” Other high-stakes applications of machine learning include models that determine the distribution of public benefits, models used in the healthcare industry that impact consumer premiums under risk-based payment models, or facial recognition models used by law enforcement.

The documentation steps outlined in this section apply to models built on static data (data that does not change after being recorded) using various methods, including supervised learning, unsupervised learning, and reinforcement learning. Models that use streaming data, such as online learning models whose datasets and metrics change dynamically, are also relevant, but at this time the guidelines below are less applicable to them.

It is very important to tailor the documentation to the specific goal of disclosing model-related information, including considering the most relevant audiences for achieving that goal. If the key audience is end users of a consumer-facing product, the disclosure should be less technical to avoid overwhelming them. In particular, companies should avoid making disclosures so complicated that they reach a status similar to Terms of Service (ToS) documents, which unfortunately can be so cumbersome that they serve only to protect institutions rather than to inform or help users. Policymakers and advocacy groups can play a role in ensuring that transparency disclosures do not evolve in that direction. In contrast, if the largest audience for a set of ML documentation is other developers at the same company, the disclosures can be much more technical and detailed. Of course, various details differ depending on the audience and context of use; one of the goals of later establishing best practices is to outline the requirements and expectations for transparent documentation in various common scenarios. For example, a non-technical one-pager may be suitable for the average consumer but is insufficient as an auditable document for policymakers and advocacy groups in high-stakes contexts.

Internal disclosures can be helpful in allowing developers from the same organization to learn from each other’s work. That said, internal disclosures should not legitimize or spread bad practices. The company should independently set and enforce high standards for models, providing enough human and capital resources to support the integration of transparency practices.

A common theme throughout this section is the importance of ensuring that model disclosures do not create security or IP risks. Depending on what information about the model is disclosed and whether the documentation is for internal or external consumption, there may be concerns that malicious actors could use this information to attack the system more effectively or that the company’s trade secret protections could be compromised.


Finally, developers should be wary of Goodhart’s Law when making model-related disclosures. Goodhart’s Law suggests that once a measurement becomes a target, it is no longer a good measurement. In this context, the worry would be that disclosing the details of the model might incentivize individuals to game the system by adjusting their actions to achieve their desired outcome. For example, Goodhart’s Law has been observed in current academic publishing practices, with researchers gaming metrics intended to measure academic publishing success by increasing the number of self-citations, slicing studies into the smallest quantum acceptable for publication, and indexing false papers (Fire, Michael, and Carlos Guestrin (2019). “Over-Optimization of Academic Publishing Metrics: Observing Goodhart’s Law in Action.” GigaScience 8 (giz053). https://doi.org/10.1093/gigascience/giz053).

Another problematic unintended consequence would be companies hiding key information by disclosing a high volume of less crucial information. This highlights the importance of treating ML documentation as a process that aims to prompt deep reflection about the impact of products that include ML models, with documentation artifacts as a byproduct, rather than documenting for the sake of being able to claim that documentation was created.

3.4.2.1 Model Specifications

Specifications

We borrow from Vogelsang and Borg (2019) and note that model specifications can include information about:

  • Quantitative targets
  • Data requirements
  • Explainability
  • Freedom from discrimination
  • Legal and regulatory constraints
  • Quality requirements

This section assumes that the intention for building the model has been documented earlier in the process, including task and system specification. There are three subjects to consider in specification:

  1. specifications for building models,
  2. specifications for evaluating models, and
  3. additional specifications for models used in high-stakes or high-risk scenarios (Vogelsang, A., & Borg, M. (2019, September). Requirements engineering for machine learning: Perspectives from data scientists. In 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW) (pp. 245-251). IEEE).

Within building models, key questions to document include the choice of input structure (e.g., features, architecture, pretrained embeddings and other complex inputs), the choice of output structure, the choice of loss function and regularization, where random seeds come from and where they are saved, hyperparameters, the optimization algorithm, and generalizability, measured by how large a gap between training and test performance is expected.
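As an illustration only, these build-time choices can be captured in a record stored alongside the model. The sketch below is hypothetical: the field names and example values are not a standard schema, just one way to make the choices explicit and machine-readable.

```python
from dataclasses import dataclass, field

@dataclass
class ModelBuildSpec:
    """Hypothetical record of build-time choices worth documenting."""
    input_features: list[str]          # e.g., raw columns, pretrained embeddings
    architecture: str                  # e.g., "gradient-boosted trees", "transformer"
    output_structure: str              # e.g., "binary probability", "per-class scores"
    loss_function: str                 # e.g., "binary cross-entropy"
    regularization: dict = field(default_factory=dict)   # e.g., {"l2": 1e-4, "dropout": 0.1}
    optimizer: str = "adam"            # optimization algorithm
    hyperparameters: dict = field(default_factory=dict)
    random_seed: int = 0               # also note where the seed is stored
    expected_train_test_gap: str = ""  # anticipated generalization gap, with rationale

# Illustrative values only.
spec = ModelBuildSpec(
    input_features=["age", "income", "text_embedding_v2"],
    architecture="2-layer feed-forward network",
    output_structure="calibrated probability of default",
    loss_function="binary cross-entropy",
    regularization={"l2": 1e-4},
    hyperparameters={"learning_rate": 1e-3, "batch_size": 256},
    random_seed=42,
    expected_train_test_gap="~2 percentage points AUC, based on prior versions",
)
```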

Generalizability

Generalization usually refers to the ability of an algorithm to be effective across a range of inputs and applications. It is related to repeatability in that we expect a consistent outcome based on the inputs.

To create good predictive models in machine learning that are capable of generalizing, one needs to know when to stop training the model so that it doesn’t overfit.
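As a hedged illustration of the point above, one common heuristic for deciding when to stop is patience-based early stopping against a held-out validation set. The sketch below assumes caller-supplied `train_step` and `evaluate` callables and is not tied to any particular framework.

```python
def train_with_early_stopping(train_step, evaluate, max_epochs=100, patience=5):
    """Stop training once validation loss has not improved for `patience` epochs."""
    best_val_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()            # one pass over the training data
        val_loss = evaluate()   # loss on a held-out validation set
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            epochs_without_improvement = 0
            # a real pipeline would checkpoint the best model here
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"Stopping at epoch {epoch}: no improvement for {patience} epochs")
                break
    return best_val_loss
```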

For evaluating models, it is key to discuss what kinds of tests the model developer runs on the output, how the developer plans to identify and mitigate sampling bias (e.g., using a second source of truth to mitigate selection bias via reweighting), and how model performance on real-world data will be evaluated relative to the test set (what threshold is acceptable, and what kinds of use cases should be disallowed based on the results). A minimal sketch of the reweighting idea follows.
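To make the reweighting idea concrete: if a trusted second source of truth gives the real-world proportions of a subgroup attribute, per-example weights can rebalance the evaluation set toward that distribution. The group labels and reference proportions below are invented for illustration.

```python
from collections import Counter

def reweight_to_reference(groups, reference_proportions):
    """Return per-example weights so the weighted sample matches the reference distribution."""
    counts = Counter(groups)
    n = len(groups)
    sample_proportions = {g: counts[g] / n for g in counts}
    return [reference_proportions[g] / sample_proportions[g] for g in groups]

# The evaluation set over-represents group "A" relative to a trusted external source.
groups = ["A", "A", "A", "B"]        # group label per evaluation example
reference = {"A": 0.5, "B": 0.5}     # proportions from the second source of truth
weights = reweight_to_reference(groups, reference)
print(weights)  # [0.666..., 0.666..., 0.666..., 2.0]
```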

If the use case involves high stakes for affected parties, it is essential to ensure and document that the choice of output structure and loss function appropriately encode and convey uncertainty both about predictions and across possible system goals (Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064).

Pros/Cons

The benefits of documenting these details of model specification include reproducibility, spotting potential failure modes, and helping people choose between models for different use cases. There are potential security risks in revealing certain types of information. Proactive communication, thoroughly explaining the severity of risk across the spectrum of documentation, and sharing the risk mitigation plan may help to alleviate these concerns. The risk of revealing “trade secrets” applies more to black box models, as disclosing some of these specifications may make it easier for others to reverse-engineer the model and thus obtain information that a company considers a trade secret. Explorations related to the following research questions could uncover insights into barriers to implementation, along with mitigation strategies to overcome those barriers.

Sample Documentation Questions
  • What is the intended use of the service (model) output? (Arnold et al. 2018)
      • Primary intended uses
      • Primary intended users
      • Out-of-scope and under-represented use cases
  • What algorithms or techniques does this service implement? (Arnold et al. 2018)
  • Model Details. Basic information about the model. (Mitchell et al. 2018)
      • Person or organization developing the model and contact information
      • Model date
      • Model version
      • Model type
      • Information about training algorithms, parameters, fairness constraints or other applied approaches, and features
      • Paper or other resource for more information
      • Citation details
      • License

3.4.2.2 Model Training


The focus of this stage in the ML lifecycle is on sharing how the model was architected and trained and the process that was used for debugging.

Choices of ML model architectures have numerous consequences that are relevant to downstream users, so it is essential to document both the choices and the rationales behind them. Did the designers choose a random forest, recurrent network, or convolutional network, and why? What was the capacity of the model, how does it line up with the dataset size, and what are the risks of overfitting? What was being optimized for, and what regularization terms and methods were used?

Some particular considerations may apply to architectures for models that will be used for high-stakes purposes: the wrong choice of optimization function or prediction objective can create significant risks of unintended consequences in deployment. In general, sufficiently high-stakes ML systems should produce outputs that are explicitly uncertain both about prediction (Partnership on AI. Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, Requirement 5) and across different competing specifications of the system’s goals (Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064. https://arxiv.org/abs/1901.00064).
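As a rough sketch of what explicitly uncertain outputs can look like, a model might return an interval rather than a single point estimate. The example below uses disagreement across a toy ensemble as a stand-in for a real uncertainty estimate; the names, the toy models, and the normal-approximation interval are illustrative assumptions, not a recommended method.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class UncertainPrediction:
    point_estimate: float
    lower: float   # e.g., approximate 10th percentile
    upper: float   # e.g., approximate 90th percentile

def predict_with_ensemble(models, x):
    """Illustrative: summarize disagreement across an ensemble as an interval."""
    outputs = [m(x) for m in models]
    mu, sigma = mean(outputs), stdev(outputs)
    # 1.28 is the ~90th-percentile z-score; assumes a roughly normal spread.
    return UncertainPrediction(point_estimate=mu,
                               lower=mu - 1.28 * sigma,
                               upper=mu + 1.28 * sigma)

# Toy ensemble of three "models" that disagree about the same input.
ensemble = [lambda x: 0.9 * x, lambda x: 1.0 * x, lambda x: 1.2 * x]
print(predict_with_ensemble(ensemble, 10.0))
```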

A separate datasheet should be attached to all datasets used in this process, likely including the training data and the validation data used while adjusting the model. If federated learning or other cryptographic privacy techniques are used in the model, the datasheet may need to be adapted accordingly. Key questions for the validation data include how closely the data match real-world distributions, whether relevant subpopulations are sufficiently represented, and whether the validation set was a simple hold-out set or whether an effort was made to make it more representative of the real-world data distribution. Additionally, documentation should note any preprocessing steps taken, such as calibration corrections.

Another option is to add a link to the source code, which is again more likely for academic and open-source models than for industry/commercial models. It is important to document the versions of all libraries used, GitHub links, machine types, and the hyperparameters involved in training. This increases reproducibility and helps future users of the model debug in case of difficulty. For very large datasets, sharing information about the compute platform and the rationale behind hardware choices also helps future researchers and model developers contextualize the model. Lastly, it is highly valuable to disclose how long the model took to train and with what magnitude of compute resources, as this allows future researchers to understand what level of resourcing a similar model would require to build.
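A minimal sketch of capturing such environment and resourcing information alongside a trained model is shown below; the file name and fields are illustrative, not a standard format.

```python
import json
import platform
import sys
import time

def capture_training_metadata(hyperparameters, seed, started_at, finished_at):
    """Record the environment and settings needed to reproduce a training run."""
    return {
        "python_version": sys.version,
        "platform": platform.platform(),
        "machine": platform.machine(),
        "hyperparameters": hyperparameters,
        "random_seed": seed,
        "training_duration_seconds": finished_at - started_at,
        # In a real pipeline, also record library versions (e.g., the output of
        # `pip freeze`), the Git commit of the training code, and accelerator type.
    }

start = time.time()
# ... training would happen here ...
metadata = capture_training_metadata({"learning_rate": 1e-3}, seed=42,
                                      started_at=start, finished_at=time.time())
with open("training_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```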

Pros/Cons

As mentioned above, much of the documentation in this section is for the purpose of allowing other parties to build similar models, increasing reproducibility. Information on compute and hardware resources used also gives researchers the ability to judge how accessible the model is.

Debugging is the other large benefit of such robust documentation. For example, if a model has 94% accuracy in training but 87% in test, knowing the original settings allows evaluators to identify whether this difference in performance comes from different settings or from other factors. Any evaluation of performance, though, needs to keep in mind that test performance will typically be somewhat worse than training performance. Combining model documentation with datasheets for the training data gives evaluators the information to rule out performance changes due to changes in data or parameters. The evaluators can be internal stakeholders from testing teams or external stakeholders such as customers who purchase the model for deployment in their business processes.

Finally, this information builds trust between research labs, the general public, and policymakers, as each party gains insight into how otherwise “black box” models were constructed. It also informs and educates the public on typical practices which can be important for later reputational considerations, ex ante regulation, or common law concepts of reasonableness.

Documentation for models can be both highly technical and lengthy, which creates readability risks. It is important to present the information in a reader-friendly manner to ensure the hard work of documentation yields the benefits outlined above and to prevent creating burdensome documentation that pushes the work unnecessarily onto users and consumers of the model. Explorations related to the following research questions could uncover insights into barriers to implementation, along with mitigation strategies to overcome those barriers, and produce checkpoints for testing impacts on certain demographics.

Sample Documentation Questions
  • What training data is used? May not be possible to provide in practice. When possible, this section should mirror the evaluation data section. If such detail is not possible, minimal allowable information should be provided here, such as details of the distribution over various factors in the training datasets. (Mitchell et al. 2018)
  • What type of algorithm is used to train the model? What are the details of the algorithm’s architecture (e.g., a ResNet neural net)? Include a diagram if possible.