Organized by Ada Lovelace Institute
The Ada Lovelace Institute was established by the Nuffield Foundation in early 2018, in collaboration with the Alan Turing Institute, the Royal Society, the British Academy, the Royal Statistical Society, the Wellcome Trust, Luminate, techUK and the Nuffield Council on Bioethics.
The mission of the Ada Lovelace Institute is to ensure that data and AI work for people and society. The Institute believes that a world where data and AI work for people and society is one in which the opportunities, benefits and privileges generated by data and AI are justly and equitably distributed and experienced.
In many corporate and academic research institutions, one of the primary mechanisms for assessing and mitigating ethical risks is the use of Research Ethics Committees (RECs), also known in some regions as Institutional Review Boards (IRBs) or Ethics Review Committees (ERCs).
However, the current role, scope and function of most academic and corporate RECs are insufficient for the myriad ethical challenges that AI and data science research can pose. For example, the scope of REC review is traditionally limited to research involving human subjects.
A recent report from the Ada Lovelace Institute explores the role that academic and corporate RECs play in evaluating AI and data science research for ethical issues, and investigates the common challenges these bodies face.
The report draws on two main sources of evidence: a review of existing literature on RECs and research ethics challenges, and a series of workshops and interviews with members of RECs and researchers who work on AI and data science ethics.
Some of the questions discussed at this event include:
- How well are RECs at universities and private AI labs addressing the full range of risks that AI and data science research poses?
- What are the most common types of risks and challenges that AI and data science research poses for RECs?
- How might RECs need to change their structure and makeup to assess these risks?
- How can RECs assess the broader societal impacts or downstream risks of AI and data science research?
- What kinds of funding or support do RECs need to provide greater assurance that AI and data science research is safe?
You can watch a recording of the event below.