
Shaping the Future of AI Policy: A Q&A with David Wakeling


Read our other speaker Q&A from PAI’s AI Policy Forum here.

In September 2024, the Partnership on AI (PAI) held its AI Policy Forum in New York City, bringing together thought leaders, industry representatives, and policymakers from around the world to discuss the evolving landscape of global AI governance. From ethical AI practices to global interoperability, these conversations reflected our efforts to shape inclusive policies worldwide. This new series of Q&As highlights some of the leaders who spoke at the Forum, offering deeper insights into their work on AI and governance.

Today, we are sharing our interview with David Wakeling from A&O Shearman, who gave a Lightning Talk at the AI Policy Forum.

David Wakeling is a Partner at A&O Shearman and the Global Head of the firm’s Markets Innovation Group (MIG) and AI Advisory Practice. He advises some of the largest companies in the world on the safe deployment of AI in their businesses and is regarded as one of the leading AI lawyers globally. He is also leading on the development and implementation of A&O Shearman’s own AI strategy and chairs the firm’s AI Steering Committee, playing a pivotal role in the firm’s deployment of generative AI, which began in December 2022 with the rollout of Harvey, the LLM that has been fine-tuned for law. This move made the firm the first in the world to roll out generative AI at enterprise level.

Under David’s leadership, MIG developed ContractMatrix, an AI-powered contract drafting and negotiation tool, in partnership with Microsoft and Harvey. This made A&O Shearman the first law firm globally to collaborate with Microsoft to develop an AI tool. ContractMatrix is used extensively within A&O Shearman and is also licensed to clients. Additionally, David leads the firm’s AI advisory practice. As part of this, he led the formation of an AI working group, through which the firm is currently advising 80+ of the largest global businesses across the Middle East, APAC, US, UK and EU on the key legal, risk management and deployment issues raised by generative AI. David previously served as Co-chair of A&O Shearman’s Race and Ethnicity Committee.

In his talk at PAI’s AI Policy Forum, David highlighted the challenges of navigating what is currently a fragmented and unclear AI policy and regulatory landscape. He discussed the impact of businesses making decisions about AI deployment without clear guidance and talked about the need for a unified, global, high-level AI policy framework. He stressed the importance of international cooperation between regulators and industry to establish what this high-level framework, as well as domestic rules and regulations, means in practice. He argued that effective international, multistakeholder collaboration would support the development of interoperable, consensus-based standards, which would enable responsible innovation.

Video: David Wakeling’s talk at PAI’s AI Policy Forum

Thalia Khan: What is the biggest misconception the general public has about AI, and how can we better educate them?

David Wakeling: One misconception that springs to mind is that AI is going to take all of our jobs. This is problematic because it, understandably, instills fear in people. The AI systems available today have a lot of limitations, the fact that they hallucinate being an obvious example. At A&O Shearman, we see AI as augmenting our lawyers and staff more broadly, as opposed to displacing them. There’s always an expert in the loop who validates and finesses any AI output. Even as AI systems become increasingly sophisticated, AI will never be able to displace the work humans do entirely; there are many human skills and attributes that cannot be automated, such as the ability to think critically and strategically, as well as to build effective relationships. I think we can alleviate concerns by raising awareness of AI’s limitations, as well as by demystifying AI and improving AI literacy more broadly.

TK: Why do you think multistakeholder collaboration is important in shaping AI policy, and how can we ensure all voices are heard?

DW: Multistakeholder collaboration is vital when developing AI policy; it’s why we were eager to become the sole legal adviser to the Partnership on AI (PAI). I’m a firm believer in the value of multidisciplinary groups; I lead the Markets Innovation Group (MIG) at A&O Shearman, which is made up of lawyers, technologists and developers, who work hand-in-glove to disrupt the legal industry. As we build and deploy AI systems (e.g. ContractMatrix, which we developed and launched in collaboration with Microsoft and Harvey, making us the first law firm globally to partner with Microsoft to develop an AI tool), our legal advice regarding AI is informed by deep technical expertise and an understanding of what works in practice. A similar, interdisciplinary approach is needed when developing AI policy; it can’t be done by policymakers in isolation. It’s crucial that the views, needs and concerns of all stakeholders impacted by AI adoption are considered, and by convening those with diverse perspectives, including individuals from academia, civil society and industry, through a coalition such as PAI, we have a shot at realising this. Meaningful public engagement on AI is also vital if we’re to ensure that all voices, including those from underrepresented groups, are heard. 

TK: What do you see as the most pressing issue in AI today, and how can we address it?

DW: The most pressing issue in AI today is ensuring that AI systems are used in a way that is responsible, compliant and trustworthy. As AI systems become increasingly advanced and embedded within our society, the likelihood of risks associated with AI materialising – such as AI systems perpetuating or even exacerbating inequalities, making decisions that are difficult to explain or understand, and operating in ways that are not aligned with human values or intentions – will become greater. To help prevent this, organisations need to develop and implement responsible AI governance and have risk mitigation tools at their disposal. At A&O Shearman, I established, and lead, the firm’s AI advisory practice, as well as an AI working group for clients, to help organisations harness AI responsibly. Through our AI working group, we’re currently advising 80+ organisations on safe AI development and deployment. To date, we’ve helped clients enable transparency and fairness when using AI, mitigate IP infringement and litigation risks, navigate legal issues related to licensing AI systems, and much more. 

TK: As more non-tech industries adopt AI, what are some ethical challenges that companies should be prepared to address, and how can they navigate these challenges effectively?

DW: AI adoption raises a host of ethical challenges relating to bias and fairness, privacy, transparency, explainability and accountability. These challenges often intersect and overlap with legal risks associated with AI deployment. For example, bias is both a legal risk, if it leads to discrimination, and an ethical challenge – how do we enable fairness when using AI systems, beyond what is legally required? We had to grapple with both the ethical challenges and legal risks associated with AI deployment when we rolled out generative AI at enterprise level in December 2022, which made the firm the first to do so globally. Core to our approach was starting with a sandbox; I strongly recommend other organisations do this. We gave a select group access to Harvey, the LLM that has been fine-tuned for law, in a controlled setting. During this time, we identified use cases and risks, put governance mechanisms and a robust feedback loop in place, and conducted a rigorous InfoSec and tech architecture alignment programme, to ensure the AI system would be used in a way that was both compliant and ethical.

TK: What role do you see legal experts playing in shaping AI governance, particularly for industries that are just beginning to integrate these technologies?

DW: Lawyers have a crucial role to play in shaping AI governance. We can develop governance mechanisms that enable organisations to comply with existing legal requirements and regulatory standards, which vary significantly across different jurisdictions. Our market-leading, multijurisdictional AI advisory practice is also made up of experts across the full spectrum of risk management. In the absence of comprehensive legal frameworks on AI, we can help organisations to rationalise the legal, ethical and operational risks associated with AI development and deployment, in a way that unlocks value for their use case; this can be especially useful, and necessary, for those in the early stages of adoption. We’ve helped clients develop ethical AI principles, draft clear rules for AI use, as well as create risk assessment and wider governance frameworks. We take a holistic approach when developing AI governance mechanisms, drawing on the expertise of our in-house data scientists and developers where relevant, to ensure the advice we’re providing is technically, as well as legally and ethically, sound.