AI Rules of the Road in 2024
Just as automobiles are an everyday part of our lives, so now is AI. Whether we are searching maps on our phones, typing questions into an online customer service bot, or prompting a generative AI application, all of us, in some way, are interacting daily and repeatedly with AI systems.
And, just as with road safety, leaders of organizations building and deploying AI must learn to drive defensively to accelerate their businesses with AI and reach new markets. The best drivers learn to anticipate danger, proactively manage the mistakes of others, and respond to road conditions in order to arrive at their destination quickly and safely. The best AI leaders must do the same.
While some questions remain and new policy action is required, the work of the global AI research community has shown broadly what good looks like when it comes to AI safety. Lawmakers, researchers and civil society organizations must continue to focus on AI safety. Importantly, it is now time for leaders in business, from start-ups to multinationals, to get their AI safety roadmap in place.
2024 Directions in AI
In a new Deloitte report, The State of Generative AI in the Enterprise: Now Decides Next, 79% of the surveyed corporate leaders said that they expected generative AI to substantially transform their company and industry over the next three years, and reported a combination of excitement and concern.
This shouldn’t be a surprise. 2023 was quite a year for AI. We saw a public debate about competing definitions and timelines for advanced intelligence systems. Generative AI beta and research releases became retail and enterprise products and services, while lawmakers updated old tools and developed new ones in their policy toolkits. Nonprofit organizations, such as the Partnership on AI (PAI), released clear guidance for ethical leaders in business with recommended responsible AI practices.
As the Deloitte survey shows us, the imperative for companies to act could not be clearer.
Galvanized by the tumult at OpenAI, boards are also gearing up. With the power to hold senior management accountable for innovating responsibly, directors should expect to be in regular conversation with the executive team about their technology plan. CEOs will need a clear, well-communicated and regularly updated AI safety roadmap that has the buy-in and support of their board.
For companies, the EU AI Act and the comprehensive action plan emerging from the US Executive Order will also drive increased scrutiny.
It will be a particularly challenging year for policymakers in the dozens of countries holding elections this year, with more than half of the world’s people going to the polls. Policymakers in these countries will be focused on safeguarding a credible electoral process despite the potential for disruptive, high-quality deepfakes. At the same time, many continue to enact new laws to protect citizens from online harms while catalyzing competitiveness.
We will need business leaders to do their part. From model deployers to synthetic media startups to content platforms to news media, businesses must take action to ensure the health of our information ecosystems. To get started, they can deploy clear technical standards, disclose AI use proactively and respond to malicious acts quickly and clearly.
No longer just the concern of gig workers, generative AI’s impact on new categories of workers is also drawing attention. The importance of data enrichment workers in the AI value chain, often based outside of the West, came to light with the need for human intelligence to test and train these new foundation models. Shutting down Hollywood film and TV productions for four months, the SAG-AFTRA strike highlighted the concerns of creative workers about the potential impact of AI on jobs. The future of work is arriving, and leaders everywhere need to make informed choices, with worker input, when introducing automation into the workplace.
With all of this, AI leaders risk becoming distracted drivers rather than defensive ones. All the more reason to get started.
A Roadmap for AI Safety
One of the biggest challenges for leaders will be to keep up to date on the evidence emerging on the societal risks and harms of AI as they assess their organization’s risk tolerance for novel innovation. This is why PAI has built communities of knowledge and action to meet today’s challenges.
For a primer on AI safety, see PAI’s Guidance for Safe Foundation Model Deployment. While developed in the first instance for AI model providers, the guidance provides both a comprehensive overview of how to scale oversight and a holistic approach to safety that includes bias, overreliance on AI systems, privacy, and worker treatment.
Leaders can get started by mapping the ecosystem of organizations involved in their AI pipeline, from compute and hardware suppliers to cloud, data and model providers, and, ultimately, application developers, consumers and other affected stakeholders. Where are the points of intervention for senior-level oversight, iterative testing and evaluation, and stronger controls?
Documentation is key to this analysis, for internal decision-making and monitoring as well as external auditing and accountability. Documentation best practice requires action across the lifecycle of AI systems, from pre-deployment data security and research through post-deployment monitoring and incident reporting.
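As one way to picture what lifecycle documentation might look like in practice, here is a minimal, hypothetical sketch of a shared documentation trail. The record fields and example entries are illustrative assumptions, not a standard schema or any organization's actual practice.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical lifecycle documentation record. Field names are
# illustrative assumptions, not an established documentation standard.
@dataclass
class LifecycleRecord:
    system_name: str
    stage: str                      # e.g. "pre-deployment" or "post-deployment"
    activity: str                   # e.g. "data security review", "incident report"
    owner: str                      # accountable team or role
    logged_on: date
    findings: list = field(default_factory=list)

# One trail serves both internal monitoring and external auditing.
log: list = []

def record(entry: LifecycleRecord) -> None:
    """Append an entry to the shared documentation trail."""
    log.append(entry)

# Example pre-deployment entry (hypothetical system and findings).
record(LifecycleRecord(
    system_name="support-chatbot",
    stage="pre-deployment",
    activity="data security review",
    owner="ML platform team",
    logged_on=date(2024, 1, 15),
    findings=["training data contains no customer PII"],
))

# Example post-deployment entry: incident reporting continues after launch.
record(LifecycleRecord(
    system_name="support-chatbot",
    stage="post-deployment",
    activity="incident report",
    owner="trust and safety team",
    logged_on=date(2024, 3, 2),
    findings=["model over-refused benign queries after update"],
))

# Reviewers or auditors can then filter the trail by lifecycle stage.
post_deployment = [e for e in log if e.stage == "post-deployment"]
```

The design choice worth noting is that pre- and post-deployment activities land in the same trail, so the accountability story spans the whole lifecycle rather than stopping at launch.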
Navigating to Our Desired Destination
Most importantly, defensive driving requires that you get behind the wheel and go somewhere. For 2024, the global imperatives are clear. Starting with the UN’s 17 Sustainable Development Goals, we need to set our AI ingenuity and expectations high and engage creatively and inclusively with people and communities to get there.
In 2023, the urgency for corporate accountability, policy maker preparedness and public understanding of AI rose to new heights. If last year was our collective wake-up call, then 2024 must be our call to action. It is time for all of us to buckle up and accelerate our efforts.