Webinar: How Should Frontier AI Models Be Regulated?

In the summer of 2023, GovAI hosted a webinar focused on the white paper “Frontier AI Regulation: Managing Emerging Risks to Public Safety.” The paper's lead authors – GovAI's Head of Policy Markus Anderljung and OpenAI's Governance Research Scientist Cullen O'Keefe – were joined by Hugging Face's Irene Solaiman and fast.ai's Jeremy Howard to discuss the paper. The event was hosted by GovAI Director Ben Garfinkel and facilitated by GovAI's Research Manager Emma Bluemke.


This webinar centred on the recently published white paper “Frontier AI Regulation: Managing Emerging Risks to Public Safety.” The paper argues that cutting-edge AI models (e.g. GPT-4, PaLM 2, Claude, and beyond) may soon have capabilities that could pose severe risks to public safety, and that these models therefore require regulation. It describes the building blocks of such a regulatory regime and proposes some initial safety standards for frontier AI development and deployment. You can find a summary of the paper here.

We think the decision of whether and how to regulate frontier AI models is a high-stakes one. As such, the webinar featured a frank discussion of the upsides and downsides of the white paper's proposals. After a brief summary of the paper by Cullen O'Keefe and Markus Anderljung, two discussants – Irene Solaiman and Jeremy Howard – offered comments, followed by an open discussion.

During the webinar, the discussants mention Jeremy's piece "AI Safety and the Age of Dislightenment", which you can read here.

Speakers:

Irene Solaiman is an AI safety and policy expert. She is Policy Director at Hugging Face, where she conducts social impact research and leads public policy. She is a Tech Ethics and Policy Mentor at Stanford University and an International Strategy Forum Fellow at Schmidt Futures. Irene also advises responsible AI initiatives at the OECD and IEEE. Her research includes AI value alignment, responsible releases, and combating misuse and malicious use.

Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible, and an honorary professor at the University of Queensland. Previously, Jeremy was a Distinguished Research Scientist at the University of San Francisco, where he was the founding chair of the Wicklow Artificial Intelligence in Medical Research Initiative.

Cullen O’Keefe currently works as a Research Scientist in Governance at OpenAI. Cullen is a Research Affiliate with the Centre for the Governance of AI; Founding Advisor and Research Affiliate at the Legal Priorities Project; and a VP at the O’Keefe Family Foundation. Cullen's research centres on the law, policy, and governance of advanced artificial intelligence, with a focus on preventing severe harms to public safety and global security.

Markus Anderljung is Head of Policy at GovAI, an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory's Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist and served as GovAI’s Deputy Director. Markus is based in San Francisco.
