
Robert Trager

International Governance Lead

robert.trager@governance.ai

Robert is Director of the Oxford Martin AI Governance Initiative at the University of Oxford and a Senior Research Fellow at Oxford’s Blavatnik School of Government. He is a recognised expert in the international governance of emerging technologies, diplomatic practice, institutional design, and technology regulation. He regularly advises government and industry leaders on these topics. Much of his recent research has focused on international governance regimes for AI. Previously, he was Professor of Political Science at the University of California, Los Angeles.


Featured Publications

Political Science and International Relations

Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance

This white paper investigates practical remedies to increase voice in and access to AI governance and capabilities for the Global AI Majority, while addressing the security and commercial concerns of frontier AI states.

AI Regulation

Governing Through the Cloud

Compute providers can play an essential role in a regulatory ecosystem via four key capacities: as securers, safeguarding AI systems and critical infrastructure; as record keepers, enhancing visibility for policymakers...

AI Lab Policy

Structured Access for Third-Party Research on Frontier AI Models

Recent releases of frontier artificial intelligence (AI) models have largely been gated, providing the benefit of limiting the proliferation of increasingly powerful dual-use capabilities. However, such release strategies...

International institutions and agreements

International Governance of Civilian AI

This report describes trade-offs in the design of international governance arrangements for civilian artificial intelligence (AI) and presents one approach in detail.

Political Science and International Relations

Safety-Performance Tradeoff Model Web App

This web app is a tool for exploring the dynamics of risky AI competition: the safety-performance tradeoff. Will AI safety breakthroughs always lead to safer AI systems? Before long, we may be capable of creating AI systems...

Political Science and International Relations

The Security Governance Challenge of Emerging Technologies

In recent decades, governments have been ineffectual at regulating dangerous emerging technologies like lethal autonomous weapons and synthetic biology. In today’s era of great power competition...

Political Science and International Relations

Deliberating Autonomous Weapons

Stuart Russell has had a seminal influence on many people with his writings on aligning artificial intelligence with human values and regulating autonomous weapons systems. His work has highlighted the difficulties...

Political Science and International Relations

Lethal Autonomous Weapons Need to be Regulated - But Not the Way Advocates Say

The development of lethal autonomous weapons systems (LAWS) is no longer just a concept of science fiction. With the potential of these weapons to cause destruction, there is an urgent need for an international approach to...

Political Science and International Relations

Information Hazards in Races for Advanced Artificial Intelligence

We study how the information environment affects races to implement a powerful new technology such as advanced artificial intelligence. In particular, we analyse a model in which a potentially unsafe technology may cause...

Political Science and International Relations

Safety Not Guaranteed: International Strategic Dynamics of Risky Technology Races

The great powers appear to be entering an era of heightened competition to master security-relevant technologies in areas such as AI. This is concerning because deploying new technologies can create substantial...

Political Science and International Relations

The IAEA Solution: Knowledge Sharing to Prevent Dangerous Technology Races

The world appears to be entering an era of heightened great power technological competition in areas such as artificial intelligence. This is concerning because deploying new technologies often involves private benefits and...

Security

Autonomous Weapons And Coercive Threats

Governments across the globe have been quick to adapt developments in artificial intelligence to military technologies. Prominent among the many changes recently introduced, autonomous weapon systems pose important new...

Political Science and International Relations

The Suffragist Peace

Preferences for conflict and cooperation are systematically different for men and women: across a variety of contexts, women generally prefer more peaceful options and are less supportive of making threats and initiating...

Political Science and International Relations

The Offense-Defense Balance and The Costs of Anarchy: When Welfare Improves Under Offensive Advantage

A large literature has argued that offensive advantage makes states worse off because it can induce a security dilemma, preemption, costly conflict, and arms races. We argue instead that state welfare is U-shaped under...

Featured Analysis

Commentary

What Success Looks Like for the French AI Action Summit

Success at next month’s global AI summit in Paris hinges on three key outcomes: 1) ensuring the AI Summit Series' future, 2) securing senior US and Chinese engagement, and 3) demonstrating...

Research Posts

Proposing an International Governance Regime for Civilian AI

A new report proposes creating an international body that can certify compliance with international standards on civilian AI.

Commentary

Goals for the Second AI Safety Summit

The second AI Safety Summit is an opportunity to reinforce the world’s commitment to an ambitious summit series.

Research Posts

Frontier AI Regulation

The next generation of foundation models could threaten public safety. We explore the challenges of regulating frontier AI models and the building blocks of a potential regulatory regime.

Research Posts

Computing Power and the Governance of AI

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals...

©2024 Centre for the Governance of AI, Inc.
Centre for the Governance of AI, Inc. (GovAI US) is a section 501(c)(3) organization in the USA (EIN 99-4000294). Its subsidiary, Centre for the Governance of AI (GovAI UK) is a company limited by guarantee registered in England and Wales (with registered company number 15883729).
Privacy | Cookie Policy | contact@governance.ai