
Allan Dafoe

Affiliate

https://www.allandafoe.com/

Allan is the founder and former Director of GovAI. He now serves as Director of Frontier Safety and Governance at Google DeepMind.

Featured Publications

Introductions

Democratising AI: Multiple Meanings, Goals, and Methods

This paper outlines four different notions of “AI democratisation”, three of which are used almost synonymously with “increasing accessibility”. The democratisation of AI use and the democratisation of AI development are about...

Political Science and International Relations

Safety-Performance Tradeoff Model Web App

This web app is a tool for exploring the dynamics of risky AI competition: the safety-performance tradeoff. Will AI safety breakthroughs always lead to safer AI systems? Before long, we may be capable of creating AI systems...

Survey Research

Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers

Advances in artificial intelligence (AI) are shaping modern life, from transportation and health care to science, finance, and the military. Forecasts of AI development could help improve policy- and decision-making. We report...

Political Science and International Relations

Safety Not Guaranteed: International Strategic Dynamics of Risky Technology Races

The great powers appear to be entering an era of heightened competition to master security-relevant technologies in areas such as AI. This is concerning because deploying new technologies can create substantial...

Political Science and International Relations

The Suffragist Peace

Preferences for conflict and cooperation are systematically different for men and women: across a variety of contexts, women generally prefer more peaceful options and are less supportive of making threats and initiating...

Political Science and International Relations

The Offense-Defense Balance and The Costs of Anarchy: When Welfare Improves Under Offensive Advantage

A large literature has argued that offensive advantage makes states worse off because it can induce a security dilemma, preemption, costly conflict, and arms races. We argue instead that state welfare is U-shaped under...

AI Progress and Forecasting

Emerging Institutions for AI Governance: AI Governance in 2020

Much AI governance work involves preparation for a constitutional moment: an opportunity to create long-lasting, decision-shaping, institutions. Doing this well is a formidable task. It requires a fine balance. Institutions...

Survey Research

Skilled and Mobile: Survey Evidence of AI Researchers' Immigration Preferences

Countries, companies, and universities are increasingly competing over top-tier artificial intelligence (AI) researchers. Where are these researchers likely to immigrate and what affects their immigration decisions? We...

Survey Research

Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers

Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including taking action against what they perceive to be unethical uses of AI...

Political Science and International Relations

The Logic of Strategic Assets: From Oil to AI

What resources and technologies are strategic? Policy and theoretical debates often focus on this question, since the “strategic” designation yields valuable resources and elevated attention. The ambiguity of the very concept...

Political Science and International Relations

Engines of Power: Electricity, AI, and General-Purpose Military Transformations

Major theories of military innovation focus on relatively narrow technological developments, such as nuclear weapons or aircraft carriers. Arguably the most profound military implications of technological change, however, come...

Political Science and International Relations

Coercion and the Credibility of Assurances

What makes coercion succeed? For most international relations scholars, the answer is credible threats. Yet scholars have neglected a second key component of successful coercion: credible assurances. This article makes two...

Political Science and International Relations

Coercion and Provocation

Threats and force, by increasing expected costs, should reduce the target’s resolve. However, they often seem to increase resolve. We label this phenomenon provocation. We review instances of apparent provocation in interstate...

Political Science and International Relations

Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining

Reputations for resolve are said to be one of the few things worth fighting for, yet they remain inadequately understood. Discussions of reputation focus almost exclusively on first-order belief change—A stands firm, B updates...

Macrostrategy

Cooperative AI: Machines Must Learn to Find Common Ground

To help humanity solve fundamental problems of cooperation, scientists need to reconceive artificial intelligence as deeply social. Artificial-intelligence assistants and recommendation algorithms interact with billions of peop...

Other Technologies

Contact Tracing Apps Can Help Stop Coronavirus. But They Can Hurt Privacy

Governments around the world are busy talking about the critical next steps — how to keep people safe from the coronavirus as economies start to reopen. In the United States and in Europe, this involves officials looking at how...

Security

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American S...

Featured Analysis

Research Posts

Preliminary Survey Results: US and European Publics Overwhelmingly and Increasingly Agree That AI Needs to Be Managed Carefully

Preliminary findings from a survey of over 13,000 people in 11 countries show that an overwhelming majority in Europe and the US agree that there is a need for careful management of AI.
