AI has the potential to be a transformative technology. Continued progress will likely bring profound benefits, including economic growth, medical advancements, and other changes that could support human flourishing and security. However, this progress may also bring significant risks.
Government bodies, technology companies, and other institutions are facing increasingly difficult decisions about how to respond to the challenges and opportunities presented by AI. We aim to help in two key ways. First, we produce relevant research to support informed decision-making. Second, we run fellowships and visitor programs to address talent gaps.
You can read more about our mission and how we pursue it in our most recent annual report.
GovAI was founded at Yale University in 2016, with Prof. Allan Dafoe as its Founding Director. We were one of the first academic groups to study the policy implications of AI progress. We became an academic center at the University of Oxford in 2018 and then spun out to become a non-profit in 2021.
GovAI is now led by Ben Garfinkel. Our team and affiliate community possess expertise in a wide variety of domains, including AI development best practices, risk analysis, the economics of AI, US-China relations, AI regulation, and AI progress forecasting. You can read more about our governance structure and approach to conflicts of interest here.
Our researchers have produced work on a broad range of subjects. However, our largest focus has been on general-purpose AI systems and their implications for security.
Researchers at GovAI seek to understand how general-purpose AI may both contribute to and mitigate risks in domains such as cybersecurity, biosecurity, and economic security. Our team examines both the present-day implications of these systems and those they may have if progress continues. Ultimately, most of our research relates to the following themes:
Risk Analysis: What is the state of evidence regarding different hypothesized risks from AI – and what further information would be most useful for reducing uncertainty? At the same time, what role can AI progress and adoption play in mitigating risks?
Best Practices: How can AI companies make responsible development and deployment decisions, successfully managing risks without hindering innovation and adoption?
Public Policy: How can governments use the tools at their disposal to ensure that AI companies adopt responsible development practices, that unnecessary barriers to innovation and adoption are avoided, and that society is made resilient to large-scale changes from AI?
Our researchers have provided knowledge and assistance to decision-makers in government, industry, and civil society. Our alumni have gone on to policy roles in government; top AI labs, including Google DeepMind, OpenAI, and Anthropic; and think tanks such as the Center for Strategic and International Studies, the Center for Security and Emerging Technology, and the Tony Blair Institute. Our initial research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our research developing the framework of "cooperative AI" led to the creation of a $15 million philanthropic foundation. We made significant early contributions to the ongoing public discussion of the security implications of AI.
Our researchers have published in leading journals and conferences, including Science and NeurIPS. We have published commentary in venues such as War on the Rocks, The Washington Post, and Lawfare. Our work has also been covered by publications such as The New York Times, MIT Technology Review, and the BBC.