
Markus Anderljung

Director of Policy and Research

markus.anderljung@governance.ai
https://www.markusanderljung.com/

Markus leads research at GovAI with a focus on how governments, AI companies, and other stakeholders can manage the transition to a world with advanced AI. He is currently serving as one of the Vice-Chairs drafting the EU's Code of Practice for General Purpose AI, and was previously seconded to the UK Cabinet Office as a Senior AI Policy Specialist, advising on the UK's regulatory approach to AI. He is also an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory's Expert Group on AI Futures. His research has been published in leading journals, including Science and Nature Machine Intelligence, and presented to organizations such as the Brookings Institution and the Bipartisan Policy Center.


Featured Publications

AI Regulation

In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI

The widespread deployment of general-purpose AI (GPAI) systems introduces significant new risks. Yet the infrastructure, practices, and norms for reporting flaws in GPAI systems remain seriously underdeveloped...

AI Regulation

On Regulating Downstream AI Developers

Downstream developers - actors who fine-tune or otherwise modify foundational models - can create or amplify risks from foundation models by improving their capabilities or compromising safety features...

Technical AI Governance

Infrastructure for AI Agents

Increasingly many AI systems can plan and execute interactions in open-ended environments, such as making phone calls or buying online goods. As developers grow the space of tasks that such AI agents can accomplish...

Technical AI Governance

IDs for AI Systems

AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications...

Technical AI Governance

Visibility into AI Agents

Increased delegation of commercial, scientific, governmental, and personal activities to AI agents—systems capable of pursuing complex goals with limited supervision—may...

AI Regulation

Safety Cases for Frontier AI

As frontier artificial intelligence (AI) systems become more capable, it becomes more important that developers can explain why their systems are sufficiently safe. One way to do so is via safety cases: reports that...

AI Regulation

A Grading Rubric for AI Safety Frameworks

Over the past year, artificial intelligence (AI) companies have been increasingly adopting AI safety frameworks. These frameworks outline how companies intend to keep the potential risks associated with developing and deploying...

AI Regulation

From Principles to Rules: A Regulatory Approach for Frontier AI

Several jurisdictions are starting to regulate frontier artificial intelligence (AI) systems, i.e. general-purpose AI systems that match or exceed the capabilities present in the most advanced systems. To reduce risks...

AI Regulation

Risk Thresholds for Frontier AI

Frontier artificial intelligence (AI) systems could pose increasing risks to public safety and security. But what level of risk is acceptable? One increasingly popular approach is to...

Policy Advice and Opinion

Tort Law and Frontier AI Governance

Matthew van der Merwe, Ketan Ramakrishnan, and Markus Anderljung recently published this piece on Lawfare.

AI Regulation

Societal Adaptation to Advanced AI

Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. This approach becomes less feasible as the number of developers of advanced AI grows.

Policy Advice and Opinion

Response to the RFI Related to NIST's Assignments Under the Executive Order Concerning AI

We welcome the opportunity to respond to the Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning AI. We offer the following submission for your...

AI Regulation

Computing Power and the Governance of Artificial Intelligence

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals, but could also introduce new risks.

AI Regulation

Frontier AI Regulation: Safeguards Amid Rapid Progress

To deal with the risks of these high-compute frontier AI systems, we must govern not only how they can be used but also how they are developed and made available to people in the first place.

Policy Advice and Opinion

Towards Publicly Accountable Frontier LLMs

With the increasing integration of frontier large language models (LLMs) into society and the economy, decisions related to their training, deployment, and use have far-reaching implications...

Policy Advice and Opinion

How to Prevent an AI Catastrophe

Markus Anderljung and Paul Scharre recently published this piece in Foreign Affairs.

AI Regulation

Frontier AI Regulation: Managing Emerging Risks to Public Safety

Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term “frontier AI” models — highly capable...

AI Regulation

National Priorities for Artificial Intelligence (Response to the OSTP Request for Information)

We welcome the opportunity to respond to the OSTP Request for Information on National Priorities for Artificial Intelligence and look forward to future opportunities to provide additional input. We offer the following...

AI Regulation

Response to the NTIA AI Accountability Policy

We welcome the opportunity to respond to the NTIA’s AI Accountability Policy Request for Comment and look forward to future opportunities to provide additional input. We offer the following...

Survey Research

Towards Best Practices in AGI Safety and Governance

A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)—AI systems that achieve or exceed human performance...

AI Regulation

Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?

Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to be used to automate fraudulent activities, violate human rights, create...

AI Regulation

Response to the UK's Future of Compute Review

We are pleased to see the publication of the UK’s Future of Compute Review. However, we also believe there is a significant missed opportunity: the review does not address how to ensure that compute is used responsibly or how...

Survey Research

Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers

Advances in artificial intelligence (AI) are shaping modern life, from transportation, health care, science, finance, to the military. Forecasts of AI development could help improve policy- and decision making. We report...

Law and Policy

The Brussels Effect and Artificial Intelligence

In this report, we ask whether the EU’s upcoming regulation for AI will diffuse globally, producing a so-called “Brussels Effect”.

Policy Advice and Opinion

GovAI Response to the Future of Compute Review - Call for Evidence

We welcome the opportunity to respond to the Future of Compute Review’s call for evidence...Our response focuses on the future of compute used for Artificial Intelligence (AI). In particular, we emphasise the risks posed by...

Policy Advice and Opinion

Submission to the Request for Information (RFI) on Implementing Initial Findings and Recommendations of the NAIRR Task Force

This report gives comments on the interim report of the National AI Research Resource (NAIRR) Task Force. The key recommendations are: provide researchers with access to pre-trained models by offering infrastructure...

Policy Advice and Opinion

Submission to the NIST AI Risk Management Framework

This report gives comments on the Initial Draft of the NIST AI Risk Management Framework (AI RMF). The key recommendations are to put more emphasis on low-probability, high-impact risks, especially catastrophic risks to...

Policy Advice and Opinion

Filling gaps in trustworthy development of AI

The range of application of artificial intelligence (AI) is vast, as is the potential for harm. Growing awareness of potential risks from AI systems has spurred action to address those risks while eroding confidence in AI...

Policy Advice and Opinion

Futureproof: Artificial Intelligence Chapter

Out of the wreckage of the Second World War, the UK transformed itself. It rebuilt its shattered economy. It founded the NHS. It created national insurance. And it helped establish international institutions like the United...

Survey Research

Skilled and Mobile: Survey Evidence of AI Researchers' Immigration Preferences

Countries, companies, and universities are increasingly competing over top-tier artificial intelligence (AI) researchers. Where are these researchers likely to immigrate and what affects their immigration decisions? We...

Survey Research

Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers

Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including taking action against what they perceive to be unethical uses of AI...

Featured Analysis

Commentary

What Success Looks Like for the French AI Action Summit

Success at next month’s global AI summit in Paris hinges on three key outcomes: 1) ensuring the AI Summit Series' future, 2) securing senior US and Chinese engagement, and 3) demonstrating...

Research Posts

Preventing Harms From AI Misuse

Risks from the misuse of AI will continue to grow. While there are multiple ways to reduce misuse risks, restricting access to some AI capabilities will likely become increasingly necessary.

Research Posts

Preliminary Survey Results: US and European Publics Overwhelmingly and Increasingly Agree That AI Needs to Be Managed Carefully

Preliminary findings from a survey of over 13,000 people in 11 countries show that overwhelming majorities in Europe and the US believe there is a need for careful management of AI.

Commentary

Goals for the Second AI Safety Summit

The second AI Safety Summit is an opportunity to reinforce the world’s commitment to an ambitious summit series.

Research Posts

Frontier AI Regulation

The next generation of foundation models could threaten public safety. We explore the challenges of regulating frontier AI models and the building blocks of a potential regulatory regime.

Research Posts

Computing Power and the Governance of AI

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals...

Research Posts

New Survey: Broad Expert Consensus for Many AGI Safety and Governance Practices

Our survey of 51 leading experts from AGI labs, academia, and civil society found overwhelming support for many AGI safety and governance practices.

Research Posts

Compute Funds and Pre-trained Models

The US National AI Research Resource should provide structured access to models, not just data and compute.

©2024 Centre for the Governance of AI, Inc.
Centre for the Governance of AI, Inc. (GovAI US) is a section 501(c)(3) organization in the USA (EIN 99-4000294). Its subsidiary, Centre for the Governance of AI (GovAI UK) is a company limited by guarantee registered in England and Wales (with registered company number 15883729).
Privacy | Cookie Policy | contact@governance.ai