
Jonas Schuett

Senior Research Fellow

jonas.schuett@governance.ai

Jonas leads our workstream on risk management. Before joining GovAI, he advised the UK Government on AI regulation, interned with Google DeepMind’s Public Policy team, and helped found the Institute for Law and AI (LawAI), where he remains a non-executive board member. He holds a law degree from Heidelberg University and is currently completing his PhD in law at Goethe University Frankfurt.


Featured Publications

AI Regulation

On Regulating Downstream AI Developers

Downstream developers (actors who fine-tune or otherwise modify foundation models) can create or amplify risks from foundation models by improving their capabilities or compromising safety features...

AI Regulation

Safety Case Template for Frontier AI: A Cyber Inability Argument

Frontier artificial intelligence (AI) systems pose increasing risks to society, making it essential for developers to provide assurances about their safety. One approach to offering such assurances is through a safety case...

AI Regulation

Safety Cases for Frontier AI

As frontier artificial intelligence (AI) systems become more capable, it becomes more important that developers can explain why their systems are sufficiently safe. One way to do so is via safety cases: reports that...

AI Regulation

A Grading Rubric for AI Safety Frameworks

Over the past year, artificial intelligence (AI) companies have been increasingly adopting AI safety frameworks. These frameworks outline how companies intend to keep the potential risks associated with developing and deploying...

AI Regulation

From Principles to Rules: A Regulatory Approach for Frontier AI

Several jurisdictions are starting to regulate frontier artificial intelligence (AI) systems, i.e. general-purpose AI systems that match or exceed the capabilities present in the most advanced systems. To reduce risks...

AI Regulation

Risk Thresholds for Frontier AI

Frontier artificial intelligence (AI) systems could pose increasing risks to public safety and security. But what level of risk is acceptable? One increasingly popular approach is to...

Policy Advice and Opinion

Response to the RFI Related to NIST's Assignments Under the Executive Order Concerning AI

We welcome the opportunity to respond to the Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning AI. We offer the following submission for your...

AI Lab Policy

Auditing Large Language Models: A Three-Layered Approach

Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing...

AI Lab Policy

Three Lines of Defense Against Risks from AI

Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management.

AI Lab Policy

Coordinated Pausing: An Evaluation-Based Coordination Scheme for Frontier AI Developers

This paper proposes an evaluation-based coordination scheme for situations in which frontier AI developers discover that their models have certain dangerous capabilities.

AI Lab Policy

Open-Sourcing Highly Capable Foundation Models

We evaluate the risks and benefits of open-sourcing, as well as alternative methods for pursuing open-source objectives.

AI Lab Policy

Risk Assessment at AGI Companies: A Review of Popular Risk Assessment Techniques From Other Safety-Critical Industries

There are increasing concerns that AGI could pose catastrophic risks. In light of this, AGI companies need to drastically improve their risk management practices. This paper reviews popular risk assessment techniques...

AI Regulation

National Priorities for Artificial Intelligence (Response to the OSTP Request for Information)

We welcome the opportunity to respond to the OSTP Request for Information on National Priorities for Artificial Intelligence and look forward to future opportunities to provide additional input. We offer the following...

AI Regulation

Response to the NTIA AI Accountability Policy

We welcome the opportunity to respond to the NTIA’s AI Accountability Policy Request for Comment and look forward to future opportunities to provide additional input. We offer the following...

Survey Research

Towards Best Practices in AGI Safety and Governance

A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)—AI systems that achieve or exceed human performance...

AI Lab Policy

How to Design an AI Ethics Board

Organizations that develop and deploy artificial intelligence (AI) systems need to take measures to reduce the associated risks. In this paper, we examine how AI companies could design an AI ethics board in a way that reduces...

AI Regulation

Risk Management in the Artificial Intelligence Act

The proposed EU AI Act is the first comprehensive attempt to regulate AI in a major jurisdiction. This article analyses Article 9, the key risk management provision in the AI Act. It gives an overview of the regulatory...

Policy Advice and Opinion

Submission to the NIST AI Risk Management Framework

This report gives comments on the Initial Draft of the NIST AI Risk Management Framework (AI RMF). The key recommendations are to put more emphasis on low-probability, high-impact risks, especially catastrophic risks to...

Featured Analysis

Research Posts

Frontier AI Regulation

The next generation of foundation models could threaten public safety. We explore the challenges of regulating frontier AI models and the building blocks of a potential regulatory regime.

Research Posts

New Survey: Broad Expert Consensus for Many AGI Safety and Governance Practices

Our survey of 51 leading experts from AGI labs, academia, and civil society found overwhelming support for many AGI safety and governance practices.
