
Lennart Heim

Adjunct Fellow

lennart.heim@governance.ai
https://heim.xyz/

Lennart Heim is an associate information scientist at RAND and a professor of policy analysis at the Pardee RAND Graduate School.

Featured Publications

Political Science and International Relations

Options and Motivations for International AI Benefit Sharing

Advanced AI systems could generate substantial economic and other societal benefits, but these benefits may not be widely shared by default. For a range of reasons, a number of prominent actors and institutions have called for...

Technical AI Governance

IDs for AI Systems

AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications...

Technical AI Governance

Visibility into AI Agents

Increased delegation of commercial, scientific, governmental, and personal activities to AI agents—systems capable of pursuing complex goals with limited supervision—may...

AI Regulation

Training Compute Thresholds: Features and Functions in AI Regulation

Regulators in the US and EU are using thresholds based on training compute (the number of computational operations used in training) to identify general-purpose artificial intelligence (GPAI) models that may pose risks of...

AI Regulation

Societal Adaptation to Advanced AI

Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. This approach becomes less feasible as the number of developers of advanced AI grows.

AI Regulation

Increased Compute Efficiency and the Diffusion of AI Capabilities

Training advanced AI models requires large investments in computational resources, or compute. Yet, as hardware innovation reduces the price of compute and algorithmic advances make its use more efficient, the cost of training...

AI Regulation

Governing Through the Cloud

Compute providers can play an essential role in a regulatory ecosystem via four key capacities: as securers, safeguarding AI systems and critical infrastructure; as record keepers, enhancing visibility for policymakers...

AI Regulation

Computing Power and the Governance of Artificial Intelligence

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals, but could also introduce new risks.

Law and Policy

Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers

To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes.

International Institutions and Agreements

International Governance of Civilian AI

This report describes trade-offs in the design of international governance arrangements for civilian artificial intelligence (AI) and presents one approach in detail.

AI Regulation

National Priorities for Artificial Intelligence (Response to the OSTP Request for Information)

We welcome the opportunity to respond to the OSTP Request for Information on National Priorities for Artificial Intelligence and look forward to future opportunities to provide additional input. We offer the following...

AI Regulation

Response to the NTIA AI Accountability Policy

We welcome the opportunity to respond to the NTIA’s AI Accountability Policy Request for Comment and look forward to future opportunities to provide additional input. We offer the following...

Survey Research

Towards Best Practices in AGI Safety and Governance

A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)—AI systems that achieve or exceed human performance...

AI Regulation

Response to the UK's Future of Compute Review

We are pleased to see the publication of the UK’s Future of Compute Review. However, we also believe there is a significant missed opportunity: the review does not address how to ensure that compute is used responsibly or how...

Policy Advice and Opinion

GovAI Response to the Future of Compute Review - Call for Evidence

We welcome the opportunity to respond to the Future of Compute Review’s call for evidence...Our response focuses on the future of compute used for Artificial Intelligence (AI). In particular, we emphasise the risks posed by...

Policy Advice and Opinion

Submission to the Request for Information (RFI) on Implementing Initial Findings and Recommendations of the NAIRR Task Force

This report offers comments on the interim report of the National AI Research Resource (NAIRR) Task Force. The key recommendations are: provide researchers with access to pre-trained models by providing infrastructure...

Featured Analysis

Research Posts

What Increasing Compute Efficiency Means for the Proliferation of Dangerous Capabilities

Falling development costs allow more and more groups to reproduce existing AI capabilities. But falling costs also benefit large compute investors, helping them maintain their leads by pushing...

Research Posts

What Should the Global Summit on AI Safety Try to Accomplish?

The summit could produce a range of valuable outcomes. It may also be a critical and fleeting opportunity to bring China into global AI governance.

Commentary

Goals for the Second AI Safety Summit

The second AI Safety Summit is an opportunity to reinforce the world’s commitment to an ambitious summit series.

Research Posts

Computing Power and the Governance of AI

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals...

Research Posts

New Survey: Broad Expert Consensus for Many AGI Safety and Governance Practices

Our survey of 51 leading experts from AGI labs, academia, and civil society found overwhelming support for many AGI safety and governance practices.

Research Posts

Compute Funds and Pre-trained Models

The US National AI Research Resource should provide structured access to models, not just data and compute.

©2024 Centre for the Governance of AI, Inc.
Centre for the Governance of AI, Inc. (GovAI US) is a section 501(c)(3) organization in the USA (EIN 99-4000294). Its subsidiary, Centre for the Governance of AI (GovAI UK) is a company limited by guarantee registered in England and Wales (with registered company number 15883729).
Privacy | Cookie Policy | contact@governance.ai