Lennart Heim is an associate information scientist at RAND and a professor of policy analysis at the Pardee RAND Graduate School.
Advanced AI systems could generate substantial economic and other societal benefits, but these benefits may not be widely shared by default. For a range of reasons, a number of prominent actors and institutions have called for...
AI systems are increasingly pervasive, yet information needed to decide whether and how to engage with them may not exist or be accessible. A user may not be able to verify whether a system has certain safety certifications...
Increased delegation of commercial, scientific, governmental, and personal activities to AI agents—systems capable of pursuing complex goals with limited supervision—may...
Regulators in the US and EU are using thresholds based on training compute (the number of computational operations used in training) to identify general-purpose artificial intelligence (GPAI) models that may pose risks of...
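As a minimal illustration of how such thresholds operate (the accounting rule and the example model are illustrative assumptions, not taken from the abstract above): training compute is often approximated as 6 × parameters × training tokens, and the result compared against thresholds such as the EU AI Act's 10^25 FLOP presumption of systemic risk or the 10^26-operation reporting threshold in US Executive Order 14110.

```python
# Minimal sketch: estimating training compute with the common
# 6 * N * D approximation, then comparing against regulatory
# thresholds. Model size and token count are illustrative
# assumptions, not figures from the publication above.

EU_AI_ACT_THRESHOLD_FLOP = 1e25  # EU AI Act: presumption of systemic risk for GPAI
US_EO_REPORTING_FLOP = 1e26      # US EO 14110: reporting threshold

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flop = training_flop(70e9, 2e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
print(f"Exceeds EU AI Act threshold (1e25): {flop > EU_AI_ACT_THRESHOLD_FLOP}")
print(f"Exceeds US reporting threshold (1e26): {flop > US_EO_REPORTING_FLOP}")
```

Actual regulatory accounting involves more detail, but the core comparison is simple arithmetic of this form.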
Existing strategies for managing risks from advanced AI systems often focus on shaping which AI systems are developed and how they diffuse. This approach becomes less feasible as the number of developers of advanced AI grows.
Training advanced AI models requires large investments in computational resources, or compute. Yet, as hardware innovation reduces the price of compute and algorithmic advances make its use more efficient, the cost of training...
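To see why this dynamic compounds quickly, consider a rough sketch (both improvement rates below are illustrative assumptions, not estimates from the report): if hardware price-performance and algorithmic efficiency each improve on their own timescales, the cost of reproducing a fixed capability falls multiplicatively.

```python
# Illustrative sketch of falling costs to reproduce a fixed AI
# capability. Both halving rates are assumptions chosen for
# illustration, not estimates from the report above.

HARDWARE_HALVING_YEARS = 2.0  # assumed: price per FLOP halves every two years
ALGO_HALVING_YEARS = 1.0      # assumed: FLOP needed for a fixed capability halves yearly

def relative_cost(years: float) -> float:
    """Cost of reproducing a fixed capability, relative to year 0."""
    hardware_factor = 0.5 ** (years / HARDWARE_HALVING_YEARS)
    algorithmic_factor = 0.5 ** (years / ALGO_HALVING_YEARS)
    return hardware_factor * algorithmic_factor

for year in range(0, 7, 2):
    print(f"Year {year}: cost falls to {relative_cost(year):.3%} of the original")
```

Under these assumed rates, the cost of reproducing a fixed capability falls roughly eightfold every two years.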
Compute providers can play an essential role in a regulatory ecosystem via four key capacities: as securers, safeguarding AI systems and critical infrastructure; as record keepers, enhancing visibility for policymakers...
Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals, but could also introduce new risks.
To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes.
This report describes trade-offs in the design of international governance arrangements for civilian artificial intelligence (AI) and presents one approach in detail.
We welcome the opportunity to respond to the OSTP Request for Information on National Priorities for Artificial Intelligence and look forward to future opportunities to provide additional input. We offer the following...
We welcome the opportunity to respond to the NTIA’s AI Accountability Policy Request for Comment and look forward to future opportunities to provide additional input. We offer the following...
A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)—AI systems that achieve or exceed human performance...
We are pleased to see the publication of the UK’s Future of Compute Review. However, we also believe there is a significant missed opportunity: the review does not address how to ensure that compute is used responsibly or how...
We welcome the opportunity to respond to the Future of Compute Review’s call for evidence... Our response focuses on the future of compute used for Artificial Intelligence (AI). In particular, we emphasise the risks posed by...
This report provides comments on the interim report of the National AI Research Resource (NAIRR) Task Force. The key recommendations are: provide researchers with access to pre-trained models by offering infrastructure...
Falling development costs allow more and more groups to reproduce existing AI capabilities. But falling costs also benefit large compute investors, helping them maintain their leads by pushing...
The summit could produce a range of valuable outcomes. It may also be a critical and fleeting opportunity to bring China into global AI governance.
The second AI Safety Summit is an opportunity to reinforce the world’s commitment to an ambitious summit series.
Our survey of 51 leading experts from AGI labs, academia, and civil society found overwhelming support for many AGI safety and governance practices.
The US National AI Research Resource should provide structured access to models, not just data and compute.