
Luca Righetti

Senior Research Fellow

luca.righetti@governance.ai
https://www.lucarighetti.com/

Luca is a Senior Research Fellow at GovAI, where he leads a team investigating national security risks from advanced AI systems. He previously worked at Open Philanthropy, the UK Office for AI, and the University of Oxford’s Future of Humanity Institute.


Featured Publications

AI and Biosecurity

Five lessons from having helped run an AI-Biology RCT

In early 2025, AI systems began outperforming human experts on biology benchmarks – OpenAI’s o3 outperformed...

AI and Biosecurity

Measuring Mid-2025 LLM-Assistance on Novice Performance in Biology

Large language models (LLMs) perform strongly on biological benchmarks, raising concerns that they may help novice actors acquire...

Threat Modeling

Dual-Use AI Capabilities and the Risk of Bioterrorism

Several frontier AI companies test their AI systems for dual-use biological capabilities that might be misused by...

Risk Management

STREAM (ChemBio): A Standard for Transparently Reporting Evaluations in AI Model Reports

Evaluations of dangerous AI capabilities are important for managing catastrophic risks. Public transparency into these evaluations – including...

AI and Biosecurity

Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards

Capabilities of large language models (LLMs) on several biological benchmarks have prompted excitement about their usefulness for...

©2024 Centre for the Governance of AI, Inc.
Centre for the Governance of AI, Inc. (GovAI US) is a section 501(c)(3) organization in the USA (EIN 99-4000294). Its subsidiary, Centre for the Governance of AI (GovAI UK) is a company limited by guarantee registered in England and Wales (with registered company number 15883729).
Privacy · Cookie Policy · contact@governance.ai