AI and Bomb Plots: Distinguishing Potential Effects from Language Models

This work represents the views of its authors, rather than the views of the organisation, and does not constitute legal advice. GovAI technical reports have received extensive feedback but have not gone through formal peer review.


Future AI systems could impact the risk of terrorist bomb plots. This technical report analyses two channels through which this risk could increase: (1) future AI could increase the technical skill of bomb plotters by providing higher-quality assistance than they could otherwise access, and (2) future AI could make plotters harder to detect. Detection could decline if future AI systems provide higher-quality operational advice on how to avoid getting caught, or if they reduce the need for terrorists to contact others for support. A preliminary review of historical case studies of terrorism finds precedent for both channels. We show that a large fraction of bomb plots are foiled by law enforcement, making the detection mechanism important and potentially understudied in current AI safety evaluations. We also show parallels between the mentorship some plotters currently receive from other terrorists via the internet and several aspects of emerging AI capabilities, such as multimodality, troubleshooting, and tailored guidance. We conclude that assessments of AI risk should account not just for potentially dual-use technical knowledge, but also for effects on the ability of threat actors to evade current law enforcement detection strategies.
