Thinking About Risks From AI: Accidents, Misuse and Structure

Any technology as potent as AI will also bring new risks, and it is encouraging that many of today’s AI policy initiatives include risk mitigation as part of their mandate. Before risks can be mitigated, though, they must first be understood—and we are only just beginning to understand the contours of risks from AI.

So far, analysts have done a good job outlining how AI might cause harm through either intentional misuse or accidental system failures. But other kinds of risk, like AI’s potentially destabilizing effects in important strategic domains such as nuclear deterrence or cyber, do not fit neatly into this misuse-accident dichotomy. Analysts should therefore complement their focus on misuse and accidents with what we call a structural perspective on risk, one that focuses explicitly on how AI technologies will both shape and be shaped by the (often competitive) environments in which they are developed and deployed. Without such a perspective, today’s AI policy initiatives are in danger of focusing on both too narrow a range of problems and too limited a set of solutions.
