Estimating Global Yearly Cybercrime Damage Costs

A Baseline for Frontier AI Risk Assessment

This work represents the views of its authors, rather than the views of the organization, and does not constitute legal advice. GovAI technical reports have received extensive feedback, but have not gone through formal peer review.


AI companies and governments have raised concerns about frontier AI systems enabling cyber-attacks and cybercrime. Some have expressed interest in defining so-called capability thresholds: if there is evidence of AI models causing severe harm via cybercrime, then concrete mitigations should be triggered. But it remains unclear where to draw the line: should the threshold be an AI-driven 10% increase in cybercrime, a doubling, or something else? To make that judgment, we first need to know the scale of cybercrime today.

A fundamental challenge is that current estimates of global cybercrime damages vary widely, from tens of billions to tens of trillions of dollars, with little systematic evaluation of their reliability. The existing literature notes that many headline estimates from cybersecurity vendors may be inflated due to commercial incentives, while others rely on questionable methodologies. This uncertainty makes it difficult for AI developers to assess at what point future capabilities might breach their damage thresholds – or for policymakers to allocate resources effectively.

This report establishes a more rigorous baseline by surveying 27 estimates and critically evaluating their methodologies. We adopt a taxonomy of cybercrime that distinguishes cyber-dependent crimes (e.g. hacking of computers) from cyber-assisted crimes (e.g. internet-enabled fraud). Our focus is on quantifiable economic damages:

  • Direct losses: money stolen or extorted
  • Response costs: incident remediation and investigation
  • Defense spending: prevention measures

We exclude harder-to-measure indirect costs such as intellectual property losses and reputational damage, and additional social harms such as national security considerations.

We construct a composite estimate from three independent sources of evidence: (1) a nationally representative business victimisation survey from the UK government, scaled globally; (2) individual victimisation data from a US academic survey, scaled globally; and (3) global cybersecurity spending figures as a proxy for defense costs. Large-sample victimisation surveys capture losses directly from victims, which avoids both the reporting bias common in vendor estimates and the heavy modeling assumptions required by macroeconomic approaches.

We find:

  • Individual direct losses: ~$200 billion annually
  • Business direct and response costs: ~$200 billion annually
  • Additional defense spending: ~$100 billion annually
  • Total: ~$500 billion annually
    • 90% confidence interval: $100 billion to $1 trillion


Implications for AI Risk Management

If this report’s estimates are correct, a 20% increase would cross commonly cited damage thresholds. With current cybercrime damages approaching $500 billion annually, an AI-driven increase of ~20% could add $100 billion or more, reaching the thresholds some companies identify as warranting additional risk mitigations. This is a relatively modest increase that could occur through efficiency gains in existing attack methods, without requiring qualitatively new AI capabilities.
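The arithmetic behind this threshold comparison is simple enough to sketch. The component values below are the report's rounded point estimates; the 20% uplift figure is illustrative, not a prediction:

```python
# Rounded point estimates from the report (USD billions per year).
individual_direct = 200   # individual direct losses
business_costs = 200      # business direct and response costs
defense_spending = 100    # additional defense spending

# Composite baseline (90% confidence interval: $100B to $1,000B).
baseline = individual_direct + business_costs + defense_spending
print(f"Baseline total: ${baseline}B")

# An illustrative AI-driven uplift of ~20% on this baseline.
uplift = 0.20 * baseline
print(f"20% uplift: ${uplift:.0f}B")
```

On these point estimates, the uplift comes to roughly $100 billion per year, which is why even a modest proportional increase can reach commonly cited damage thresholds.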

However, data on cybercrime damage is too incomplete and ambiguous for incremental AI-driven increases to be directly detectable. There is no single data source that accurately captures total damages due to pervasive under-reporting, evolving crime definitions, and measurement inconsistencies. This means that even after-the-fact estimates of models’ contributions to cybercrime damage will necessarily be uncertain and require the use of multiple forms of evidence.

Defensive adaptation also complicates net impact assessments. AI can substantially enhance defensive capabilities, not just offensive ones. The net effect of AI on the cybercrime landscape could even be positive, but further modeling and data collection are needed to confirm this.

Companies may also have regulatory obligations to consider potential cybercrime damage enabled by their AI systems. Companies’ safety frameworks typically focus on discrete events (e.g. individual cyberattacks) that cause large-scale harm, whereas cybercrime causes cumulative harm through many distributed incidents rather than a single catastrophic event. Regulatory obligations, however, may not be limited to this discrete-event framing: the EU AI Act’s Code of Practice identifies “enabling large-scale sophisticated cyber-attacks” as a systemic risk requiring assessment and mitigation, which could encompass AI-enabled scaling of many smaller attacks, not just single catastrophic events.

Areas for Future Research

  • Developing indicators beyond aggregate cost estimates, such as monitoring for AI-generated code in malware samples, tracking criminal forum discussions of AI tool adoption, and observing labor market changes in fraud operations
  • Understanding the net impact of AI defenses across potential victims, including under-resourced government agencies, critical infrastructure providers, and organizations in developing countries
  • Developing harmonized data across multiple countries to reduce reliance on GDP scaling from Western nations


Ultimately, although limitations remain, this report narrows plausible estimates of cybercrime damage from a range spanning multiple orders of magnitude to a more confident baseline. Our systematic methodology provides a foundation for more informed decision-making about AI-related cybercrime risks.
