National Priorities for Artificial Intelligence (Response to the OSTP Request for Information)

A response to the OSTP Request for Information on National Priorities for Artificial Intelligence by Jonas Schuett, Markus Anderljung, Lennart Heim, and Elizabeth Seger of the Centre for the Governance of AI.

1. Risks from frontier AI models:

  • Foundation models already cause significant harm.
  • Further integrating foundation models into society might lead to systemic risks.
  • As foundation models become more capable, more extreme risks might emerge.

2. Frontier AI regulation:

  • We need specific regulation for frontier AI models.
  • Defining the scope of frontier AI regulation is challenging.
  • Regulators need more visibility into frontier AI development.
  • Frontier AI developers should be required to:
      • Conduct thorough risk assessments informed by evaluations of dangerous capabilities and controllability.
      • Engage external experts to scrutinize frontier AI models.
      • Follow shared guidelines for how frontier AI models should be deployed based on their assessed risk.
      • Monitor and respond to new information on model capabilities.
      • Comply with cybersecurity standards.
  • In the future, the deployment and potentially even the development of frontier AI models may require a license.
  • The US Government should support the creation of standards for the development and deployment of frontier AI models.

3. Compute governance:

  • Compute is a particularly promising node for governing frontier AI models.
  • The US Government should grant the Bureau of Industry and Security (BIS) a larger budget and empower it with the tools to effectively enforce the October 7, 2022 export controls.
  • Frontier AI developers should be required to report training runs above a certain compute threshold (a rough way to estimate a run's compute is sketched after this list).
  • Compute providers should be required to have “Know Your Customer” (KYC) processes for compute purchases above a very large threshold.
  • If companies want access to more compute, they should be subject to additional review requirements (“more compute, more responsibility”).
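
To make the idea of a compute-based reporting threshold concrete, the sketch below estimates a training run's total compute using the common rule of thumb of roughly 6 FLOP per model parameter per training token and compares it to a purely illustrative threshold. The threshold value and the example model size are assumptions for illustration only; our response does not propose specific numbers, and in practice regulators would need to define both the threshold and the measurement methodology.

```python
# Illustrative sketch only: checks whether a training run would exceed a
# hypothetical compute-based reporting threshold. The ~6 FLOP per parameter
# per token rule of thumb and all numbers below are assumptions for
# illustration, not figures proposed in this response.

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical threshold, chosen only for illustration


def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough total training compute: ~6 FLOP per parameter per training token."""
    return 6 * parameters * training_tokens


def requires_report(parameters: float, training_tokens: float) -> bool:
    """Would this run exceed the illustrative reporting threshold?"""
    return estimate_training_flop(parameters, training_tokens) >= REPORTING_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flop = estimate_training_flop(70e9, 2e12)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    print("Above illustrative reporting threshold:", requires_report(70e9, 2e12))
```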

4. AI and democracy:

  • AI might threaten democracy.
  • “Democratizing AI” does not mean that frontier AI developers should open-source models.
  • “Democratizing AI” is ultimately about ensuring that the benefits of AI are distributed widely and fairly.
