Proposing an International Governance Regime for Civilian AI

A new report proposes creating an international body that can certify compliance with international standards on civilian AI.


Robert Trager, Sarah Kreps, and Ben Harack

This post summarises a new report, “International Governance of Civilian AI: A Jurisdictional Certification Approach.” You can read the full report here.

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Many jurisdictions have begun to develop their own approaches to regulating AI, in response to concerns that range from bias to misinformation to the possibility of existential risks from advanced AI. 

However, without international coordination, these purely national and regional approaches will not be enough. Many of the risks posed by AI are inherently international, since they can produce harms that spill across borders. AI supply chains and product networks also span many countries. We therefore believe that a unified international response to the risks posed by AI will be necessary.

In recognition of this need, we have developed a proposal – outlined in a new report – for an international governance body that can certify compliance with international standards on civilian AI.1 We call this proposed body the International AI Organization (IAIO).

Our proposal for the IAIO follows a jurisdictional certification approach, modelled on the approach taken by other international bodies such as the International Civil Aviation Organization (ICAO), the International Maritime Organization (IMO), and the Financial Action Task Force (FATF). Under this approach, the international body certifies that the regulatory regimes adopted within individual jurisdictions meet international standards. Jurisdictions that fail to receive certification (e.g. because their regulations are too lax or they fail to enforce them) are excluded from valuable trade relationships – or otherwise suffer negative consequences.

Our report outlines the proposal in more detail and explains how it could be put into practice. We suggest that even a very partial international consensus on minimum regulatory standards – perhaps beginning with just a few major players – could be channelled into an institutional framework designed to produce increasingly widespread compliance. An initial set of participating states could establish the IAIO and – through the IAIO – arrive at a set of shared standards and a process for certifying that a state’s regulatory regime meets these standards. One of the standards would be a commitment to ban the import of goods that integrate AI systems from uncertified jurisdictions. Another standard could be a commitment to ban the export of AI inputs (such as specialised chips) to uncertified jurisdictions. The participating states’ trade policies would thereby incentivise other states to join the IAIO themselves and receive certification.

We believe that the IAIO could help to mitigate many of AI’s potential harms, from algorithmic bias to longer-term security threats. Our hope is that it can balance the need to prevent harmful forms of proliferation against the imperative to spread the benefits of the technology and to give voice to affected communities around the globe.

Footnotes

1 - We exclude AI systems built by states for military or intelligence purposes, since we expect them to require a distinct regulatory approach.
