The Artefacts of Intelligence: Governing Scientists' Contribution to AI Proliferation

This DPhil dissertation examines attempts to govern how artificial intelligence (AI) researchers share their work. There is growing concern that the software artefacts built by AI researchers will have adverse impacts on society if made freely available online. Yet AI research is a scientific field, and openly sharing these artefacts is routine and expected as part of its functioning. Recently, members of the AI research community have on several occasions trialled new ways of sharing their work, in response to concerns that it poses risks to society. Three case studies are examined: the ‘staged release’ of the GPT-2 language model, in which progressively more capable models were released over time; the platform through which researchers and developers could access GPT-3, the successor to GPT-2; and a wave of new ethics regimes for AI conference publications. The study draws on 42 qualitative interviews with members of the AI research community, conducted between 2019 and 2021, alongside a wide range of publicly available sources. The aim is to understand how concerns about risk can become a feature of the way AI research is shared. Major themes include: the relationship between science and society; the relationship between industry AI labs and academia; the interplay between AI risks and AI governance regimes; and how the existing scientific field provides an insecure footing for new governance regimes.
