Open Problems in Cooperative AI

Allan Dafoe has recently co-authored a major paper developing the framework of “cooperative AI” and calling for further research into a set of open problems involving cooperation with and between AI systems. The paper resulted in the creation of a $15 million foundation to support work on these problems.

Read the paper "Open Problems in Cooperative AI" here. You can also read about the work of the Cooperative AI Foundation, which was inspired by the paper, on their website. The following extended abstract summarizes the paper's key message:

Problems of cooperation—in which agents have opportunities to improve their joint welfare but are not easily able to do so—are ubiquitous and important. They can be found at all scales, ranging from our daily routines—such as driving on highways, scheduling meetings, and working collaboratively—to our global challenges—such as peace, commerce, and pandemic preparedness. Human civilization and the success of the human species depend on our ability to cooperate.
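To make this class of problems concrete: the Prisoner's Dilemma is the standard game-theoretic illustration of agents who could improve their joint welfare but fail to. The sketch below is our own illustration with conventional payoff values, not an example taken from the paper.

```python
# Prisoner's Dilemma: defection is each player's dominant strategy,
# yet mutual cooperation yields strictly higher joint welfare.
# Payoff values are conventional/illustrative, not from the paper.
PAYOFFS = {                  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action: str) -> str:
    """The row player's payoff-maximizing reply to a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is the best response to either opponent action ...
assert best_response("C") == "D" and best_response("D") == "D"

# ... so self-interested play settles on (D, D) with joint welfare 2,
# even though (C, C) would yield joint welfare 6.
joint_welfare = lambda outcome: sum(PAYOFFS[outcome])
print(joint_welfare(("D", "D")), "<", joint_welfare(("C", "C")))  # 2 < 6
```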

Advances in artificial intelligence create increasing opportunities for AI research to promote human cooperation. AI research enables new tools for facilitating cooperation, such as language translation, human-computer interfaces, social and political platforms, reputation systems, algorithms for group decision-making, and other deployed social mechanisms; it will be valuable to give explicit attention to what tools are needed, and what pitfalls should be avoided, to best promote cooperation. AI agents will play an increasingly important role in our lives, such as in self-driving vehicles, customer assistants, and personal assistants; it is important to equip AI agents with the requisite competencies to cooperate with others (humans and machines). Beyond the creation of machine tools and agents, the rapid growth of AI research presents other opportunities for advancing cooperation, such as from research insights into social choice theory or the modeling of social systems.

The field of artificial intelligence has an opportunity to increase its attention to this class of problems, which we refer to collectively as problems in Cooperative AI. The goal would be to study problems of cooperation through the lens of artificial intelligence and to innovate in artificial intelligence to help solve these problems. Whereas much AI research to date has focused on improving the individual intelligence of agents and algorithms, the time is right to also focus on improving social intelligence: the ability of groups to effectively cooperate to solve the problems they face.

AI research relevant to cooperation has been taking place in many different areas, including in multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural language processing, and the construction of social tools and platforms. Our recommendation is not merely to construct an umbrella term for these areas, but rather to encourage research conversations on cooperation that span them. We see an opportunity to construct more unified theory and vocabulary related to problems of cooperation. Having done so, we think AI research will be in a better position to learn from and contribute to the broader research program on cooperation spanning the natural sciences, social sciences, and behavioural sciences.

Our overview comes from the perspective of authors who are especially impressed by and immersed in the achievements of deep learning [1] and reinforcement learning [2]. From that perspective, it will be important to develop training environments, tasks, and domains that can provide suitable feedback for learning and in which cooperative capabilities are crucial to success, non-trivial, learnable, and measurable. Much research in multi-agent systems and human-machine interaction will focus on cooperation problems in contexts of pure common interest. This will need to be complemented by research in mixed-motive contexts, where problems of trust, deception, and commitment arise. Machine agents will often act on behalf of particular humans and will impact other humans; as a consequence, this research will need to consider how machines can adequately understand human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Researchers building social tools and platforms will have other perspectives on how best to make progress on problems of cooperation, including being especially informed by real-world complexities. Areas such as trusted hardware design and cryptography may be relevant for addressing commitment problems. Other aspects of the problem will benefit from expertise in other sciences, such as political science, law, economics, sociology, psychology, and neuroscience. We anticipate much value in explicitly connecting AI research to the broader scientific enterprise studying the problem of cooperation and to the broader effort to solve societal cooperation problems.
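As a toy illustration of why such training environments matter: two independent reinforcement learners in a one-shot mixed-motive game typically converge on mutual defection, which is exactly the failure mode a well-designed environment must make visible and surmountable. The interface below is a minimal sketch of our own, not a proposal from the paper.

```python
import random

# Two independent epsilon-greedy learners repeatedly play a one-shot
# Prisoner's Dilemma. Constants and interface are illustrative only.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class BanditLearner:
    """Deliberately simple baseline: epsilon-greedy over two actions."""
    def __init__(self, eps: float = 0.1, lr: float = 0.1):
        self.q = {"C": 0.0, "D": 0.0}
        self.eps, self.lr = eps, lr

    def act(self) -> str:
        if random.random() < self.eps:
            return random.choice("CD")
        return max(self.q, key=self.q.get)

    def update(self, action: str, reward: float) -> None:
        self.q[action] += self.lr * (reward - self.q[action])

row, col = BanditLearner(), BanditLearner()
for _ in range(5000):
    actions = (row.act(), col.act())
    r_row, r_col = PAYOFFS[actions]
    row.update(actions[0], r_row)
    col.update(actions[1], r_col)

# Both learners typically end up valuing defection more highly; richer
# environments (repeated play, communication channels, commitment
# devices) are needed before cooperation is learnable and measurable.
print(row.q, col.q)
```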

We recommend that “Cooperative AI” be given a technically precise, problem-defined scope; otherwise, there is a risk that it acquires an amorphous cloud of meaning, incorporating adjacent (clusters of) concepts such as aligned AI, trustworthy AI, and beneficial AI. Cooperative AI, as scoped here, refers to AI research trying to help individuals (humans and machines) find ways to improve their joint welfare. For any given situation and set of agents, this problem is relatively well defined and unambiguous. The Scope section elaborates on the relationship to adjacent areas. Conversations on Cooperative AI can be organized in part in terms of the dimensions of cooperative opportunities. These include the strategic context, the extent of common versus conflicting interest, the kinds of entities who are cooperating, and whether the researchers are focusing on the cooperative competence of individuals or taking the perspective of a social planner. Conversations can also be focused on key capabilities necessary for cooperation, including:

  • Understanding of other agents, their beliefs, incentives, and capabilities.
  • Communication between agents, including building a shared language and overcoming mistrust and deception.
  • Constructing cooperative commitments, so as to overcome incentives to renege on a cooperative arrangement (a minimal sketch of one such commitment follows this list).
  • Institutions, which can provide social structure to promote cooperation, be they decentralized and informal, such as norms, or centralized and formal, such as legal systems.
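To illustrate the commitment point above: in the one-shot Prisoner's Dilemma, a credible conditional commitment by one player ("I cooperate if and only if you do") removes the other player's incentive to defect. This is our own minimal sketch of the general idea, not a mechanism proposed in the paper.

```python
# How a credible conditional commitment changes incentives in the
# one-shot Prisoner's Dilemma (illustrative payoffs).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def column_best_replies_without_commitment() -> dict:
    # Against each fixed row action, the column player prefers to defect.
    return {r: max("CD", key=lambda c: PAYOFFS[(r, c)][1]) for r in "CD"}

def column_best_reply_to_matching_commitment() -> str:
    # If the row player credibly commits to copy the column player's
    # action, the column player compares (C, C) -> 3 with (D, D) -> 1,
    # so cooperating becomes the rational choice.
    return max("CD", key=lambda c: PAYOFFS[(c, c)][1])

print(column_best_replies_without_commitment())   # {'C': 'D', 'D': 'D'}
print(column_best_reply_to_matching_commitment()) # 'C'
```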

As with any area of research, it is prudent to investigate the potential downsides of research on Cooperative AI. Cooperative competence can be used to exclude others, some cooperative capabilities are closely related to coercive capabilities, and learning cooperative competence can be hard to disentangle from learning coercion and competition. An important aspect of this research, then, will be investigating these potential downsides and studying how best to anticipate and mitigate them.

The paper is structured as follows. We offer more motivation in the section "Why Cooperative AI?" We then discuss several important dimensions of "Cooperative Opportunities." The bulk of our discussion is contained in the "Cooperative Capabilities" section, which we organize in terms of Understanding, Communication, Commitment, and Institutions. We then reflect on "The Potential Downsides of Cooperative AI" and how to mitigate them. Finally, we conclude.

To read the full "Open Problems in Cooperative AI" report PDF, click here.

Footnotes

[1] Terrence J. Sejnowski. The unreasonable effectiveness of deep learning in artificial intelligence. Proceedings of the National Academy of Sciences, 117(48):30033–30038, 2020. doi:10.1073/pnas.1907373117.

[2] Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT Press, Cambridge, MA, second edition, 2018.

Further reading