What Role Should Governments Play in Providing AI Agent Infrastructure?
Private actors are currently well-positioned to provide most of the systems and protocols needed to make AI agents useful and safe. However, governments may still have important roles to play.
Alan Chan, Centre for the Governance of AI, alan.chan@governance.ai
GovAI research blog posts represent the views of their authors rather than the views of the organisation.
Introduction
AI agents are becoming increasingly capable but will need new protocols and systems in order to work effectively and safely. For instance, users need ways to securely delegate payment permissions to agents. To address impersonation and spam, online platforms could block agents that do not follow proper identification protocols. Such protocols and systems are known as agent infrastructure.
Who should build such infrastructure? One might expect governments to play a central role, as they do for physical infrastructure like roads and did for early internet protocols. In this piece, however, I argue that private actors have strong incentives to build and deploy most agent infrastructure, as it will directly increase the value of their products and services. Indeed, they are already building agent infrastructure (e.g. Google's agent interaction protocol).
Nonetheless, government involvement could be warranted in some limited cases. For example, some infrastructure may require access to government systems, such as for verifying government IDs. It is also unclear whether private actors have sufficient incentive to ensure that infrastructure is secure, treats all agents fairly, and is interoperable.
In the rest of this post, I:
- Examine what types of agent infrastructure private actors are likely to build and deploy
- Assess whether this infrastructure will be secure, treat all agents fairly, and be interoperable
- Conclude with recommendations for government action
Assessing the Provision of Agent Infrastructure
When assessing the provision of agent infrastructure, I consider two aspects:
- Will it be created and widely diffused, even without government support?
- Will it by default be secure, treat all agents fairly, and be interoperable?
For example, consider a protocol that allows AI agents to exchange information with each other. The first aspect concerns whether such a protocol will be developed and widely adopted across the industry. The second aspect concerns whether messages sent using this protocol are resistant to interception or modification.
In the following sections, I analyse both aspects of infrastructure provision. First, I examine what types of agent infrastructure private actors are likely to build and adopt. Then, I assess whether privately built infrastructure will by default be secure, treat all agents fairly, and be interoperable.
What Agent Infrastructure Are Private Actors Likely to Build and Adopt?
I first identify what infrastructure private actors are likely to provide on their own, and then consider if there are gaps for governments to fill. I consider three types of agent infrastructure:
- Infrastructure to help users control agents
- Infrastructure to help agents interact with digital actors
- Infrastructure to manage systemic risks from agents
To summarise the conclusions: I expect that companies are especially incentivised to develop infrastructure to (1) help users control agents and (2) help agents interact with digital actors. The reason is that potential users are much more likely to use agents if they can reliably control them and if the agents can engage in useful interactions; companies will therefore want to build this infrastructure to serve user demand.
Users are less likely, on the other hand, to make purchasing decisions on the basis of how agents contribute to systemic risks. Therefore, companies will have weaker incentives to provide this infrastructure.
Of course, companies’ incentives do not entirely determine their behaviour. Nonetheless, where the incentives to build are stronger, I expect government involvement to be less necessary.
Helping users control agents
Users will prefer to use more-controllable agents. Private actors, particularly agent developers, are therefore incentivised to build infrastructure that helps users control their agents. Such infrastructure includes:
- Mechanisms that allow users to grant specific, limited, and revocable permissions to agents, such as read-only access to files or the ability to spend money up to a certain limit (see the sketch after this list)
- Sandboxes that preview an agent’s actions before executing them in the world, such as to prevent accidental file deletions
- Undo functions for reversing or mitigating the effects of an agent's actions
- Monitoring systems that flag potential misinterpretations of instructions or unauthorised actions
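To make the first item concrete, here is a minimal sketch of what a specific, limited, and revocable permission grant might look like. The class, field names, and scope strings are invented for illustration, not any developer's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentPermission:
    """One scoped, limited, and revocable grant from a user to an agent."""
    scope: str                    # e.g. "files:read" or "payments:spend"
    spend_limit_usd: float = 0.0  # 0 means the agent may not spend at all
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=1)
    )
    revoked: bool = False

    def allows(self, scope: str, amount_usd: float = 0.0) -> bool:
        """Check a proposed agent action against this grant."""
        return (
            not self.revoked
            and datetime.now(timezone.utc) < self.expires_at
            and scope == self.scope
            and amount_usd <= self.spend_limit_usd
        )

# A user grants an agent a small, short-lived spending budget...
grant = AgentPermission(scope="payments:spend", spend_limit_usd=50.0)
assert grant.allows("payments:spend", amount_usd=20.0)
assert not grant.allows("files:read")  # out of scope

# ...and can revoke it at any time.
grant.revoked = True
assert not grant.allows("payments:spend", amount_usd=20.0)
```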
However, business incentives may not always align with user interests. For example, default spending limits could be too high, and undo functions could be hidden for certain types of actions. Some government action may be necessary to address this incentive mismatch. Analogously, regulation made it easier for credit card users to dispute fraudulent charges.
Helping agents interact with digital actors
Websites and digital platforms
Websites and digital platforms will often function better if they can constrain or guide how agents interact with them, for example if they can avoid being flooded with AI-generated content. At the same time, agents will provide more value to users if they can interface effectively with websites and digital platforms. For these reasons, there will be incentives to develop infrastructure that supports desirable agent-to-platform interactions and prevents undesirable ones.
For example, websites and digital platforms will likely need infrastructure to distinguish between legitimate and malicious agent activity. These actors already try to block bots from scraping content, carrying out denial of service attacks, and creating spam. However, sophisticated agents could overcome existing anti-bot defences. For example, their image-processing abilities could render CAPTCHAs (even more) ineffective. As such, companies that provide anti-bot services have strong incentives to adapt them to agents. Such infrastructure could, for instance, mislead the agent with irrelevant content or otherwise obfuscate graphical interfaces in human-imperceptible ways.
Since anti-bot infrastructure could frustrate consumer use of agents, websites and digital platforms will also want ways to “allow list” certain agents and productive interactions (e.g. making purchases). For example:
- Although AI-generated content can be useful, users may also value authenticity on social media platforms like Reddit or X. To enable users to distinguish between authentic and AI-generated content, such platforms might require agents to identify themselves.
- To maintain user trust in the integrity of product or service reviews, sites like Yelp or Trustpilot might demand proof of a real-world ID (or even just a social media account) for all reviews, not just those detectable as AI-generated.1
Anti-bot infrastructure for blocking malicious agents could also be used to enforce such requirements. Yet the long-term effectiveness of this infrastructure remains uncertain: progress on adversarial robustness, for example, could make it harder to mislead agents.
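As a rough illustration of how a platform might combine an allow list with its anti-bot defences, the sketch below has allow-listed agent developers authenticate each request with a shared secret. Everything here (the registry, the developer IDs, the request format) is hypothetical; real deployments would more likely build on web standards for signed HTTP messages:

```python
import hashlib
import hmac
import secrets

# Hypothetical registry: when an agent developer is allow-listed, the platform
# issues a shared secret for authenticating that developer's agents.
REGISTRY: dict[str, bytes] = {}

def allow_list(developer_id: str) -> bytes:
    secret = secrets.token_bytes(32)
    REGISTRY[developer_id] = secret
    return secret

def tag_request(secret: bytes, body: bytes) -> str:
    """Agent side: attach an authentication tag to each request."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def is_legitimate(developer_id: str, body: bytes, tag: str) -> bool:
    """Platform side: admit the request only if the tag matches an
    allow-listed developer's secret; otherwise apply anti-bot defences."""
    secret = REGISTRY.get(developer_id)
    if secret is None:
        return False
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Example: a shopping agent's developer is allow-listed, and the agent then
# makes an authenticated purchase request.
secret = allow_list("shopping-agent-co")
body = b'{"action": "purchase", "item": "widget-42"}'
assert is_legitimate("shopping-agent-co", body, tag_request(secret, body))
assert not is_legitimate("unknown-bot", body, tag_request(secret, body))
```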
Agents as digital actors
A large part of making agents more useful will be helping them interact and coordinate with other agents on the internet. As with agent-to-platform interactions, agent developers will likely have an interest in creating infrastructure that achieves these goals. Such infrastructure could include:
- More efficient communication protocols between agents
- Reputation systems that distinguish between trustworthy and untrustworthy agents
- Protocols for making credible commitments, such as through smart contracts or by staking assets on the completion of a task (sketched below)
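A minimal sketch of the staking idea, with an escrow-style object standing in for what would in practice be a smart contract or trusted intermediary; the names and settlement rules are invented for illustration:

```python
from dataclasses import dataclass
import time

@dataclass
class StakedCommitment:
    """A promise backed by a stake held in escrow (illustrative only)."""
    promisor: str       # the agent making the commitment
    beneficiary: str    # the agent relying on it
    stake: float        # amount forfeited if the task is not completed
    deadline: float     # unix timestamp by which the task must be done
    completed: bool = False

    def mark_completed(self) -> None:
        if time.time() <= self.deadline:
            self.completed = True

    def settle(self) -> tuple[str, float]:
        """The stake returns to the promisor on success and goes to the
        beneficiary on failure, making the commitment costly to break."""
        if self.completed:
            return (self.promisor, self.stake)
        if time.time() > self.deadline:
            return (self.beneficiary, self.stake)
        raise ValueError("cannot settle before completion or deadline")

# A delivery agent stakes 10 units on finishing a task within an hour.
commitment = StakedCommitment(
    promisor="delivery-agent", beneficiary="buyer-agent",
    stake=10.0, deadline=time.time() + 3600,
)
commitment.mark_completed()
assert commitment.settle() == ("delivery-agent", 10.0)
```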
Since agents that interact with each other must use compatible protocols, some degree of standardisation is needed. Internet standards organisations such as the Internet Engineering Task Force and the World Wide Web Consortium could be well-positioned to fulfill this need, especially since much infrastructure will likely depend upon or extend existing internet standards. For example:
- Mechanisms to authorise agents to perform specific activities could extend existing protocols for delegating access to web services (see the sketch after this list).
- Protocols for identifying agents could extend existing internet communication protocols.
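For instance, the authorisation mechanism in the first item might reuse the shape of an OAuth 2.0 token request, with agent-specific scopes narrowing what the agent may do. The endpoint, client ID, and scope strings below are hypothetical; the point is only that much of the plumbing already exists:

```python
import requests

# Hypothetical: an agent exchanges its registered credentials for a narrowly
# scoped access token, reusing the familiar OAuth 2.0 request shape.
token_response = requests.post(
    "https://auth.example.com/oauth/token",  # hypothetical endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "travel-agent-7",  # the agent's registered identity
        "scope": "bookings:create spend:under-200-usd",  # hypothetical scopes
    },
    timeout=10,
)
access_token = token_response.json()["access_token"]

# The agent presents the token with each action; the service enforces scopes.
requests.post(
    "https://api.example.com/bookings",  # hypothetical service
    headers={"Authorization": f"Bearer {access_token}"},
    json={"flight": "XY123", "price_usd": 180},
    timeout=10,
)
```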
Interactions with government systems
Some limited government involvement may be needed when infrastructure interacts with government systems. For example, a business may want its agent to present an official ID so as to establish trust with suppliers. Private contractors can construct the necessary protocols, but governments still have to provide access to systems for verifying the authenticity of such IDs (e.g. a public key).
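A minimal sketch of this division of labour, assuming the government signs digital IDs and publishes the corresponding public key (the credential format is invented; real systems would use established digital-identity standards). It uses the Python `cryptography` package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_government_id(
    credential: bytes,                 # e.g. a serialised business ID record
    signature: bytes,                  # produced by the issuing government
    gov_public_key: Ed25519PublicKey,  # the part only government can provide
) -> bool:
    """Privately built infrastructure can run this check, but only the
    government can publish the key that makes the check meaningful."""
    try:
        gov_public_key.verify(signature, credential)
        return True
    except InvalidSignature:
        return False
```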
Managing systemic risks
Widespread use of agents could pose systemic risks. For example, several businesses might all use an agent from the same AI company to manage inventory. Using the same prompt injection placed on a supplier’s website, an attacker could sabotage or hijack agents across all of these companies.
Infrastructure that could help to manage systemic risks includes:
- Protocols for federated monitoring of agents, which aggregate anonymised data on agent behaviour across companies to detect early signs of large-scale failures, similar to what the Financial Stability Oversight Council (FSOC) does for financial risks.
- “Circuit breaker” protocols that can temporarily restrict agent activities during emergencies. For example, a stock exchange could temporarily stop agents from trading in financial markets, as they already do in general to prevent extreme price movements.
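As a sketch of the second item, a circuit breaker can be as simple as a rate threshold with a cool-down, tripping when agent-driven activity spikes; the class name and thresholds below are illustrative:

```python
import time

class AgentCircuitBreaker:
    """Halt agent activity for a cool-down period whenever it spikes
    beyond a per-minute threshold (illustrative only)."""

    def __init__(self, max_actions_per_minute: int, cooldown_seconds: float):
        self.max_actions = max_actions_per_minute
        self.cooldown = cooldown_seconds
        self.window_start = time.time()
        self.count = 0
        self.tripped_until = 0.0

    def allow(self) -> bool:
        now = time.time()
        if now < self.tripped_until:
            return False  # breaker is tripped: refuse agent actions
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0  # start a new window
        self.count += 1
        if self.count > self.max_actions:
            self.tripped_until = now + self.cooldown  # trip the breaker
            return False
        return True

# Example: an exchange caps agent orders at 1,000 per minute and halts
# agent trading for five minutes when the cap is breached.
breaker = AgentCircuitBreaker(max_actions_per_minute=1000, cooldown_seconds=300)
if breaker.allow():
    pass  # forward the agent's order to the matching engine
```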
Private actors could provide some of this infrastructure, especially if systemic failures would damage the reputation of the AI industry as a whole. For example, the Frontier Model Forum has already facilitated an information-sharing agreement between frontier AI companies.
However, the case for government involvement is stronger here. In the financial industry, for example, the FSOC was only created after the 2008 financial crisis. The SEC standardised circuit breaker protocols after the 1987 stock market crash. Competition concerns could motivate less information-sharing and industry-wide cooperation than is needed to avoid such crises. Finally, even when monitoring infrastructure exists, the data may not be accessible to academic or independent researchers.
How Will Privately Built Infrastructure Function?
Even if private actors will build most agent infrastructure, we still need to assess whether this infrastructure will function well. Drawing on lessons from internet infrastructure, I focus on three properties:
- Security: The degree to which the infrastructure functions securely, such as by preventing attackers from undermining essential functions or leaking information.
- Neutrality: The extent to which the infrastructure works well for all agents. Neutrality is not the same as infrastructure being widely usable: a communication protocol could work for all agents, for instance, yet still be faster for certain ones.
- Interoperability: The extent to which the infrastructure can interact with other infrastructure that fulfills a similar function. For example, agent reputation systems run by different platforms could allow profiles to be transferred between them.
Security
Agent infrastructure could have security vulnerabilities that cause harmful consequences or discourage beneficial uses of agents. For example, monitoring and oversight mechanisms could enable surveillance or leak data about an agent’s activities to attackers. ID linking infrastructure could also allow attackers to impersonate certain individuals.
In theory, private actors could underinvest in security because it may not generate immediate economic returns. Indeed, users may not be security-conscious, and security may even conflict with other factors in a purchasing decision, like ease of use or efficiency. The often-high cost of security exacerbates these factors.
However, private actors are also responsible for many security improvements. For example, private actors championed encryption of website communications through HTTPS. One potential motivation for improving security is that user demand for it increases when high-profile security failures occur. Another potential reason is decreasing costs for certain kinds of security. For example, encryption used to incur a relatively large computational overhead that has decreased due to better processors.
Similarly, there are reasons to believe that at least some private infrastructure developers will take security seriously:
- Major technology companies developing AI agents, like Google and Microsoft, stake much of their reputation on security.
- Digital platforms that will use agent infrastructure, like Facebook, Uber, and Amazon, emphasise the importance of security.
- Influential non-profits advocate for improved digital security, such as the Electronic Frontier Foundation and Mozilla.
- Many organisations specialise in security services (such as Symantec or RSA) or security-first products (such as Cloudflare, Signal, or Proton Mail).
- Internet standards organisations like the IETF and W3C emphasise the importance of security, including in newer internet standards.
Although security will not be perfect, private actors will likely include security features and improve them in response to incidents. However, the possibility of severe security failures suggests that governments could at least monitor security levels, particularly for agent infrastructure used by government agencies.
Neutrality
Private actors could build agent infrastructure to favour certain agents over others. For example:
- Dedicated interfaces for digital services could have higher rate limits for preferred agents.
- Anti-bot infrastructure could secretly exempt some agents.
- Communication protocols could prioritise messages sent by specific agents.
Although not necessarily undesirable, preferential treatment could also distort competition among agent developers, particularly if the infrastructure provider also develops its own agents or receives benefits from certain agent companies. Because it is not yet clear whether such favouritism will arise, monitoring for it may be warranted.
Interoperability
Different private actors might build competing infrastructure that serves similar functions, but which might not be interoperable. For example:
- Identification or certification systems might only work for agents developed by a particular digital platform, analogous to the inability to transfer a business profile from Google Maps to Yelp.
- Communication platforms for agents might block the transfer of communications history to other ecosystems.
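By contrast, interoperable versions of these systems might need little more than a shared, documented export format; a sketch with an invented schema:

```python
import json

# A hypothetical shared schema for portable agent reputation profiles.
profile = {
    "schema": "agent-reputation/v0",   # invented version tag
    "agent_id": "translator-agent-12",
    "completed_tasks": 482,
    "dispute_rate": 0.011,
    "attesters": ["platform-a.example", "platform-b.example"],
}

# Platform A exports the profile...
exported = json.dumps(profile)

# ...and Platform B, reading the same schema, can import and act on it.
imported = json.loads(exported)
assert imported["schema"] == "agent-reputation/v0"
is_trusted = imported["dispute_rate"] < 0.05 and imported["completed_tasks"] > 100
```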
As with neutrality, there is no evidence yet of such problems. However, private actors might have incentives to reduce interoperability in order to lock users into their platforms or services. Given these incentives, continued monitoring of the situation is warranted.
Is Government Action Warranted?
The analysis identifies several potential gaps that governments might fill in the provision of agent infrastructure:
- Private infrastructure may not sufficiently prioritise user interests when they diverge from company financial interests. These divergences may be most likely to arise with regard to the security and neutrality of privately built infrastructure.
- It is possible that interoperability issues will emerge between different agent infrastructure protocols and systems.
- For users to be able to authorise their agents to present a government ID, governments must provide access to systems for verifying the authenticity of such IDs (e.g. a public key).
- Infrastructure to manage systemic risks may also be insufficient.
Of these gaps, only the third – providing access to ID verification systems – clearly requires government involvement. This action is relatively low-cost where digital government ID systems already exist and could facilitate many useful and trustworthy agent interactions.
For the first, second, and fourth gaps, government monitoring is warranted, but it is too early for more significant action. This monitoring could include:
- Consumer protection agencies testing new releases of agents and agent infrastructure for user control, neutrality, and interoperability
- Cybersecurity agencies helping organisations identify potential security vulnerabilities in their use of agents
- Statistics or economic agencies gathering information about agent adoption across different sectors
- Critical infrastructure agencies gathering information about agent use in critical infrastructure
If evidence emerges that privately built infrastructure is failing in these areas, governments could:
- Convene companies to facilitate the development of standards
- Create regulation to support consumer welfare, security, neutrality, or interoperability
- Establish trusted institutions to provide the necessary infrastructure
Limitations
The analysis has several limitations to consider.
First, I have assumed that private actors respond to incentives in an approximately rational way. However, they may not necessarily do so in practice. For example, organisational inertia at large digital platforms could hinder their response to market dynamics, delaying infrastructure development.
Second, even if private actors fail to provide certain infrastructure, extensive government intervention may nevertheless not be effective or beneficial. One reason is that governments may simply lack the technical capacity to build infrastructure. Alternatively, users may not adopt government-built infrastructure due to concerns about surveillance or censorship. Furthermore, lobbying efforts could reduce the effectiveness or accessibility of government-built infrastructure.
Third, I have not addressed digital access disparities. Areas with minimal digital access could struggle to deploy agents, let alone agent infrastructure.
Conclusion
As with internet infrastructure, private actors are likely to provide much agent infrastructure. Governments can best support such efforts by:
- Providing access to systems for verifying the authenticity of government IDs, for protocols that allow agents to present such IDs
- Monitoring whether infrastructure sufficiently caters to user interests and is secure, neutral, and interoperable
If governments discover that privately built infrastructure is insufficient, potential responses include convening companies to develop standards, regulating industry, or establishing institutions to provide the necessary infrastructure. Light-touch actions such as convening could still be worthwhile even if privately built infrastructure is likely to be sufficient. However, significant intervention does not currently seem to be warranted.
Future work could investigate:
- Whether privately provided infrastructure is likely to have other adverse side effects, drawing upon lessons from other domains like the internet
- How government intervention could worsen outcomes even when market failures are present
- What evidence would warrant more extensive government intervention
- How governments could respond to evidence of infrastructure that is anti-consumer or fails to be secure, neutral, or interoperable
Acknowledgments
Thanks go to the following people for helpful conversations and feedback that positively shaped this work: Stephen Clare, Ben Garfinkel, Jam Kraprayoon, Markus Anderljung, Sam Manning, Hamish Hobbs, Sophie Williams, Vinay Hiremath, John Halstead, and Matthew van der Merwe.
Footnotes
1 - Companies could hire third parties to engage in astroturfing. However, those third parties would still need to associate the reviews with some ID. Review platforms could notice an anomalously high number of reviews coming from the same set of IDs.