Labeling of AI Agent Activity in Article 50 of the EU AI Act
Executive summary
● The online activities of AI agents could distort human beliefs and behaviors. For example, humans could mistake synthetic likes and shares on social media for genuine human opinion.
● Labeling such activity as AI-generated could help society understand and address these risks, for example by making users more conscious of attempts to influence them.
● Article 50 (Art. 50) of the EU AI Act requires providers and deployers of certain AI systems to label certain AI activity and is expected to apply from 2 August 2026. However, it is unclear whether actions taken by AI agents must be labeled. An upcoming Code of Practice (“Code”) will be an opportunity to clarify this issue by describing voluntary measures that, if followed, would constitute evidence of compliance with certain parts of Art. 50.¹
● I argue that Art. 50 likely requires:
○ Web requests (e.g. online payments) and browser actions from AI agents to be labeled. Art. 50 requires the labeling of AI outputs, and an agent’s actions plausibly count as such. Furthermore, labeling them would serve the Article’s transparency goals, for example by helping humans distinguish AI activity from genuine human opinion. Finally, labeling web requests and browser actions is feasible through metadata (see the first sketch after this summary).
○ Labels to be verifiable. It should be possible to verify both who created a label (e.g. a particular developer) and whether it has been tampered with, because Art. 50 requires labeling to be “effective, interoperable, robust and reliable” (see the second sketch after this summary).
● To operationalize these considerations, I propose potential text for the Code in this article’s Appendix.
● Even labeling all AI actions (and, similarly, all AI-generated content) as synthetic could fail to provide sufficient transparency if labels are ignored or do not convey useful information, for example if the majority of online content comes to be AI-generated.
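To make the metadata point concrete, below is a minimal sketch of an AI agent disclosing itself in the headers of an outgoing web request. The header names and values (X-AI-Agent, X-AI-Agent-Provider, X-AI-Agent-System) are hypothetical illustrations I have chosen for this sketch; they are not an existing standard and not something Art. 50 prescribes.

```python
# A minimal sketch of labeling an agent's web requests via HTTP headers.
# All header names and values here are hypothetical illustrations, not a
# standard or a requirement of Art. 50.
import requests  # third-party: pip install requests

AGENT_LABEL_HEADERS = {
    "X-AI-Agent": "true",                        # request originates from an AI agent
    "X-AI-Agent-Provider": "example-developer",  # who is responsible for the agent
    "X-AI-Agent-System": "example-agent/1.0",    # which system produced the action
}

def labeled_get(url: str, **kwargs) -> requests.Response:
    """Issue a GET request that carries AI-agent disclosure metadata."""
    headers = {**AGENT_LABEL_HEADERS, **kwargs.pop("headers", {})}
    return requests.get(url, headers=headers, **kwargs)

if __name__ == "__main__":
    response = labeled_get("https://example.com")
    print(response.status_code)
```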
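Similarly, for verifiability: the sketch below, which assumes Ed25519 signatures via the Python cryptography package, shows how a developer could sign label metadata with a private key so that anyone holding the corresponding public key can check both who created the label and that it has not been tampered with. The label fields are again hypothetical, and how verifiers obtain an authentic public key (e.g. from a registry) is out of scope here and would need its own mechanism.

```python
# A minimal sketch of verifiable labels using Ed25519 signatures
# (via the third-party "cryptography" package: pip install cryptography).
# The label fields below are hypothetical illustrations.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_label(private_key: Ed25519PrivateKey, label: dict) -> bytes:
    """Sign a canonical (sorted-key) JSON encoding of the label."""
    payload = json.dumps(label, sort_keys=True).encode()
    return private_key.sign(payload)

def verify_label(public_key: Ed25519PublicKey, label: dict, signature: bytes) -> bool:
    """Return True iff the signature matches this label and this key."""
    payload = json.dumps(label, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Round trip with a freshly generated key pair.
private_key = Ed25519PrivateKey.generate()
label = {"ai_agent": True, "provider": "example-developer", "action": "web_request"}
signature = sign_label(private_key, label)

assert verify_label(private_key.public_key(), label, signature)  # authentic label verifies
assert not verify_label(  # tampered label fails verification
    private_key.public_key(), {**label, "ai_agent": False}, signature
)
```

Signing a canonical (sorted-key) JSON encoding ensures that the same label always verifies the same way, regardless of how its fields happen to be ordered.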



