What does the EU AI Act say about deepfakes?
Welcome to the latest instalment of the Sumsub Q&A series! This week, we're diving into a fascinating topic: the EU AI Act and its approach to deepfake regulation. Our special guest for this discussion is Natalia Fritzen, our AI Policy and Compliance Specialist.
The European Union's Artificial Intelligence (AI) Act, approved by the European Parliament on March 13, 2024, acknowledges the potential disruptive effect of "synthetic content," including deepfakes, on modern societies. To ensure transparency and combat misinformation, the Act requires deepfakes and AI-generated content to be clearly and visibly labeled as artificial.
Watermarking, a technical method for identification, is recommended as a way to achieve this labeling. This involves teaching AI models to embed watermarks in their outputs and making available algorithms that can detect and read these watermarks. However, concerns have been raised about the effectiveness, technical implementation, accuracy, and robustness of watermarking as a remedy against deepfakes.
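To make the embed-and-detect pattern concrete, here is a minimal, purely illustrative sketch of a least-significant-bit (LSB) watermark on a grayscale pixel array. This is an assumption-laden toy, not anything the Act prescribes: real AI-output watermarks (statistical watermarks for generated text, robust spread-spectrum marks for images) are far more sophisticated and are designed to survive compression and editing, which a naive LSB mark does not.

```python
# Toy illustration only: hide a label such as "AI-generated" in the least
# significant bits of pixel values, then read it back. Real watermarking
# schemes referenced in the AI Act debate are much more robust than this.

def embed_watermark(pixels: list[int], mark: str) -> list[int]:
    """Hide `mark` (UTF-8, null-terminated) in the LSB of each pixel value."""
    payload = mark.encode("utf-8") + b"\x00"  # null byte terminates the mark
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with the payload bit
    return out

def extract_watermark(pixels: list[int]) -> str:
    """Read LSBs back, byte by byte, until the null terminator."""
    data = bytearray()
    for start in range(0, len(pixels) - 7, 8):
        byte = 0
        for p in pixels[start:start + 8]:
            byte = (byte << 1) | (p & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode("utf-8")

# Example: mark a fake 8-bit grayscale "image" as AI-generated.
image = [127] * 256
marked = embed_watermark(image, "AI-generated")
assert extract_watermark(marked) == "AI-generated"
```

The fragility of exactly this kind of scheme (the mark vanishes under re-encoding or cropping) is one reason commentators question watermarking's robustness as a deepfake remedy.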
Under the EU AI Act, providers of generative AI models must disclose AI use and label AI-generated content. These labeling and watermarking measures are designed to help users distinguish AI content from authentic content, in line with the Act's regulatory aims of combating misinformation, promoting responsible use, and protecting intellectual property.
While the Act sets out these transparency and watermarking obligations, it does not specify detailed technical watermarking standards or precise formats. Instead, watermarking is a general technical means recommended or required as part of fulfilling transparency and identification duties. The EU Commission may issue delegated acts over the coming years to refine technical documentation and potentially more detailed requirements, including conformity assessments.
It's important to note that the requirements for general-purpose AI providers, including watermarking, apply extraterritorially to any entity placing AI models on the EU market. These requirements become effective from August 2, 2025, with full enforcement starting in 2026.
The EU AI Act does not spell out concrete enforcement measures specific to deepfake-related non-compliance with its transparency provisions. This lack of clear enforcement mechanisms has cast doubt on whether the Act offers enough protection against deepfakes.
We invite our audience to submit their own questions for the series. New answers will be posted every other Thursday. You can follow the Q&A series on Instagram and LinkedIn. Stay tuned for more insights into the world of AI regulation!
- The EU AI Act aims to curb the spread of misinformation through deepfakes by requiring that such content be labeled as AI-generated, consistent with the Act's broader aims of promoting responsible use and protecting intellectual property.
- The Act recognizes the potential impact of deepfakes and AI-generated content on modern societies and mandates transparency measures, such as watermarking, so users can distinguish AI content from real content.