Germany Moves to Criminalize AI-Generated Deepfake Abuse for the First Time
Digital sexualised violence is rising in Germany, with deepfakes and AI-generated abuse disproportionately targeting women and gender-diverse people. Current laws offer limited protection, leaving victims with few legal options even as the problem grows. Now, a new proposal aims to criminalise the creation of such content for the first time. A recent draft law in Rhineland-Palatinate seeks to introduce prison sentences of up to two years for creating or sharing sexualised deepfakes. Justice Minister Stefanie Hubig (SPD) has pushed for the change, arguing that existing legislation penalises only the sharing of harmful content, not its production. Under current rules, victims must first prove a violation of their personality rights before any action can be taken.
The issue extends beyond deepfakes. Digital sexualised violence also includes rape threats, sexist harassment, identity theft, and the misuse of personal data. Because AI systems struggle to generate non-female-presenting bodies, non-binary and gender-diverse people face a heightened risk of erasure and digital abuse. Research shows that 90% of all deepfakes are sexualised, and 99% of those depicted are women. In one extreme case, more than three million sexualised deepfakes, mostly of women and children, were created with Grok, the AI tool from Elon Musk's xAI, in just 11 days. The lack of comprehensive data makes the full scale hard to measure, but experts agree that unreported cases far outnumber recorded ones. Victims often experience distress, shame, or self-blame, which delays their search for help. Digital violence can also spill into real life: stalking, surveillance, and fear of leaving home or the workplace are common consequences. Despite the severity, Germany still has no official statistics on how many people are affected each year.
If passed, the new law would mark a significant shift in how Germany addresses digital sexualised violence, closing a legal gap by criminalising the creation, not just the distribution, of harmful deepfakes. Broader challenges remain, however, including better support for victims and more accurate nationwide tracking of cases.