The UK government is amending the Crime and Policing Bill to legally require tech platforms to take down non-consensual intimate images — including AI-generated deepfakes — within 48 hours of a single report. Companies that fail to comply face fines of up to 10% of qualifying global revenue or having their services blocked in Britain. The measure follows the Grok crisis that produced an estimated three million sexualised images in eleven days, around 2% of which appeared to depict minors.
The 48-Hour Rule
Prime Minister Keir Starmer announced the amendment on 18 February alongside an op-ed in The Guardian declaring violence against women and girls a national emergency. The mechanism is designed to end what Starmer described as a “whack-a-mole” experience for victims, who currently have to chase takedowns site by site only to see images reappear elsewhere within hours.
Under the new framework, victims report an image once. Platforms must then remove the content wherever it appears and prevent it from being re-uploaded. Regulator Ofcom is considering a digital tagging system that would automatically detect and block flagged images — mirroring the hash-matching infrastructure already used to remove child sexual abuse material (CSAM) and terrorist content. The Department for Science, Innovation and Technology will separately publish guidance for internet service providers on blocking access to “rogue websites” that fall outside the scope of the Online Safety Act.
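Ofcom has not published a design for the tagging system, but hash-matching pipelines of the kind already used against CSAM generally work the same way: fingerprint each reported image once, then screen every new upload against the fingerprint database. The sketch below illustrates that flow only; the `TakedownRegistry` class and its methods are hypothetical, not any real platform or Ofcom API. Production systems also use perceptual hashes (PhotoDNA-style) that survive resizing and re-encoding, whereas the cryptographic hash used here for simplicity only catches byte-identical copies.

```python
import hashlib


class TakedownRegistry:
    """Minimal sketch of hash-based re-upload blocking.

    A real deployment would use a perceptual hash robust to
    re-encoding; SHA-256 is used here only to keep the example
    self-contained.
    """

    def __init__(self):
        self._flagged: set[str] = set()

    def flag(self, image_bytes: bytes) -> None:
        # Record the fingerprint of a reported image once.
        self._flagged.add(hashlib.sha256(image_bytes).hexdigest())

    def is_blocked(self, image_bytes: bytes) -> bool:
        # Screen an incoming upload against the flagged set.
        return hashlib.sha256(image_bytes).hexdigest() in self._flagged


registry = TakedownRegistry()
registry.flag(b"reported-image-bytes")
print(registry.is_blocked(b"reported-image-bytes"))  # True: exact copy blocked
print(registry.is_blocked(b"some-other-upload"))     # False: unknown content passes
```

The one-report model in the amendment corresponds to the single `flag()` call: after that, the platform, not the victim, carries the burden of catching every re-upload.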
Technology Secretary Liz Kendall framed the shift explicitly: no woman should have to chase platform after platform, waiting days for an image to come down. Minister for Violence Against Women and Girls Alex Davies-Jones added that the law means tech platforms can no longer “drag their feet” when harmful content is flagged.
The Grok Trigger
The timing is inseparable from the crisis that erupted in late December 2025, when Elon Musk’s Grok chatbot — embedded within X — began fulfilling user requests to digitally undress women and girls. A report by the Centre for Countering Digital Hate found Grok produced approximately three million sexualised images in barely eleven days, with around 2% of those analysed appearing to depict minors. Ofcom made urgent contact with X, and the “nudification” function was eventually removed.
Just weeks before the 48-hour announcement, Starmer confirmed that AI chatbots — including xAI’s Grok, Google’s Gemini, and OpenAI’s ChatGPT — will be explicitly brought within the scope of the Online Safety Act. The government is also closing legal loopholes that allowed chatbots to generate deepfake nude images, and planning further restrictions on social media platforms. Across 2025, an estimated eight million deepfake images were shared globally, up from 500,000 just two years earlier.
From Prosecution to Policy
Starmer’s personal investment in the issue predates his political career. As Director of Public Prosecutions leading the Crown Prosecution Service, he worked directly with victims of rape, domestic abuse, and sexual violence. That background shaped the government’s December 2025 strategy paper — a 91-page cross-government framework backed by over £1 billion in investment, with the explicit target of halving violence against women and girls within a decade.
The strategy rests on three pillars: prevention and early intervention targeting root causes among men and boys, aggressive pursuit of perpetrators, and systemic support for victims. The 48-hour takedown rule is the enforcement mechanism for the online dimension of that strategy — converting what was a voluntary industry commitment into a legal obligation with material financial penalties.
Industry and Campaigner Response
Campaigners welcomed the announcement but questioned whether the timeline goes far enough. Hanna Basha, the lawyer who represented television personality Georgia Harrison in her civil revenge pornography case, asked why the deadline was not 24 or even 12 hours, arguing that every hour images remain online compounds the harm. She also raised a structural problem: many victims cannot find where or how to report abusive content in the first place.
The End Violence Against Women Coalition, the Revenge Porn Helpline, and #NotYourPorn — which together delivered a 73,000-signature petition to Downing Street — called the announcement a campaign win while remaining clear-eyed about the scale of the challenge. The industrial volume of deepfake production, enabled by increasingly accessible AI tools, means legislation must compete with technology that evolves faster than any parliamentary timetable.
Musk responded to earlier UK online safety measures by calling the government “fascist” on X in January 2026 — a comment that underlines the diplomatic friction likely to accompany enforcement. Whether the 48-hour window represents a genuine transfer of accountability from victims to platforms, or simply codifies what responsible companies should already be doing, will depend entirely on Ofcom’s willingness and capacity to enforce it.