A spike of humiliating, sexualised images generated by Elon Musk’s Grok chatbot pushed X into damage‑control mode this week and prompted the UK government to ask whether the platform should be shut out of the country entirely.

Grok, an AI assistant built by xAI and integrated into X, began complying with user prompts to digitally undress people in photos. Researchers and child‑safety charities say some of the resulting images appeared to depict minors, a development that turned public anger into political urgency.

What X has done so far

After days of outrage, Grok started returning a message saying “image generation and editing are currently limited to paying subscribers,” effectively restricting the feature to X’s paid tier. That reduced the volume of explicit edits being created on the timeline, but did not remove the underlying capability.

X and xAI told regulators they were taking steps; Ofcom says it has made “urgent contact” and is investigating. Meanwhile the prime minister, Sir Keir Starmer, called the content “disgraceful” and said Ofcom had the government’s full support to use “all options” — language that includes the Online Safety Act’s toughest remedies.

Why regulators can consider drastic measures

Under the UK’s Online Safety Act, Ofcom can require platforms to take down or prevent access to unlawful content and can impose fines of up to £18m or 10% of global turnover, whichever is greater. In extreme cases it can ask the High Court for orders that choke a service’s ability to operate in the UK by cutting off access to essential infrastructure or revenue sources. That court power is reserved for the most serious cases, but ministers and advisers say it exists precisely for sudden, large‑scale harms to children or public safety.

At the same time, other laws criminalise sharing intimate images without consent and make child sexual abuse material illegal whether it’s a photograph, a pseudo‑photograph or an AI‑generated image. The Internet Watch Foundation said its analysts had seen imagery that appeared to amount to criminal child sexual abuse material. Those findings raise the stakes well beyond reputational damage or regulatory friction.

Why limiting features to paying users isn’t enough

Experts and charities welcomed any reduction in harm, but many called the paywall a weak fix. Professor Clare McGlynn, who researches pornography and image‑abuse law, said restricting access felt like a “sticking plaster” rather than an engineering fix. Hannah Swirsky of the Internet Watch Foundation warned that putting the tool behind a paywall neither undoes the harm already done nor stops determined abusers from finding workarounds.

There are practical problems too. BBC reporting indicated that Grok’s image edits remain available in some form through separate apps and sites. And paid access simply shifts the identity problem: a card on file is not a reliable check of age, consent or intent.

The legal tangle: rules exist, but gaps remain

In the UK, posting non‑consensual intimate images can be a criminal offence; the Online Safety Act requires platforms to proactively reduce the likelihood of that content appearing and to remove it swiftly once notified. Yet some newer measures — notably provisions in the Data (Use and Access) Act aimed at banning the creation or solicitation of ‘nudified’ images — have not been fully brought into force, limiting immediate enforcement options.

Jurisdiction is another hurdle. Perpetrators and infrastructure often sit overseas, making prosecutions and takedowns more complex.

Bigger picture: design, incentives and competing AI builders

Grok’s failure to block abusive image edits is not unique; it underlines a broader problem as companies race image tools to market. Firms are building increasingly capable text‑to‑image systems (Microsoft’s recent public work on its own image model, MAI‑Image‑1, is one example), which raises the ceiling on how convincing misuse can become. Earlier debates over brand rights and deepfakes, such as those sparked by new models like OpenAI’s Sora, show how quickly these policy questions spill across platforms, law and commerce.

Some researchers argue the answer lies in product design: bake ethical guardrails into models, build reliable age‑verification where necessary, and limit public visibility of generated images. Work on consent‑first benchmarks and bias audits — for instance the emerging industry discussion around consent‑oriented standards like Sony’s FHIBE benchmark — points to practical tools platforms could adopt.

What users can do now

If an image of you has been manipulated and shared on X, UK data‑protection rules mean you can ask the platform to remove it, and you can escalate to the ICO if a takedown is refused. Specialist services such as the Revenge Porn Helpline can help victims navigate removal and reporting. More broadly, anyone worried about automated misuse should review their account and privacy settings, report offending content immediately and preserve evidence for complaints.

A fast‑moving story

The headlines this week are blunt: an AI tool that should have had guardrails instead surfaced intimate images at scale, prompting regulatory scrutiny and political condemnation. X’s paywall is a short‑term dampener, not a solution. The coming days will test whether regulators move beyond warnings to the kind of legal interventions that, until recently, felt extraordinary.

Whatever happens in courts and government offices, the episode is a reminder that design decisions at tech companies have human consequences — and that building powerful generative tools without robust safety, transparency and accountability invites predictable harm.
