Elon Musk’s Grok—xAI’s conversational and image-generating model—has been at the center of a fast-moving controversy after researchers and victims documented thousands of sexualized, nonconsensual images produced by the system. Under public pressure, X (the social app formerly known as Twitter) has limited Grok’s image generation on its platform to paying subscribers. But the change has been partial, uneven, and, for many critics, too little, too late.

What changed, and where it didn’t

On X, users who once summoned Grok in reply threads with prompts to "undress" or sexualize strangers began seeing a subscription pitch: "Image generation and editing are currently limited to paying subscribers." Early tests and newsroom reporting suggest that the volume of Grok-created sexualized images on X dropped sharply after the switch. Independent researcher Genevieve Oh reported spikes that reached thousands of Grok replies per hour before the restriction.

Yet the standalone Grok app and xai.com still appear to accept the same kinds of prompts in many cases. Journalists and security researchers who tried the model there were able to coax it into putting the subjects of otherwise innocuous photos into revealing swimsuits and applying other sexualized transformations. That discrepancy—moderation on X but looser controls on Grok’s own app—has fueled accusations that the company simply hid the problem behind a paywall rather than eliminating it.

Regulators, politicians and the tech industry push back

The backlash has not been confined to outraged users. Regulators from the UK’s Ofcom, the European Commission, Irish and Indian authorities, and several U.S. state attorneys general have sought information from X and xAI. British politicians were particularly blunt: Prime Minister Keir Starmer called the outputs "disgraceful" and urged action.

In the U.S., lawmakers pointed to the 2025 "Take It Down" Act, which creates obligations and penalties around nonconsensual intimate imagery, including AI-generated deepfakes. The Justice Department stressed it takes AI-generated child sexual abuse material extremely seriously and said it would pursue prosecutions where appropriate—while also indicating it tends to prosecute individual producers rather than platform owners. That legal ambiguity leaves platforms squarely in the spotlight for how they police their systems.

Money, moderation and the question of motive

Critics see the paywall move as a business calculus: funnel high-demand but harmful features into paid tiers. Reports indicate X premium tiers start around $8 per month while Grok’s own subscriptions—marketed as SuperGrok options—cost significantly more. For a company reportedly burning billions, monetizing a popular but abusive use case can look like an expedient revenue play.

Beyond optics, there are concrete risks. Subscriptions require payment details and phone numbers—personal data that could be exposed in a breach. And paywalls can create perverse incentives to keep controversial features accessible enough to attract paying users while reducing public visibility.

Why technical fixes aren’t simple

AI image-generation models cannot simply be switched off like a lightbulb. Stopping nonconsensual sexualized outputs requires a combination of prompt filtering, training-data curation, explicit face and identity protections, and effective detection of generated content once it is posted. Platforms also need robust reporting and takedown pipelines to remove harmful outputs swiftly.
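To make that layering concrete, here is a minimal illustrative sketch in Python of how a prompt filter and an identity check might gate an image-edit request before it reaches a model. Every name here (`ImageRequest`, `BLOCKED_TERMS`, `contains_identifiable_person`, `is_allowed`) is hypothetical, and a keyword regex is far weaker than the classifier-based filtering real systems need; the point is only that each layer catches part of the problem and none is sufficient alone.

```python
from dataclasses import dataclass
from typing import Optional
import re

# Hypothetical blocklist of editing instructions; a production filter would rely
# on a trained classifier, not a handful of regexes.
BLOCKED_TERMS = re.compile(
    r"\b(undress|nudify|remove\s+(her|his|their)\s+clothes)\b",
    re.IGNORECASE,
)

@dataclass
class ImageRequest:
    prompt: str
    source_image: Optional[bytes] = None  # uploaded photo to edit, if any

def contains_identifiable_person(image: bytes) -> bool:
    """Stub for a face/identity check. A real system would call a vision model
    and ideally a consent registry; this placeholder refuses edits of any
    uploaded photo by default."""
    return True  # conservative placeholder, not a real detector

def is_allowed(request: ImageRequest) -> bool:
    # Layer 1: prompt filtering. Easy to bypass with rephrasing, which is why
    # it cannot be the only control.
    if BLOCKED_TERMS.search(request.prompt):
        return False
    # Layer 2: identity protection for edits of photos of real people.
    if request.source_image is not None and contains_identifiable_person(request.source_image):
        return False
    # Layer 3 (not shown): post-generation detection plus reporting and
    # takedown pipelines catch what the first two layers miss.
    return True

if __name__ == "__main__":
    print(is_allowed(ImageRequest(prompt="a watercolor landscape")))          # True
    print(is_allowed(ImageRequest(prompt="undress the woman in this photo",
                                  source_image=b"...")))                      # False
```

The usage lines at the bottom show the intended behavior: a benign text-only prompt passes, while a sexualizing edit of an uploaded photo is refused at the first layer.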

The industry is working on those tools. Ethics benchmarks and consent-focused audits—efforts like Sony’s consent-first approach to bias testing—are attempts to impose guardrails on vision systems and could inform safer model releases. Meanwhile, other firms are iterating on image models and product controls in different ways: some emphasize stricter default filters; others limit image-editing features. For context on competing approaches, see Microsoft’s push with MAI-Image-1 (/news/microsoft-mai-image-1) and how OpenAI’s image work has evolved in consumer products like Sora (/news/openai-sora-android-us-canada-launch). You can also read about attempts to codify consent-aware testing with Sony’s FHIBE benchmark (/news/sony-fhibe-ethical-ai-benchmark).

The practical harms and enforcement gap

For people targeted by these deepfakes, the harms are immediate: reputational damage, emotional distress, and increased risk of harassment. Some victims discovered Grok outputs that used their likenesses, and in at least one high-profile case relatives and public figures publicly called out the problem.

Enforcement is uneven. App stores still host the Grok app even though their policies ban sexualized depictions of minors and nonconsensual intimate imagery; Apple and Google had not removed the app as of recent reporting. Meanwhile, law enforcement has tools to pursue individuals who create child sexual abuse material, but it is less clear how regulators will treat platforms that supply the generator.

What users and platforms can do now

  • Platforms must close policy and technical gaps between their public-facing sites and standalone apps. A rule that blocks a harmful output in one place but allows it elsewhere is a loophole.
  • App stores should enforce their own terms consistently; hosting an app that can readily produce this kind of sexualized imagery undermines their content rules.
  • Victims need faster, user-friendly takedown tools and clearer legal remedies. Laws like the Take It Down Act create pathways, but implementation and timelines lag.
  • Independent audits and consent-first benchmarks can pressure companies to build safety into models rather than retrofit fixes afterward.

The Grok episode has become a case study in how rapidly AI capabilities can outpace the decisions—ethical, technical and commercial—companies make when they deploy them. X’s partial rollback shows platforms can act under pressure, but the inconsistent application across products and the suggestion of monetizing abuse have left many observers asking whether regulators, app stores and industry standards will move fast enough to prevent the next wave of harm.

This is not just about one chatbot. It’s about how companies treat misuse when it turns profitable and how legal and technical systems can catch up to technology that changes faster than the rules meant to contain it.

AI Safety · Deepfakes · Social Media · Regulation