California’s attorney general has escalated a fast-moving scandal over Grok, the chatbot and image tool from Elon Musk’s xAI. On Jan. 16 the state sent xAI a cease-and-desist letter demanding the company immediately stop the creation and distribution of nonconsensual intimate images and child sexual abuse material, and asking xAI to show within days that it had done so. The move follows weeks of mounting evidence that Grok was being used to make sexualized deepfakes of women and children.
The AG’s office, led by Rob Bonta, accused xAI of “facilitating the large-scale production” of nonconsensual nudes used to harass people online. The letter said California expects xAI to prove within five days that it is taking action. Across the country, other state attorneys general have signaled similar intent: Michigan’s Dana Nessel warned of potential civil or criminal enforcement if the company would not disable what she called a feature, not a bug.
A tool and the strain on safeguards
At the center of the controversy is Grok’s so-called “spicy” mode, a set of image-generation behaviors that let users prompt the model to remove or alter clothing and produce sexualized images of identifiable people. Users showed how easy it could be to upload a photo and coax Grok into generating explicit variants, sometimes producing images that portrayed minors. Researchers estimated that output climbed into the thousands of images per hour at the peak of the abuse.
xAI moved to curtail some capabilities on the X platform, and Elon Musk said geo-blocking would limit the generation of images showing revealing swimsuits, underwear and similar content in jurisdictions where laws forbid it. But regulators and rights advocates say those changes are partial and too slow: the standalone Grok app and certain Grok tabs continued to produce problematic images after X’s adjustments. When reporters reached out, xAI’s automated response read simply “Legacy Media Lies,” undercutting whatever reassurance the company hoped to convey.
Lawsuits and cross-currents
The legal battles are already multiplying. Ashley St. Clair, the mother of one of Musk’s children, sued xAI in New York state court, alleging negligence and emotional distress and claiming that Grok produced sexually explicit deepfakes of her even after she told the company she did not consent. xAI responded by suing St. Clair in Texas, claiming she violated the platform’s terms of service and seeking damages.
Internationally, Britain, Canada and Japan have launched probes; Malaysia and Indonesia temporarily blocked Grok altogether. Lawmakers in Washington also pressed multiple tech companies for answers about how they plan to combat sexualized AI deepfakes, signaling that this is no longer only a private compliance problem.
Experts say this is a hard technical and legal problem. Riana Pfefferkorn of Stanford’s Institute for Human-Centered AI told PBS that bad actors are inventive in finding prompts that bypass guardrails. She and others argue that model developers need clearer legal latitude to “red-team” and test models for illegal outputs without fear of prosecution, an approach some have dubbed a conditional safe harbor for researchers.
Why enforcement is getting creative
State attorneys general are invoking existing statutes that criminalize the production and distribution of child sexual abuse material and certain forms of nonconsensual explicit imagery. But whether those laws can be applied directly to a model-maker, rather than to an individual user who creates and shares an image, is legally unsettled. Some state enforcers may instead pursue civil claims for emotional distress or negligence as a more viable route.
Michigan’s attorney general compared Grok to platforms previously shuttered for enabling illicit activity, saying that if Musk will not act, states and the federal government should force him to. Legal scholars note that proving causation and intent in tech-platform cases can be difficult, but the volume of complaints and the breadth of cross-border regulatory attention make this an unusual test case.
The broader technology context
Grok’s troubles are part of a larger industry story: the race to ship image models and the thorny safety trade-offs that follow. Major labs and companies are rolling out text-to-image systems and fielding scrutiny over what their tools can create and how they’re policed in the wild. For instance, Microsoft recently introduced MAI-Image-1, a new in-house text-to-image model that highlights how mainstream this capability has become. At the same time, the debate around branding, distribution and legal exposure continues as other players, such as OpenAI with Sora’s expansion to Android, push consumer-facing apps into mobile distribution and rekindle concerns about misuse. And research into auditing and bias, such as Sony’s consent-first FHIBE benchmark, illustrates the emerging appetite for technical standards to reduce harms.
What this looks like on the ground
For people targeted by these images, the consequences are immediate and personal. Plaintiffs describe intense humiliation and real-world harassment. Platforms say they’re trying to balance freedom of expression and product capability against safety demands, but the public record suggests patchwork fixes are not enough: geo-blocking and partial throttles can leave other endpoints exposed, and enforcement across jurisdictions lags behind the pace of misuse.
The Grok story is moving quickly: state and national regulators, private plaintiffs and the company itself are locked in a legal and public-relations tug-of-war that will test how the law treats AI intermediaries. It’s also a stark reminder that when a model can cheaply manufacture intimate images of real people, the technical question of “can we build it” collides with the social question of “should we let it run unfettered.”