Three young Californian women have sued Elon Musk’s xAI, claiming its Grok chatbot enabled the creation and spread of child sexual abuse images featuring them.
Filed on Monday in federal court, the lawsuit details how an unknown user fed their personal photos and videos into Grok without consent, generating nude depictions and explicit sexual scenarios that later appeared online. Two of the plaintiffs are minors, and all three are proceeding anonymously to protect their privacy.
Grok, xAI’s AI model launched in 2023 and embedded in Musk’s X platform, gained notoriety last year with the release of Grok Imagine, an image-generation feature with a so-called “spicy mode.” The feature allowed users to produce hyper-realistic images from text prompts, including the controversial “undressing” of real people whose photos were pulled from the internet. High-profile victims ranged from Taylor Swift to everyday social media users, sparking widespread alarm.
A study by the Center for Countering Digital Hate revealed that Grok churned out millions of sexualised images within two weeks of the mode’s debut, with more than 20,000 portraying children. The plaintiffs’ lawyers argue xAI knowingly rolled out these capabilities for commercial gain, likening the manipulations to a rag doll brought to life through the dark arts.
“xAI—and its founder Elon Musk—saw a business opportunity,” the complaint asserts. “They knew Grok could produce such results, including by using the images and videos of children, and publicly released it anyway.”

One woman discovered her doctored high school yearbook photo through an anonymous tip on Instagram, which pointed to a private Discord server hosting similar Grok-generated images of at least 18 other underage girls. The other two plaintiffs found their fakes elsewhere online.
Musk initially brushed off concerns, stating in January that he was unaware of any naked underage images from Grok—“literally zero”—and blaming users. “Obviously, Grok does not spontaneously generate images, it does so only according to user requests.”
Intensifying scrutiny prompted investigations by regulators including Ofcom in the UK, the European Commission, and California authorities. X eventually introduced fixes to curb the undressing function. In a separate case, the operator of the Discord server was arrested, with police seizing hundreds of AI-altered child abuse images traded on Telegram and Mega.
Now under SpaceX following last month’s acquisition, xAI has not commented. The suit demands substantial damages and an injunction halting such outputs, fuelling broader debates on AI safeguards.