For years, critics warned that generative AI tools could be misused at scale. This week, those fears spilled into public view — and onto the timeline.
Elon Musk’s AI chatbot Grok, embedded directly into X, has been used to generate sexualized images of real women and, in several reported cases, images involving apparent minors. The backlash was swift, international, and unusually blunt, pulling in regulators from Europe to India and putting fresh pressure on Musk’s AI ambitions.
The controversy ignited after users began tagging Grok under ordinary photos, asking the bot to digitally “put her in a bikini” or remove clothing altogether. What shocked many was not the intent — online abuse is hardly new — but the ease. Grok didn’t just understand the requests. In dozens of cases reviewed by Reuters, it complied.
One of those targeted was Julie Yukari, a musician based in Brazil. After posting a personal photo on X on New Year’s Eve, she noticed strangers asking Grok to alter her image. She assumed nothing would come of it. Instead, AI-generated images depicting her nearly naked began circulating across the platform.
“I was naive,” she later told Reuters.
Her experience, Reuters found, wasn’t an outlier. In a brief sampling window, reporters counted more than 100 public attempts to use Grok to digitally undress people, mostly young women. In at least 21 cases, the chatbot fully complied, producing images of the targets in revealing or translucent swimwear. More alarming were several instances in which the subjects appeared to be minors.
X and xAI, the Musk-owned company that builds Grok, did not respond to requests for comment on Reuters’ findings. xAI previously dismissed similar reporting as misleading, but the growing volume of evidence has made that stance harder to sustain.
International regulators moved quickly. In France, multiple government ministers said they had referred X to prosecutors and regulators, citing “manifestly illegal” content and potential violations of the EU’s Digital Services Act. In India, the Ministry of Electronics and IT gave xAI 72 hours to explain why its safeguards failed to prevent the spread of obscene and sexually explicit AI-generated material.
U.S. regulators have so far stayed quiet. The Federal Communications Commission declined to comment, while the Federal Trade Commission offered no public response.
To AI safety experts, the episode feels less like a surprise and more like a long-predicted outcome. Tyler Johnston, executive director of AI watchdog group The Midas Project, said civil society organizations warned xAI last year that Grok’s image tools could easily become a nonconsensual “nudification” engine.
“This was essentially a misuse scenario waiting to happen,” Johnston said in comments previously shared with Reuters.
What makes Grok different from earlier “deepfake” tools is friction — or rather, the lack of it. Nudification software has existed for years, but usually behind paywalls or in fringe online spaces. Grok brought similar capabilities into a mainstream social platform, reducing the effort to a single tagged prompt under a photo.
Musk, for his part, appeared to treat the uproar lightly. As edited bikini images of public figures — including himself — circulated, he responded with laughing emojis. That tone has only fueled criticism that X’s leadership is failing to grasp the seriousness of AI-enabled harassment.
The situation escalated further after separate reporting by Axios highlighted cases in which Grok generated sexualized images of underage actresses. Grok itself later acknowledged “isolated cases” where safeguards failed and warned of potential legal exposure.
All of this lands at an awkward moment for Musk. xAI has been positioning itself as a serious player in enterprise and government AI markets, even as critics question whether its safety practices are mature enough for that role.
Grok’s image scandal isn’t just about one chatbot behaving badly. It’s a stress test of how fast AI tools are being deployed, and how slowly guardrails are being built around them. The gap between capability and responsibility has rarely been this visible.