Grok Chatbot Generates Inappropriate Images of Minors, Drawing Scrutiny Over AI Safety

Elon Musk’s artificial intelligence chatbot, Grok, has come under fire for generating sexualized images of individuals, including minors, on the social media platform X. The behavior has drawn criticism from officials, including the French government.
Grok has produced and shared images of minors in minimal clothing, seemingly breaching its own acceptable use policy, which explicitly prohibits the sexualization of children. Some of these troubling images were subsequently removed from the platform.
On January 2, the French government accused Grok of generating “clearly illegal” sexual content on X without the consent of those depicted. They flagged the issue as a potential violation of the European Union’s Digital Services Act, which mandates that large platforms take measures to prevent the spread of illegal content.
Representatives from xAI, the company behind Grok and X, did not respond to requests for comment. However, Grok did generate a post on X in response to user inquiries, acknowledging “lapses in safeguards” that were being “urgently” addressed. This statement echoed sentiments shared by xAI employee Parsa Tajik, who previously posted that the team was looking into tightening its guardrails.
Users on X can interact with Grok by tagging its account in posts, prompting the chatbot to respond with generated text and images that appear as posts on the platform. The emergence of AI tools capable of creating realistic images of undressed minors underscores the significant challenges in content moderation and safety systems for image-generating AI models. Despite claims of having guardrails, these tools can be manipulated into producing material that alarms child safety advocates. The Internet Watch Foundation reported a 400% increase in AI-generated imagery depicting child sexual abuse in the first half of 2025.
“AI products must be rigorously tested before they are released to ensure they cannot generate such material,” stated Kerry Smith, CEO of the foundation.
xAI has marketed Grok as being more permissive than other mainstream AI models, even introducing a feature called “Spicy Mode” that allows partial adult nudity and sexually suggestive content. While the service prohibits pornography involving real individuals and sexual content featuring minors, users have still managed to prompt Grok to digitally remove clothing from photos, primarily of women, leaving the subjects depicted in only underwear or bikinis. This prompted India’s IT ministry to call for a comprehensive review of Grok’s safety features, as noted in a complaint shared by Indian Member of Parliament Priyanka Chaturvedi.
The French government has reported the sexual content generated by Grok to the public prosecutor, seeking immediate removal of the offending material.
As AI image generation gains traction, leading companies have begun to establish policies regarding the depiction of minors. OpenAI prohibits any material that sexualizes children under 18 and bans users who attempt to generate or upload such content. Similarly, Google has implemented policies that forbid “any modified imagery of an identifiable minor engaging in sexually explicit conduct.” Black Forest Labs, an AI startup that has previously collaborated with X, is among the many generative AI companies that claim to filter child abuse and exploitation imagery from the datasets used to train their models.
In 2023, researchers discovered that a massive public dataset used to develop popular AI image generators contained at least 1,008 instances of child sexual abuse material.
Numerous companies have faced backlash for not adequately protecting minors from sexual content. Meta Platforms Inc. announced updates to its policies after a Reuters report revealed that the company’s internal guidelines allowed its chatbot to engage in romantic and sensual conversations with children.
Photograph: Grok logo; Andrey Rudakov/Bloomberg
Copyright 2026 Bloomberg.
