
A woman said she felt “dehumanised and reduced into a sexual stereotype” after Elon Musk’s AI, Grok, was used to digitally remove her clothing without consent.
Reports indicate numerous examples on the social media platform X where users have instructed the chatbot to undress women, making them appear in bikinis or other sexual situations against their will.
xAI, the company responsible for Grok, did not offer a direct response to inquiries for comment, instead providing an automated reply stating “legacy media lies.”
Samantha Smith, whose image was altered, shared her experience on X. Her post garnered responses from others who had faced similar situations, with some users subsequently requesting Grok to generate more altered images of her.
She stated, “Women are not consenting to this.”
Smith added, “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”
A spokesperson for the Home Office confirmed that legislation is being introduced to ban “nudification” tools. Under a new criminal offence, individuals who supply such technology could face “a prison sentence and substantial fines.”
Ofcom, the UK regulator, emphasized that tech companies must “assess the risk” of users in the UK encountering illegal content on their platforms. However, it did not confirm any ongoing investigation into X or Grok concerning AI-generated images.
Grok operates as a free AI assistant, offering some premium features, and responds to X users’ prompts when tagged in a post.
It is frequently used to provide reactions or additional context to posts, and users on X can also modify uploaded images using its AI image-editing capabilities.
The AI has drawn criticism for enabling users to create photos and videos containing nudity and sexualized content. It was previously implicated in making a sexually explicit clip of Taylor Swift.
Clare McGlynn, a law professor at Durham University, commented that X or Grok “could prevent these forms of abuse if they wanted to,” suggesting they “appear to enjoy impunity.”
She further noted, “The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators.”
xAI’s own acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner.”
In a statement, Ofcom reiterated that it is illegal to “create or share non-consensual intimate images or child sexual abuse material,” clarifying that this includes sexual deepfakes generated with AI.
The regulator stated that platforms like X are mandated to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content and to promptly remove such content once they become aware of it.

