
Elon Musk’s Grok chatbot has faced criticism and calls for investigation over its use to generate “undressed” images of women and sexualized images resembling minors on the X platform. Grok’s dedicated website and app offer more advanced video generation capabilities than those available on X. These tools are reportedly being used to create extremely graphic, and at times violent, sexual imagery of adults that is considerably more explicit than what Grok produces on X. There are also indications the tools may have been used to generate sexualized videos appearing to depict minors.
In contrast to X, where Grok’s generated content is typically public, images and videos created via Grok’s Imagine model on its app or website are not openly shared by default. However, if a user shares an Imagine URL, the content can become publicly accessible. An analysis of approximately 1,200 Imagine links, some indexed by Google and others found on a deepfake forum, revealed disturbing sexual videos that are significantly more explicit than the images Grok generates on X.
One photorealistic video, hosted on Grok.com, depicted an AI-generated man and woman, both fully naked and covered in blood, engaged in sexual activity, with two other naked women dancing nearby. The video was bordered by anime-style characters. Another photorealistic video featured an AI-generated naked woman with a knife inserted into her genitalia, showing blood on her legs and the bed.
Additional short videos contained imagery of real female celebrities involved in sexual acts. A separate series of videos seemingly showed television news presenters exposing their breasts. One Grok-generated video portrayed CCTV footage on a TV screen, depicting a security guard fondling a topless woman within a shopping mall.
Several videos, possibly created to bypass Grok’s content safety measures, mimicked Netflix “movie” posters. Two such videos showed a naked AI depiction of Diana, Princess of Wales, engaged in sexual activity with two men on a bed, overlaid with Netflix and The Crown series logos.
Paul Bouchaud, lead researcher at the Paris-based nonprofit AI Forensics, stated that approximately 800 of the archived Imagine URLs contained Grok-created videos or images. These URLs, archived since August of the previous year, represent a small fraction of Grok’s overall output, which is estimated to run to millions of images.
Bouchaud described the 800 archived Grok videos and images as “overwhelmingly sexual content.” He noted that much of it consisted of explicit manga and hentai, alongside photorealistic content. The collection included full nudity and complete pornographic videos with audio, which was highlighted as a relatively new development.
Bouchaud estimated that nearly 10 percent of these 800 posts appeared to be related to child sexual abuse material (CSAM). While that material was predominantly hentai, it also included photorealistic depictions of very young individuals engaged in sexual activities. The researcher noted observing videos of very young-appearing women undressing and interacting sexually with men, describing them as “disturbing to another level.”
The researcher reported approximately 70 Grok URLs, potentially containing sexualized content of minors, to European regulators. In many jurisdictions, AI-generated CSAM, whether in the form of images, drawings, or animations, is illegal. While French officials did not immediately comment, the Paris prosecutor’s office recently announced that two lawmakers had filed complaints, leading to an investigation into the social media company regarding the “undressed” images.
xAI, the Elon Musk-owned artificial intelligence firm behind Grok, did not comment on the explicit videos generated by Grok Imagine. After Grok generated sexualized photos of women and apparent minors on X, Musk and X affirmed their commitment to taking action against child sexual abuse material. Musk has also stated on X that “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
Similar to other technology companies combating CSAM, xAI’s policies prohibit the “sexualization or exploitation of children” and “any illegal, harmful, or abusive activities” on its services. The company also employs processes to detect and restrict the creation of CSAM. A Business Insider report from September, based on interviews with 30 current and former xAI employees, indicated that 12 of these individuals had encountered sexually explicit content and prompts for AI CSAM on the services. These employees detailed systems designed to detect AI CSAM and prevent AI models from being trained on such data.
Apple and Google, platforms where Grok is available via their app stores, did not respond to requests for comment. Netflix also did not provide a comment.
In contrast to other major generative AI firms like OpenAI and Google, xAI has permitted Grok to generate AI pornography and adult content. Prior reports have highlighted Grok’s capacity to create hardcore pornography, partly due to its “spicy” mode. xAI’s terms of service state that if users select certain features or input suggestive language, the service may respond with dialogue containing coarse language, crude humor, sexual situations, or violence.
Clare McGlynn, a law professor at Durham University and an expert in image-based sexual abuse, expressed deep concern over the Grok videos, stating that recent developments suggest a descent into “human depravity.” She noted that “inhumane impulses are encouraged and facilitated by this technology without guardrails or ethical guidelines.”
McGlynn highlighted that permitting AI-generated pornography, even when not depicting specific real individuals, raises significant questions regarding safeguards against potentially unlawful content, such as bestiality or rape, and its broader impact. She emphasized that a lack of control over the nature of generated and shared pornography could normalize and minimize sexual violence, while also noting that explicit AI images and videos of real people are already illegal in several nations.
Unlike X, which mandates a login for age-restricted adult content, Grok seemingly lacks age-gating for its sexually explicit videos. Several US states have recently implemented age-verification laws, requiring websites with a significant portion of explicit content to verify user ages.
On a pornography forum dedicated to AI deepfakes and video production tutorials, users have discussed Grok Imagine extensively since October of the previous year. A thread on the topic has grown to 300 pages, with users sharing prompts for creating adult sexual imagery (“this prompt works for me 7 out of 10 times,” one user wrote) and methods to bypass xAI’s safety guardrails.
A recent forum post noted, “Everything I am getting is getting moderated, probably because Grok is in the news.” Despite this, forum posts from recent months indicate that creating explicit sexual imagery, including full nudity and penetrative sex, has been consistently achievable. While some content features entirely AI-generated characters, other instances include images of real individuals and celebrities. One user commented on the inconsistent moderation of celebrity images, stating, “I found that Grok makes a pretty good Princess Leia and generated a few images of her.”
Users on the official Grok subreddit also expressed frustration over perceived recent moderation changes, which they attributed to public scrutiny. One user commented, “JFC it’s not that hard, just don’t make everything public and fully blasted out on a social media site by default, dummies.” Another user stated, “Cancelling my subscription,” and advised, “Stop giving these people money.”

