Elon Musk has addressed growing concerns over the generation of sexual deepfake images involving minors by Grok, the AI chatbot on his social media platform X.
In his first official statement on the issue, Musk said he is not aware of any naked underage images generated by Grok, putting the number of such cases at "literally zero."
He explained that Grok does not generate images spontaneously but only responds to user requests, and it is programmed to refuse any illegal content in compliance with the laws of different countries and states.
"I not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle…" https://t.co/YBoqo7ZmEj — Elon Musk (@elonmusk), January 14, 2026
Musk also acknowledged that adversarial hacking could cause unexpected behaviour, but said any such bugs are corrected immediately.
The concerns escalated after Ashley St. Clair, mother of one of Musk’s children, revealed in a CBS Mornings interview that Grok was used without her consent to create sexual deepfake images of her, including manipulated photos from when she was a minor.
Acknowledging the controversy, South Korea’s media watchdog, the Korea Media and Communications Commission (KMCC), has formally asked platform X to implement safeguards protecting minor users from sexual content generated by Grok. The KMCC cited rising worries over AI-generated deepfake sexual content as a key reason for the request.
Musk had previously emphasized that any accounts involved in generating or sharing sexually explicit content would face suspension, underscoring the platform’s commitment to protecting vulnerable users.