Two countries have blocked the Grok app after it was widely used to generate non-consensual near-nude deepfakes of women and children. A third country is currently carrying out an investigation.
Three US senators have asked Apple to temporarily remove both X and Grok from the US App Store due to “sickening content generation,” and we are still awaiting Apple’s response …
AI-generated CSAM by Grok
The Grok AI tool is available both as a standalone app and through the X app. It is also available through the Grok tab on the X website.
There has been abundant evidence of Grok generating non-consensual, near-nude deepfakes of real individuals, taking a clothed photo and digitally replacing the clothing with a bikini or other revealing attire. Even more worryingly, some of these deepfakes were of children.
While nude imagery is theoretically blocked by Grok, some users have been using prompt language that works around this.
On Friday, three US senators asked Apple to temporarily remove both apps from the App Store, noting that the non-consensual imagery included child sexual abuse material (CSAM).
Senators Ron Wyden, Ed Markey, and Ben Ray Luján penned an open letter to the CEOs of Apple and Google, asking both companies to pull the X and Grok apps “pending a full investigation” of “mass generation of nonconsensual sexualized images of women and children.”
The letter notes that X owner Elon Musk has failed to act, and contrasts the two companies’ inaction over Grok with their rapid removal of the ICEBlock app at the request of the White House. Musk’s only response has been to limit X image generation to paid subscribers, which seems the most cynical possible action, and an ineffective one, since the same feature remains accessible to anyone through the Grok tab on both the X website and app.
Two countries block the Grok app
The Associated Press reports that Malaysia and Indonesia have blocked the app within their borders.
Regulators in the two Southeast Asian nations said existing controls were not preventing the creation and spread of fake pornographic content, particularly involving women and minors. Indonesia’s government temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.
Britain’s media regulator Ofcom has also opened a formal investigation.
There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material […] Ofcom has decided to open a formal investigation to establish whether X has failed to comply with its legal obligations under the Online Safety Act.
Apple and Google have not yet responded
As of the time of writing, there has been no public response from either Apple or Google, and the apps remain available on their respective US app stores. We’ve reached out to Apple for comment and will update with any response.
Photo: Melanie Wasser/Unsplash