EU also investigating as Grok generated 23,000 CSAM images in 11 days

The EU has opened its own investigation into the Grok chatbot generating child sexual abuse material. It’s estimated that Grok generated 23,000 CSAM images in just 11 days.

Despite multiple calls for Apple and Google to temporarily remove both X and Grok from the App Store, neither company has yet done so …

Grok generated 23,000 CSAM images

Like most other AI chatbots, xAI’s Grok can generate images from text prompts, whether directly in the app, on the web, or through X. Unlike other services, however, Grok has extremely loose guardrails, which have seen it generate non-consensual semi-nude images of real individuals, including children.

Engadget reports that one estimate has suggested Grok generated around 23,000 CSAM images in just 11 days.

The Center for Countering Digital Hate (CCDH) published its findings. The British nonprofit based its estimate on a random sample of 20,000 Grok images from December 29 to January 9. The CCDH then extrapolated a broader estimate from the 4.6 million images Grok generated during that period […]

Over an 11-day period, Grok generated an estimated 3 million sexualized images — including an estimated 23,000 of children.

Put another way, Grok generated an estimated 190 sexualized images per minute over that 11-day period. Among those, it produced a sexualized image of a child once every 41 seconds.
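
Those per-minute and per-second rates follow directly from the CCDH’s totals. As a rough sanity check (a minimal sketch in Python, assuming the window is exactly 11 days and using the article’s estimated totals):

```python
# Back-of-the-envelope check of the CCDH rate figures.
# Assumes the window is exactly 11 days; totals are the article's estimates.
minutes = 11 * 24 * 60              # 15,840 minutes in 11 days

sexualized_total = 3_000_000        # estimated sexualized images overall
csam_total = 23_000                 # estimated sexualized images of children

print(sexualized_total / minutes)   # ~189.4, i.e. roughly 190 per minute
print(minutes * 60 / csam_total)    # ~41.3, i.e. one image of a child every ~41 seconds
```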

EU investigation opened

Earlier this month, three US senators asked Apple CEO Tim Cook to temporarily remove both X and Grok from the App Store due to “sickening content generation.” The company has not yet done so.

Two countries have blocked the app, with investigations already open in both California and the UK. The Financial Times reports that the EU has now opened an investigation of its own.

The probe, announced on Monday under the EU’s Digital Services Act, will assess if xAI tried to mitigate the risks of deploying Grok’s tools on X and the proliferation of content that “may amount to child sexual abuse material”.

“Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” the EU’s tech chief Henna Virkkunen said.

If the company is found to have breached the DSA, it can be fined up to 6% of its annual global revenue.

Photo by Logan Voss on Unsplash
