- In a letter shared publicly, activists urge the swift removal of the Grok app and the X platform from mobile marketplaces, saying both tools facilitate widespread misuse.
- The coalition said the technologies are being used to produce and circulate non-consensual intimate images and child sexual abuse material.
- They stated that allowing these AI-enabled services to remain available violates app store safety guidelines and normalizes harmful behavior online.
- The letter includes backing from organizations such as UltraViolet, the National Organization for Women, MoveOn, and ParentsTogether Action.
A broad alliance of digital rights, child protection and women’s advocacy groups has called on Apple Inc. (AAPL) and Alphabet Inc.’s (GOOGL, GOOG) Google to pull the Grok AI chatbot and the X social media app from their respective app stores.

The coalition said the technologies are being used to produce and circulate non-consensual intimate images and child sexual abuse material, raising urgent safety concerns.
Call To Action From Advocacy Groups
In a letter shared publicly by the organizations, the activists urge the swift removal of the Grok app and the X platform from mobile marketplaces, saying both tools facilitate widespread misuse.
They stated that allowing these AI-enabled services to remain available violates app store safety guidelines and normalizes harmful behavior online. The letter includes backing from organizations such as UltraViolet, the National Organization for Women, MoveOn, and ParentsTogether Action.
Apple stock and Alphabet stock inched 0.5% and 0.4% lower, respectively, in Wednesday’s premarket.
Alleged Harm And Evidence
Scrutiny grew after Grok began producing sexually explicit, degrading, or violent images involving women and children, many of which circulated widely on X. Research found that the volume of sexualized and nude images shared publicly on the platform far exceeded that of other leading sites hosting similar content.
The letter notes that an official Grok account on X published around 6,700 sexually suggestive or nude images per hour over a 24-hour period. The groups said that Grok itself admitted to creating one instance of child sexual abuse material on Dec. 28, 2025, stating that an AI image depicting girls aged 12-16 in sexualized outfits violated ethical and legal norms.
Several governments have already acted. Malaysia and Indonesia moved to ban Grok over explicit content concerns, while regulators in parts of Europe and the U.K. have launched investigations or requested explanations.
In response to mounting public pressure, Musk announced that image-generation tools on X would be restricted to paid subscribers, though the features remain accessible in the standalone Grok app.
For updates and corrections, email newsroom[at]stocktwits[dot]com.
