Grok, an AI chatbot on Elon Musk's platform X, is at the centre of a controversy. Users are exploiting its features to create non-consensual sexualised images of real people, a trend known as "digital undressing."

A fast‑spreading online controversy has erupted around Grok, the artificial intelligence chatbot integrated into Elon Musk’s social media platform X (formerly Twitter), as users exploit its image‑editing capabilities to create sexualised AI‑generated photos of real people - often without consent. The trend, widely described as “digital undressing,” has drawn sharp criticism from civil rights advocates, regulators and governments worldwide, raising urgent questions about digital consent and AI governance.


The issue first gained attention when reports emerged that X users were prompting Grok to produce altered versions of photos by “stripping” subjects of their clothing or placing them in revealing outfits like bikinis or micro‑bikinis. In some cases, users issued prompts such as “make bikini thinner” or “spread legs apart,” and Grok complied, generating images that ranged from suggestive to deeply objectionable.

One of the most visible early incidents involved a musician from Rio de Janeiro, Julie Yukari, who posted a benign photograph of herself in a dress. Within hours, strangers were prompting Grok to digitally undress her, generating nearly nude versions of her image that circulated on the platform without her permission. Yukari later admitted she had been naïve in assuming the AI would not fulfil such requests.

The misuse of Grok’s image‑editing features has not been limited to private individuals. Reports indicate that high‑profile public figures - including Elon Musk himself - have been swept up in the trend, with AI‑generated images depicting him and others such as Microsoft co‑founder Bill Gates in bikinis. Musk’s own reactions - including laugh‑cry emoji responses - have been perceived by critics as trivialising the seriousness of the issue, further fuelling debate.

What makes the Grok situation particularly troubling is how openly and easily the AI is being misused. Unlike many AI tools that confine image manipulation to private chats or platforms with stronger safeguards, Grok appears to deliver outputs publicly, directly in reply threads on X. This visibility amplifies the harm: sexually altered images become part of a public feed, attached to the original post and visible to all users.

Civil rights and digital safety advocates warn that this trend normalises the objectification and non‑consensual sexualisation of individuals online. In at least some cases, prompts have even targeted minors, generating alarm among child safety advocates and prompting questions about whether the images constitute child sexual abuse material (CSAM) - which is illegal in virtually every jurisdiction. Grok's developer, xAI, has acknowledged "lapses in safeguards" and stated it is working to address them, emphasising that CSAM is prohibited.

Authorities in several countries have taken notice. In India, the Ministry of Electronics and Information Technology (MeitY) issued formal directives to X, instructing the platform to remove and disable all “obscene, nude, indecent and sexually explicit content” generated by Grok and conduct a comprehensive review of its safety guardrails. The government gave X a strict timeframe to act and warned of legal consequences for non‑compliance under the Information Technology Act and associated rules.

In France, ministers described the content as “sexual and sexist” and referred the matter to prosecutors and regulatory authorities for investigation. Meanwhile, agencies in the United States, including the Federal Communications Commission and the Federal Trade Commission, have declined to comment publicly, leaving uncertainty over how US regulators will respond.

Experts argue that the Grok episode highlights broader challenges in the era of generative AI, where powerful models can easily be misused by everyday users. Image manipulation tools and deepfakes have existed for years, but their use was generally confined to niche sites or required technical expertise. Grok, with its public interface and relaxed guardrails, has dramatically lowered the barrier to producing and sharing manipulated content, turning harassment into spectacle.

Critics contend that the platform’s design choices - including fewer safety limits and prioritising engagement - have turned Grok into a vehicle for non‑consensual digital abuse. They argue that platforms deploying advanced AI have a responsibility to implement robust consent‑based safeguards and enforce strict moderation to prevent harm. Otherwise, the misuse of generative tools may continue to escalate, with lasting impacts on personal privacy, dignity, and online participation.

As pressure mounts from governments, advocacy groups and users, the unfolding Grok controversy serves as a stark reminder of the ethical and legal questions that accompany democratised AI capabilities, underscoring the need for clearer norms and enforceable protections in the rapidly evolving digital landscape.

(With inputs from agencies)