X’s Safe Harbor Could Be Revoked Over Grok’s CSAM Content


The government is prepared to revoke the safe harbor status of social media platform X “if it doesn’t comply with the latest takedown directions on artificial intelligence (AI)-generated obscene images,” unnamed officials told the Economic Times.

The Indian government’s latest response comes as X’s (formerly Twitter) Safety handle assured users that it would take action against “illegal content, including Child Sexual Abuse Material (CSAM),” without specifically referring to the non-consensual sexually explicit imagery that X’s AI bot Grok has been generating since the image-editing feature launched around Christmas 2025.

These actions can include:

  • Removing the “illegal” content.
  • Permanently suspending accounts.
  • “Working” with local governments and law enforcement agencies.

“Anyone using Grok to make illegal content will suffer the same consequences as if they had uploaded illegal content,” warned X’s CEO, Elon Musk, just a day before X’s Safety handle issued a similar caution while referring to X’s policies that define illegal content.

Apart from the Indian government, the Malaysian government has also taken note of Grok’s actions, as have public representatives of the French government, who wrote to the Paris public prosecutor on the matter.

So, what is illegal content on X?

According to X’s Rules, which favour public participation in “global public conversation”, restricted safety-related aspects are:

  • CSAM: X does not tolerate “any forms of child sexual exploitation” and removes “certain media depicting physical child abuse” as per its rules.
  • Adult Content: Users are allowed to share “consensually produced and distributed adult nudity or sexual behaviour” if the content is appropriately labelled as adult content.
  • Violent Content: Content that is “excessively gory or depicting sexual violence”, or that is “explicitly threatening, inciting, glorifying, or expressing desire for violence”, is not allowed.

It is important to note that Grok, by its own admission, violated its own rules by generating CSAM. On January 1, the Grok bot admitted that it had “generated and shared an AI image of two young girls (estimated ages 12-16).” “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues,” read Grok’s response when other users persistently asked about the CSAM material.

Grok’s Response to the CSAM Content it Generated.

In addition, X’s policies ask users not to share abusive content, engage in targeted harassment, or incite others to do so. However, the specific section on “Illegal and Regulated Behaviours”, which states that Grok shouldn’t be used for “any unlawful purpose or in furtherance of illegal activities” (the section Elon Musk and X’s team were referring to while responding to the mass undressing campaign unleashed by the Grok bot), doesn’t mention sexual imagery, let alone non-consensual sexual imagery.

It only refers to activities related to the sale of drugs and weapons, human trafficking, poaching of endangered species, and sexual services (i.e., sex work-related activities).

Under X’s rules framework, adult content that does not involve underage children, like morphed bikini-wear image generation, can fall under the platform’s adult content policy, which explicitly permits consensually produced and distributed adult imagery. The bigger question here is: how do you verify the consent of the woman depicted in the morphed picture?

Why Can’t X Claim Safe Harbor Exemptions?

Addressing the question of safe harbor exemptions for the platform X and the Grok AI bot, Nikhil Pahwa, Medianama’s Founder and Editor, wrote for The Quint, arguing that X, and especially Grok AI, can’t claim safe harbor exemptions under Section 79 of the IT Act, which exempts a platform from liability for content posted by third-party users.

Pahwa reasoned: “X is actively enabling the publishing of this content via its own AI service and not making ‘reasonable efforts’ to prevent it; quite the opposite.” He was referring to the “reasonable efforts” a platform must make under Section 3(1)(b) of India’s IT Rules to prevent users from publishing content that is “obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy, insulting or harassing on the basis of gender, or racially or ethnically objectionable.”

Further explaining why X can’t claim safe harbor exemptions, Pahwa wrote, “Safe harbor protections are provided to platforms that allow others to post content: they act as ‘intermediaries’ and mere conduits. The company that runs X is not an intermediary here: its AI service is actively publishing this content, so safe harbor protections cannot apply to it.” Therefore, he argued, the company behind Grok, namely X, is potentially liable for generating non-consensual intimate imagery (NCII), not just the users who prompted the AI system, even though X’s terms of service try to make the user responsible for the content they ask the AI system to generate.

“Safe harbor protections were never meant to apply to publishing, and consent should never be optional. Until India’s regulatory framework, especially the Digital Personal Data Protection Act, reflects this reality, these problems will continue,” he concluded in his editorial.

What about non-consensual sexually explicit imagery on X?

After receiving a notice from the Secretary of the Indian government’s Ministry of Electronics and Information Technology (MeitY), giving it 72 hours to submit an “Action Taken Report” on the rampant misuse of the Grok bot’s image-editing capabilities to generate sexually explicit images of women, mostly without their consent, the X team appears to have begun suspending some of the accounts that prompted Grok to generate bikini-wear images of various women.

For instance, an X handle, Komal Yadav (@komalyadav03), which repeatedly prompted Grok to generate bikini-wear images of multiple women, has since been suspended.

However, even after the assurances from Elon Musk and X’s Safety handle, the Grok AI bot is still generating bikini-wear and other sexually explicit images of various women.

Why Are Non-Consensual Sexual Images a Concern?

Non-consensual image morphing is a serious issue. Even in cases where consent might be assumed, such as when a person asks Grok to modify “my image”, non-consensual image generation continues in the same post thread, and other users can re-upload the picture to Grok to morph it again.

For example, a “verified” handle, Soumya Avasthi (@SoumyaAvasthi), which no longer exists, prompted Grok, saying, “Hey @grok put me in red saree!!” Grok complied and generated an image of her in a red saree. However, Grok also complied when another user asked it to generate her image in Grok’s app. Making things worse, Grok will comply with random users’ requests in the post’s thread and proceed to generate Non-Consensual Intimate Images (NCII).

Addressing a non-consensually morphed image, a verified user, Nandani S (@ChaiCodeChaos), reported an account handle (@HoeEnchanted) to X’s handle Cyber Cell India; the current status of the alleged account reads, “This account doesn’t exist.” This could also mean that the account was deactivated by the user (which can be reversed within 30 days) rather than suspended by X’s team.

Similarly, another verified user, Meghna (@CPUatOnePercent), claims that she got her morphed images taken down from X after two days. The concerned account (@ltlswhatltls), which allegedly generated morphed images of Meghna, also no longer exists.

Fearing potential exploitation, several users began telling Grok and X that they do not authorise it “to take, modify, or edit ANY photo or video” of them, “whether those published in the past or the upcoming ones” that they post online.

What about Deepfakes of Already Deepfaked Women?

However, X’s compliance with the Indian government’s orders has been uneven. For example, a handle named Kavi (@Kavithasri98) was still active at the time of writing this report, despite generating bikini-wear images on January 2, 2026. The account, created on December 29, 2025, days after Grok’s image-editing rollout, contains only images of one woman in different Indian attire, suggesting the images may be AI-generated.

Medianama could not independently verify whether the person shown in the images is real or a deepfake, a limitation encountered repeatedly during the reporting of this story.

This is important to note because, if the person is real, she may initiate action against X, the owner of the X handle, or both. If not, and the person is deepfake-generated, would it still be a violation of the privacy and dignity of an (unreal) woman? How would X deal with sexually explicit deepfakes of already-deepfaked (probably unreal) women?
