X Admits ‘Mistake’ in Obscene Images Row: Over 600 Accounts Removed Amid Public Outcry

In a rare public acknowledgment of failure, social media platform X (formerly Twitter) has admitted it made a ‘mistake’ in handling a recent surge of explicit and obscene images circulating widely across its network. The admission comes amid mounting pressure from Indian authorities and users after thousands of disturbing posts—many allegedly depicting non-consensual or manipulated imagery—appeared on the platform with alarming ease.

The fallout has been swift: X confirmed it has taken down more than 600 accounts and deleted thousands of violating posts as part of an emergency content sweep. But for many users, especially women and privacy advocates, the damage is already done. This incident has reignited fierce debates about platform accountability, AI-driven moderation failures, and whether current laws can keep pace with digital abuse.

What Happened in the Obscene Images Row?

Over the past week, Indian users began reporting a flood of graphic, sexually explicit images appearing in search results, replies, and even trending topics on X. Many of these images were not just pornographic—they were doctored photos of real individuals, including celebrities, students, and professionals, created using AI deepfake technology or crudely edited composites.

Hashtags and coded language were used to evade basic keyword filters, allowing the content to spread rapidly before moderators could intervene. Victims reported receiving harassment, threats, and unsolicited messages referencing the fake images—a form of digital violence that can have severe psychological and professional consequences.
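
To see why basic keyword filters are so easy to evade, consider this minimal Python sketch with a hypothetical two-term blocklist (nothing here reflects X's actual moderation pipeline): a naive substring match misses leetspeak and zero-width-character tricks, while a light normalization pass catches them.

```python
import re
import unicodedata

# Hypothetical blocklist for illustration; real systems use far larger
# lists plus machine-learning classifiers.
BLOCKED_TERMS = {"leaked", "explicit"}

def naive_filter(text: str) -> bool:
    """Flag a post only if a blocked term appears verbatim."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# Undo common leetspeak substitutions used to dodge exact matching.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})

def normalized_filter(text: str) -> bool:
    """Normalize Unicode, strip zero-width characters, undo leetspeak,
    and drop separators before matching, catching simple obfuscation."""
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"[\u200b\u200c\u200d]", "", text)  # zero-width characters
    text = text.lower().translate(LEET_MAP)
    text = re.sub(r"[^a-z]", "", text)  # remove dots, spaces, underscores
    return any(term in text for term in BLOCKED_TERMS)

post = "l3ak.3d pics here"  # obfuscated with leetspeak and a separator
print(naive_filter(post))       # False: exact matching misses it
print(normalized_filter(post))  # True: normalization recovers "leaked"
```

Determined abusers iterate on new spellings faster than static lists can be updated, which is why this kind of normalization is only a first line of defense ahead of classifiers and human review.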

X Admits ‘Mistake’ and Takes Action

In an official statement issued on January 10, 2026, X acknowledged the breach in its safety systems. “We made a mistake in our enforcement protocols,” the company said, without specifying which internal policy failed. The platform added that it had since removed “thousands of posts” and suspended “over 600 accounts” linked to the distribution of obscene material.

X also claimed to have enhanced its automated detection tools and expanded its human review teams in the Asia-Pacific region. However, critics argue this reactive approach is too little, too late. “Platforms can’t wait for public outrage to fix what should never have happened in the first place,” said digital rights activist Anja Kovacs of the Internet Democracy Project.

Why This Is a Major Issue in India

India is one of X’s largest markets, with over 24 million active accounts. Yet it’s also a country where online gender-based violence is rampant. According to a 2025 report by the National Commission for Women, nearly 68% of women surveyed had experienced some form of image-based abuse online—including non-consensual sharing of intimate photos and AI-generated fakes.

The obscene images row has hit a nerve because it exposes how easily malicious actors can exploit lax moderation to target vulnerable individuals. In several reported cases, the fake images were shared with captions naming schools, workplaces, or hometowns—turning digital harassment into real-world danger.

How X’s Content Moderation Has Changed Under Musk

Since Elon Musk acquired Twitter in 2022 and rebranded it as X, the company has faced consistent criticism for weakening content safeguards. Key changes include:

  • Mass layoffs of trust and safety teams—reducing staff by over 80% in some departments.
  • Reinstating banned accounts, including those previously suspended for hate speech or harassment.
  • Prioritizing ‘free speech’ over user safety, often at the cost of marginalized communities.
  • Reducing transparency around content moderation reports and policy enforcement.

Experts warn that this philosophy creates fertile ground for exactly the kind of crisis now unfolding in India. As Stanford Internet Observatory researcher Renee DiResta notes, “When you dismantle guardrails, you don’t just invite trolls—you enable predators.”

Legal Stakes Under India’s IT Rules

Under India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms like X are required to remove unlawful content within 36 hours of notification. Failure to do so can result in loss of legal immunity and potential criminal liability for executives.

The Ministry of Electronics and IT (MeitY) is reportedly reviewing X’s response to this incident. If found negligent, the company could face fines or even restrictions under Section 69A of the IT Act. Beyond legality, there’s an ethical question: should a global platform be allowed to operate with minimal local oversight in a country of 1.4 billion people?

What Users Can Do to Stay Safe

While systemic change is needed, individual users can take steps to protect themselves:

  1. Lock your profile: Set your account to private to control who sees your content.
  2. Report immediately: Use X’s reporting tools for obscene or harassing content.
  3. Document evidence: Take screenshots (with URLs) before content is deleted; a simple logging sketch follows this list.
  4. Contact authorities: File a cybercrime complaint at cybercrime.gov.in.
  5. Avoid engagement: Don’t reply to or share abusive posts—it fuels algorithms.
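
For step 3, one low-effort way to keep that evidence verifiable is to log each capture with a timestamp and a cryptographic hash of the screenshot file. The Python sketch below is purely illustrative; the function, file names, and log format are hypothetical assumptions, not part of any official complaint process.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(post_url: str, screenshot_path: str,
                 log_file: str = "evidence_log.jsonl") -> dict:
    """Record the post URL, a UTC timestamp, and a SHA-256 hash of the
    saved screenshot, so the capture can later be shown to be unaltered.
    All names here are illustrative, not an official evidence format."""
    digest = hashlib.sha256(Path(screenshot_path).read_bytes()).hexdigest()
    entry = {
        "url": post_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "screenshot": screenshot_path,
        "sha256": digest,
    }
    # Append one JSON record per line so the log is easy to audit later.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example usage, after manually saving a screenshot of the abusive post:
# log_evidence("https://x.com/example/status/123", "post_capture.png")
```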

For victims of deepfakes or non-consensual imagery, dedicated digital-safety organizations offer free legal and psychological support.

Conclusion: A Wake-Up Call for Social Media Giants

The obscene images row is more than a technical glitch—it’s a symptom of a broken content moderation ecosystem. X’s admission of a ‘mistake’ is a start, but real accountability means investing in proactive safety measures, hiring diverse moderation teams, and respecting local laws. Until then, users—especially in countries like India—will remain at risk. As one victim poignantly asked: “How many lives must be ruined before platforms decide we’re worth protecting?”
