CISA Director Uploaded ‘Official Use Only’ Files to Public ChatGPT—Security Breach Sparks Federal Alarm


Imagine feeding sensitive government documents into a public AI chatbot—one owned by a private company with no clearance to handle U.S. federal data. That’s exactly what happened last summer at the Cybersecurity and Infrastructure Security Agency (CISA), according to internal reports.

Acting CISA Director Madhu Gottumukkala, a senior Indian-origin official overseeing national cyber defense, allegedly uploaded multiple “For Official Use Only” (FOUO) contracting documents into the public, free version of ChatGPT. The move—though involving non-classified material—violated long-standing federal guidelines prohibiting the use of unsecured commercial AI tools for any government-related work.

The incident triggered immediate security alerts within CISA’s own systems and prompted a formal internal review by the Department of Homeland Security (DHS). While no classified information was exposed, the breach raised urgent questions about judgment at the highest levels of America’s cyber defense apparatus—and whether federal agencies are truly ready for the AI era. This CISA ChatGPT breach is now a cautionary tale for every government employee using generative AI.


What Happened? The CISA ChatGPT Incident

According to sources cited by The Times of India and corroborated by U.S. federal insiders, the incident occurred in mid-2025 when Gottumukkala—then serving as CISA’s acting director—used his personal device to access the public ChatGPT platform [1]. He reportedly pasted portions of FOUO contracting documents to “summarize” or “reformat” technical language, a common but prohibited shortcut.

CISA’s internal monitoring systems flagged the activity almost immediately. While OpenAI’s public ChatGPT does not guarantee data privacy—user inputs can be used for model training—the real concern was precedent: if the head of CISA bypasses protocols, what message does that send to thousands of employees?

Who Is Madhu Gottumukkala?

A veteran technologist with over two decades in federal IT, Gottumukkala rose through the ranks at DHS and CISA, known for his expertise in cloud infrastructure and procurement. Of Indian origin, he assumed the acting director role during a transitional period and was widely respected for his operational knowledge.

However, this incident has cast a shadow over his leadership. Notably, he was never formally nominated as permanent director, and some speculate this breach may have influenced that decision. CISA has declined to comment on personnel matters, but confirmed an “internal administrative review” took place.

Why FOUO Documents Still Matter

“For Official Use Only” may sound bureaucratic, but it’s a critical sensitivity designation. FOUO materials aren’t classified, yet they often contain:

  • Sensitive procurement details (e.g., vendor pricing, technical specs)
  • Internal agency deliberations
  • Personally identifiable information (PII) of contractors or employees
  • Network architecture or system vulnerabilities (even if not top-secret)

When such data enters a public AI system, it becomes part of a training dataset that could—hypothetically—be reverse-engineered or exposed in future AI outputs. As the National Institute of Standards and Technology (NIST) warns, “Aggregation of seemingly low-sensitivity data can yield high-impact intelligence” [2].

Federal Rules on AI and Data Security

Since 2023, federal agencies have operated under strict AI guidelines:

  1. DHS Directive 047-01: Prohibits use of public generative AI for any official business unless explicitly authorized and secured.
  2. OMB Memorandum M-24-10: Requires agencies to inventory AI use cases and implement guardrails.
  3. CISA’s Own AI Security Guidance: Warns that “inputting government data into unvetted AI tools constitutes a potential data spill.”

Gottumukkala’s actions appear to violate all three. The irony? CISA itself published the guidance he allegedly ignored.

Broader Implications for Government AI Use

This isn’t an isolated case. A 2025 GAO report found that over 60% of federal employees had used public AI tools for work-related tasks—often without understanding the risks [3]. Common justifications include “It’s faster” or “It’s not classified.”

But speed shouldn’t override security—especially at CISA, the agency tasked with defending U.S. critical infrastructure. If its leader cuts corners, it undermines the entire federal AI risk management framework. As one cybersecurity expert put it: “You can’t preach zero trust while practicing zero caution.”

How Agencies Are Responding

In the wake of the incident, DHS has accelerated deployment of air-gapped, government-only AI platforms like “SafeAI” and “GovChat”—secure alternatives built on Microsoft Azure Government or AWS GovCloud.

New measures include:

  • Mandatory AI ethics training for all senior executives
  • Automated DLP (Data Loss Prevention) blocks on public AI sites from federal networks
  • Whistleblower protections for reporting AI misuse
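In simplified form, the DLP blocks described above work by inspecting outbound web traffic at a network proxy and stopping requests that are either destined for an unapproved public AI service or carry sensitivity markers in the payload. The sketch below is illustrative only; the domain list, marker strings, and function are hypothetical examples, not any agency’s actual implementation.

```python
# Illustrative DLP-style policy check at a web proxy (hypothetical example).
# Blocks a request if the destination is an unapproved public AI service,
# or if the outbound payload contains a sensitivity marker such as "FOUO".

BLOCKED_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com"}
SENSITIVITY_MARKERS = ("FOUO", "FOR OFFICIAL USE ONLY", "CUI")

def should_block(host: str, payload: str) -> tuple[bool, str]:
    """Return (block?, reason) for an outbound request from a federal network."""
    if host.lower() in BLOCKED_AI_DOMAINS:
        return True, f"destination {host} is an unapproved public AI service"
    upper = payload.upper()
    for marker in SENSITIVITY_MARKERS:
        if marker in upper:
            return True, f"payload contains sensitivity marker '{marker}'"
    return False, "allowed"
```

Real deployments layer this kind of rule onto TLS-inspecting proxies or endpoint agents, but the policy logic is the same: decide on destination first, then on content.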

CISA has also launched a “Responsible AI Ambassador” program to promote best practices; ironically, it is exactly the kind of initiative that might have prevented this incident.

Conclusion

The CISA ChatGPT breach is a stark reminder that even the most experienced officials can make dangerous judgment errors in the age of AI. While no catastrophic leak occurred, the symbolic damage is significant: the agency meant to protect America’s digital backbone failed to protect its own data. Moving forward, the federal government must balance AI innovation with ironclad discipline—because in cybersecurity, trust isn’t just earned; it’s enforced.

