
If America’s Cyber Chief Can’t Protect Data from ChatGPT, What Chance Do You Have?

AI · Attorney-Client Privilege · Breach Disclosure · Cybersecurity · Data Privacy · Ransomware


The person running America’s civilian cybersecurity agency could not follow his own data handling rules when using an AI chatbot.

Madhu Gottumukkala holds a doctorate in information systems. He served as South Dakota’s chief information officer. In May 2025, the Department of Homeland Security appointed him acting director of the Cybersecurity and Infrastructure Security Agency, a nearly $3 billion operation with over 2,300 cybersecurity professionals charged with defending federal networks against nation-state hackers, ransomware gangs, and critical infrastructure attacks.

Within weeks, he requested special permission to use ChatGPT. DHS blocks the tool for all employees because uploading data to OpenAI’s public platform sends information outside federal networks. CISA’s Office of the Chief Information Officer granted the exception. Between mid-July and early August 2025, Gottumukkala uploaded at least four contracting documents marked “For Official Use Only” into ChatGPT’s public instance. Automated data loss prevention sensors flagged multiple uploads in the first week of August, triggering the exact cybersecurity alerts his own agency designed to catch unauthorized disclosure of government material.

The incident, first reported by Politico on January 28, 2026, landed during a cascade of controversy. Congress had grilled Gottumukkala one week earlier over the loss of nearly 1,000 employees (a third of CISA’s workforce), a failed counterintelligence polygraph, and an attempted ouster of the agency’s chief information officer. He declined to confirm or deny the polygraph failure, telling the House Homeland Security Committee he did not “accept the premise of that characterization.” The ChatGPT disclosure added a data-handling failure to a record already under intense scrutiny.

The Direct Answer

If the nation’s top civilian cybersecurity official, armed with a doctorate in information systems, a state CIO background, an entire agency of security professionals, automated detection systems, and approved AI alternatives, cannot maintain data discipline when using ChatGPT, the legal profession’s reliance on individual attorney judgment to protect client-privileged information from AI exposure is structurally insufficient. Governance requires architecture, not willpower.

What “For Official Use Only” Actually Means

The documents Gottumukkala uploaded carried the FOUO designation, a category governed by DHS Management Directive 11042.1. DHS defines FOUO as “unclassified information of a sensitive nature, not otherwise categorized by statute or regulation, the unauthorized disclosure of which could adversely impact a person’s privacy or welfare, the conduct of Federal programs, or other programs or operations essential to the national interest.” The designation requires controlled handling: storage in locked containers after hours, transmission only through means that “preclude unauthorized public disclosure,” and disposal through destruction methods that prevent reconstruction.

FOUO falls below classified. That distinction matters legally but obscures the practical risk. Contracting documents reveal procurement strategies, vendor relationships, pricing structures, and operational priorities. A foreign intelligence service reading CISA’s vendor list now knows which defensive technologies protect which federal networks, which contracts are expiring, and where budget constraints have forced the agency to accept the lowest bidder. Even unclassified, that information maps capability gaps that adversaries exploit systematically.

Administrative penalties apply for FOUO mishandling under DoD Directive 5400.7 and DHS MD 11042.1. If any material contained Privacy Act-protected information under 5 U.S.C. § 552a, civil and criminal sanctions become available.

Gottumukkala did not upload this material to DHSChat, the department’s approved AI tool configured to prevent data from leaving federal networks. He uploaded it to OpenAI’s public ChatGPT, a platform serving over 700 million users globally, where the operator’s terms permit collection of “input, file uploads, or feedback” and potential use in model training. One DHS official told Politico that Gottumukkala appeared to have “forced CISA’s hand into making them give him ChatGPT, and then he abused it.”

A Pattern, Not an Incident

The ChatGPT disclosure did not occur in isolation. A timeline of Gottumukkala’s first eight months at CISA reveals compounding decisions that raise questions about fitness for leadership of a national security agency.

In June 2025, Gottumukkala sought access to a controlled-access intelligence program shared with CISA by another U.S. intelligence agency. Senior career staff advised him the access fell outside his operational needs. The agency’s previous deputy director had declined the same program. A senior official rejected the initial request, finding no “urgent need-to-know.” After that official landed on administrative leave for unrelated reasons, Gottumukkala submitted a second request under his own signature. The agency approved it.

The originating intelligence agency required a counterintelligence polygraph before granting access. Gottumukkala took the examination in late July 2025 and failed. Within days, six career staff members who helped schedule the test received letters from DHS’s acting chief security officer alleging they had provided “false information” about the polygraph requirement. By August 4, all six sat on paid administrative leave. DHS spokesperson Tricia McLaughlin called the polygraph “unsanctioned,” a characterization that current and former officials described to Politico as “comical.” One official noted that no action officer schedules a principal’s polygraph without the principal’s knowledge: “He ultimately chose to sit for this polygraph. There is only one person to blame for that.”

On January 21, 2026, the House Homeland Security Committee brought Gottumukkala in for his first public testimony. Democrats entered a staffing chart into the record: 998 departures and 65 involuntary reassignments since the inauguration. Gottumukkala declined to confirm the polygraph failure, refused to discuss the attempted reassignment of CISA’s chief information officer, and would not say whether the agency had analyzed whether its reduced workforce could execute its mission. Ranking Member Bennie Thompson told him: “You’re in a very sensitive area, and CISA is important to us, and I think we need to have people who are in that space that pass the standard test.”

Seven days later, Politico broke the ChatGPT story.

Why This Matters for Every Attorney Using AI

The legal profession’s approach to AI data security rests on a foundational assumption: that individual practitioners, armed with ethics rules and good intentions, will exercise sufficient judgment to keep client information off public AI platforms. The CISA incident demolishes that assumption by eliminating every variable except human behavior.

Strip away the political context and the CISA story reduces to a single, replicable failure: a credentialed expert with every institutional safeguard available chose the convenient tool over the secure one. No one hacked his account. No adversary exploited a vulnerability. He opened a browser, selected ChatGPT over the approved alternative sitting on his own network, and uploaded restricted documents. The detection systems caught the violation after the fact. The data had already left federal networks. Post-hoc detection does not retrieve information from OpenAI’s servers.

Samsung’s engineers made the same choice at the same speed. In March 2023, semiconductor staff uploaded proprietary source code and internal meeting notes to ChatGPT within 20 days of receiving access. Three separate incidents from senior technical professionals prompted Samsung to ban ChatGPT entirely. Amazon, Walmart, JPMorgan Chase, and Verizon imposed similar restrictions. Cyberhaven’s 2025 AI Adoption and Risk Report, analyzing usage patterns across 7 million workers, found that 8.6% of employees have pasted company data into ChatGPT. Worse, 34.8% of all corporate data flowing into AI tools now qualifies as sensitive, up from 10.7% just two years earlier. Scale those percentages to a 100-person law firm and the math produces multiple weekly exposures of confidential client material.
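For concreteness, here is that back-of-envelope math as a short Python sketch. The percentages come from the Cyberhaven report; the once-per-week paste frequency is an illustrative assumption of mine, not a figure from the report.

# Back-of-envelope scaling of Cyberhaven's 2025 figures to a 100-person firm.
# Assumes each pasting employee pastes roughly once per week (illustrative only).
headcount = 100
share_pasting = 0.086     # 8.6% of employees have pasted company data into ChatGPT
share_sensitive = 0.348   # 34.8% of AI-bound corporate data qualifies as sensitive

pasting_staff = headcount * share_pasting             # ~9 people
weekly_sensitive = pasting_staff * share_sensitive    # ~3 sensitive exposures per week
print(f"~{pasting_staff:.0f} staff pasting data, "
      f"~{weekly_sensitive:.0f} sensitive exposures per week")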

Samsung’s aftermath carries its own lesson. After two years of internal tool development, Samsung reinstated ChatGPT access in May 2025 with new security protocols, character limits on entries, enhanced monitoring, and a blanket exclusion for teams handling sensitive product development. The company’s conclusion matched the CISA pattern: you cannot ban the technology forever, but you cannot trust individual judgment to protect sensitive information. You build architecture.

The behavioral economics explain the failure mode. Public AI tools create a convenience gradient that overwhelms institutional controls. The approved tool requires extra steps. The public tool sits in a browser tab. The approved tool may have limited functionality. The public tool accepts any input. The approved tool enforces guardrails. The public tool says yes to everything. Gottumukkala had DHSChat available. He sought a special exception to use ChatGPT instead. Attorneys with enterprise AI platforms face identical temptation every time they open a browser.

The Ethics Framework: Rules That Already Apply

Attorneys do not need new rules to understand the obligation. The existing framework addresses this scenario with precision, and courts across the country have begun enforcing the standards with real consequences.

Start with confidentiality. Model Rule 1.6, Comment 18 requires attorneys to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Uploading client information to a public AI platform, where the operator’s terms permit data retention and potential use in model training, fails this standard on its face. ABA Formal Opinion 477R reinforces the point: attorneys must take “reasonable efforts” to secure client communications, and transmitting privileged information through a platform that may retain and redistribute content through model training is difficult to characterize as reasonable under any construction of the standard.

Now add competence. Forty-two states have adopted Model Rule 1.1, Comment 8, requiring attorneys to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” The CISA incident demonstrates that competence means understanding how data flows through AI systems, where inputs travel once submitted, and what commercial terms govern retention. Knowing how to write a prompt is not competence. Knowing what happens to the prompt after you hit enter is.

The privilege implications cut deepest. Uploading privileged communications to a public AI platform may constitute voluntary disclosure to a third party, potentially waiving attorney-client privilege entirely. Sam Altman acknowledged in July 2025 that OpenAI has not “figured out” how to handle legal privilege and confidentiality. That admission came from the CEO of the company whose product 79% of attorneys report using. An attorney who reads that statement and continues uploading client data to the same platform has a difficult argument that the resulting disclosure qualifies as inadvertent.

The Counterargument: Controls Worked as Designed

A fair reading of the CISA incident acknowledges what went right. The automated DLP sensors detected the uploads. The system generated alerts. CISA’s chief counsel met with Gottumukkala to reinforce proper handling procedures. The agency confirmed that access to ChatGPT remains blocked by default unless an exception is granted. Andrew Gamino-Cheong of Trustible, an AI governance firm, noted that “catching that, and having the organizational processes to address it, is a sign of very high AI governance maturity.”

The FOUO designation falls below classified, and no evidence suggests the disclosure caused measurable operational harm. DHS spokesperson Marci McCarthy characterized the usage as “short-term and limited” and confirmed it occurred under an authorized temporary exception. These facts reduce the severity of this specific incident.

The counterargument has force but misses the critical point. Detection occurred after the data left federal networks. Alerts triggered after the information entered OpenAI’s systems. The meeting with the chief counsel happened after at least four documents had already traveled to a commercial platform serving hundreds of millions of users. In cybersecurity, detection without prevention is a postmortem, not a defense. A law firm that discovers a privilege violation after client data enters ChatGPT’s training pipeline has identified the problem. The firm has not solved it.

And the controls will not scale. They caught a leader with a doctorate, a CIO background, and security staff surrounding him. They will not catch a solo practitioner working at midnight from a laptop without enterprise monitoring tools.

Four Governance Lessons for Law Firms

Exception pathways are where controls fail. CISA blocked ChatGPT for all employees. Gottumukkala received a special exception. The breach occurred through the exception. Law firms that create blanket AI policies but grant partner-level carve-outs replicate this exact vulnerability. If the policy matters, it applies to everyone. If it doesn’t apply to everyone, it doesn’t matter.

Leadership bypasses destroy compliance culture. As one cybersecurity analyst told CSO Online: “Leaders set behavioral norms. Deviations undermine compliance culture and weaken credibility when advising other agencies.” A managing partner who uses public ChatGPT for client work while the firm’s policy prohibits it sends a signal that radiates through every associate, paralegal, and legal assistant in the organization.

Approved alternatives must exist before bans take effect. DHS had DHSChat. The approved tool sat on the network. Gottumukkala chose ChatGPT anyway, but the existence of DHSChat meant the agency could enforce its policy without eliminating AI access entirely. Law firms that ban public AI tools without providing vetted alternatives drive the behavior underground. Cyberhaven’s data confirms it: employees circumvent bans when the productivity benefit exceeds the perceived risk.

Detection without prevention is insufficient. CISA’s sensors caught the uploads after the data left. A law firm that monitors AI tool usage but cannot technically prevent uploads to unauthorized platforms has installed a smoke detector without a sprinkler system. The technology exists to prevent disclosure: endpoint data loss prevention tools like Microsoft Purview or Forcepoint can block uploads of client-identifiable content to unauthorized domains at the network level. DNS-level filtering can blacklist consumer AI platforms on firm devices entirely. Architecture must prevent the disclosure, not merely document it.
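To make the architectural point concrete, here is a minimal Python sketch of a pre-transmission gate. The matter-number and privilege-marker patterns are hypothetical stand-ins for the far richer classifiers a commercial DLP engine applies; the ordering is what matters, because the check runs before the data leaves the device rather than after.

import re

# Hypothetical patterns standing in for a real DLP classifier
# (Microsoft Purview, Forcepoint, and similar tools use far richer rules).
CLIENT_MATTER = re.compile(r"\b\d{5}-\d{4}\b")                       # e.g., firm matter numbers
PRIVILEGE_MARK = re.compile(r"privileged|attorney[- ]client", re.I)  # privilege markers

def allow_upload(text: str) -> bool:
    """Gate that runs BEFORE transmission: block if any pattern matches."""
    return not (CLIENT_MATTER.search(text) or PRIVILEGE_MARK.search(text))

draft = "Matter 10482-0017 strategy memo. Attorney-client privileged."
print("upload permitted" if allow_upload(draft) else "upload blocked")  # -> upload blocked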

Practice-Specific Implications

Corporate M&A and Securities: Due diligence documents, merger agreements, and material nonpublic information carry regulatory exposure under SEC Rule 10b-5 if they reach unauthorized third parties. An associate uploading a draft acquisition agreement to ChatGPT for formatting creates the same data flow Gottumukkala triggered with contracting documents, except with securities law consequences attached.

Litigation: Work product doctrine protects attorney mental impressions, conclusions, and legal theories under Federal Rule of Civil Procedure 26(b)(3). Uploading litigation strategy memos to a public AI platform for analysis constitutes voluntary disclosure to a third party outside the privilege umbrella. Opposing counsel’s discovery request asking whether the firm used AI tools on the matter creates immediate exposure.

Criminal Defense: The Sixth Amendment right to effective assistance of counsel incorporates the confidentiality of attorney-client communications. Defense counsel who uploads client statements, plea negotiation strategies, or witness interview summaries to a public AI platform risks Strickland claims that the disclosure prejudiced the defense.

What You Can Do Tomorrow

Audit your AI access controls. Identify every AI tool accessible from firm devices. Determine which tools retain user inputs for training. Document which tools operate under enterprise agreements with no-training clauses and data processing agreements. Deploy DNS-level filtering (such as Cisco Umbrella or Cloudflare Gateway) to block consumer AI platforms on firm networks. Endpoint DLP tools like Microsoft Purview can flag or prevent uploads of content matching client-identifiable patterns. Block unauthorized tools at the network level, not by policy memo.
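As a starting point for that audit, here is a small Python sketch that checks whether consumer AI domains still resolve from a firm device. The domain list is illustrative, and a domain that fails to resolve is presumed blocked at the DNS layer.

import socket

# Illustrative audit list; substitute the consumer AI platforms your
# DNS filter is supposed to block on firm devices.
CONSUMER_AI_DOMAINS = ["chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"]

def resolves(domain: str) -> bool:
    """True if the domain resolves, i.e., the DNS filter is not blocking it."""
    try:
        socket.getaddrinfo(domain, 443)
        return True
    except socket.gaierror:
        return False

for domain in CONSUMER_AI_DOMAINS:
    status = "REACHABLE (policy gap)" if resolves(domain) else "blocked"
    print(f"{domain}: {status}")

Note that some filters answer blocked lookups with a sinkhole address rather than refusing to resolve, so confirm how your filter signals a block before trusting a REACHABLE result.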

Eliminate exception pathways. If the firm’s AI policy prohibits uploading client data to public platforms, the policy applies to the managing partner, the senior litigator, and the summer associate equally. The CISA incident proves that seniority correlates with confidence, not with data handling discipline.

Deploy approved alternatives. Require enterprise AI tools with contractual no-training provisions before permitting any AI use on client matters. Include AI use provisions in engagement letters. Obtain informed client consent and document it. The technology exists to provide AI capabilities without sending client data to public platforms.

Update engagement letters and intake procedures. Address AI tool use explicitly. Specify which tools the firm uses, how client data flows through those tools, what contractual protections exist with AI vendors, and whether client consent covers AI-assisted work. Silence on AI use in engagement letters will become a liability as malpractice carriers begin auditing technology practices.

The Kicker

CISA’s automated security systems caught him. The alerts fired. The chief counsel scheduled a meeting. DHS conducted a review. Every piece of the institutional machinery worked exactly as designed. None of it changed the fundamental exposure. The data had already crossed the perimeter. Four documents marked for restricted handling sat on OpenAI’s servers, inside a platform serving more than 700 million users, beyond the reach of any federal recall authority.

The person entrusted with defending America’s civilian cyber infrastructure could not resist the convenience of a consumer AI tool sitting in a browser tab. He had the training, the credentials, the staff, and the institutional knowledge. He had an approved alternative one click away. He chose the public platform anyway.

Your associates, your paralegals, and your contract attorneys face the same choice every day. They do not have a doctorate in information systems. They do not have automated DLP sensors monitoring their keystrokes. They do not have a chief counsel waiting to schedule a corrective meeting. They have a browser, a deadline, and a client file that needs work at 11 PM on a Tuesday.

If the architecture does not prevent the upload, the policy will not either. Gottumukkala proved it.

This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.

About the Author

JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.

Connect: LinkedIn | X | Bluesky

References

[1] Belanger, Ashley. “US Cyber Defense Chief Accidentally Uploaded Secret Government Info to ChatGPT.” Ars Technica, January 29, 2026.

[2] Politico. “CISA Acting Director Uploaded Sensitive Docs to ChatGPT.” January 28, 2026.

[3] CSO Online. “CISA Chief Uploaded Sensitive Government Files to Public ChatGPT.” February 2026.

[4] DHS Management Directive 11042.1. “Safeguarding Sensitive But Unclassified (For Official Use Only) Information.” January 6, 2005.

[5] DoD Directive 5400.7. “DoD Freedom of Information Act (FOIA) Program.”

[6] 32 C.F.R. Part 518. “For Official Use Only: Unauthorized Disclosure.”

[7] 5 U.S.C. § 552a. Privacy Act of 1974.

[8] CyberScoop. “Lawmakers Probe CISA Leader over Staffing Decisions.” January 22, 2026.

[9] Cybersecurity Dive. “Acting CISA Chief Defends Workforce Cuts, Declares Agency ‘Back on Mission.’” January 22, 2026.

[10] Federal News Network. “Lawmakers Press Acting CISA Director on Workforce Reductions.” January 22, 2026.

[11] Politico. “Acting CISA Director Failed a Polygraph. Career Staff Are Now Under Investigation.” December 21, 2025.

[12] Fox News. “DHS Disputes Reports That Acting CISA Director Madhu Gottumukkala Failed a Polygraph.” December 22, 2025.

[13] The Cyber Express. “CISA Director Polygraph Test Fallout Spurs DHS Investigation.” December 22, 2025.

[14] Gizmodo. “Oops: Samsung Employees Leaked Confidential Data to ChatGPT.” April 6, 2023.

[15] Dark Reading. “Samsung Engineers Feed Sensitive Data to ChatGPT, Sparking Workplace AI Warnings.” April 2023.

[16] SamMobile. “Samsung Lets Employees Use ChatGPT Again After Secret Data Leak in 2023.” May 1, 2025.

[17] Cyberhaven Labs. “2025 AI Adoption and Risk Report.” April 23, 2025. Based on AI usage patterns of 7 million workers.

[18] ABA Model Rule 1.6: Confidentiality of Information, Comment 18.

[19] ABA Model Rule 1.1: Competence, Comment 8.

[20] ABA Formal Opinion 477R. “Securing Communication of Protected Client Information.” May 11, 2017.

[21] TechCrunch. “Trump’s Acting Cybersecurity Chief Uploaded Sensitive Government Docs to ChatGPT.” January 28, 2026.

[22] BankInfoSecurity. “AI Use by CISA Chief Alarms Cyber Officials.” January 2026.

[23] Government Executive. “Democrats Press CISA’s Acting Chief over Major Staffing Cuts.” January 22, 2026.

[24] MeriTalk. “CISA Chief Denies Polygraph Claims, Faces Workforce Scrutiny.” January 21, 2026.

[25] Morris, JD. “AI Won’t Take Your Job. The Attorney Who Uses It Better Will. Part 2: The Ten Traps.” Morris Legal Technology Blog.

[26] Morris, JD. “The Six-Week Silence: When Law Firms Delay Breach Disclosure.” Morris Legal Technology Blog.

[27] Morris, JD. “Why Hackers Target Law Firms.” Morris Legal Technology Blog.

[28] CyberPress. “CISA Chief Accidentally Uploads Sensitive Government Documents to Public ChatGPT.” January 2026.

[29] Government Executive. “CISA’s Acting Chief Says 70 Staff Were Reassigned to Other DHS Offices in Last Year.” February 2026.
