
If Your Firm Uses Claude, Your Government Contracts Are Now at Risk

AI · Cybersecurity · Law Firm Security · Supply Chain

Ultra Vires, Retaliation, and the Administration That Won’t Listen to Its Own Lawyers

THE TECHNOLOGY BLIND SPOT – JD Morris | March 2026

On the afternoon of February 27, 2026, Undersecretary of Defense Emil Michael was on the phone with Anthropic executives, offering a deal. At the same moment, Defense Secretary Pete Hegseth posted on X that Anthropic had been designated a supply chain risk to national security. The Pentagon’s own negotiator did not know the Pentagon’s own secretary had already pulled the trigger.

If you are an attorney at a firm that uses Claude, Anthropic’s AI model, and your firm holds any federal contract, subcontract, or grant with Pentagon exposure, that post just created an obligation you need to assess. Hegseth declared that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Effective immediately.

Read that again. Not “may not use Claude on defense work.” Any commercial activity. If your firm uses Claude for legal research on a personal injury case and also holds a subcontract with a defense contractor, Hegseth’s declaration purports to cover you.

The good news: the declaration almost certainly exceeds his statutory authority. The bad news: the chilling effect does not wait for a court to say so.

The Direct Answer

Anthropic’s designation as a supply chain risk is legally unprecedented, procedurally deficient, and substantively unsupportable under the statutes the government invoked. Every prior public use of this authority targeted foreign companies with documented espionage obligations to adversarial governments. Anthropic is an American company that refused to remove two safety guardrails from an AI model that its competitor OpenAI subsequently embedded in its own Pentagon contract. The secondary boycott provision, which purports to bar all commercial activity with Anthropic by any Pentagon-affiliated entity, exceeds anything Congress authorized under 10 U.S.C. § 3252 or FASCSA. The designation should be challenged and overturned, and the administration’s retaliatory rhetoric may independently support claims for tortious interference and reputational harm.

For law firms using Claude: if you hold federal contracts or subcontracts with defense exposure, conduct a vendor assessment this week. Not because the secondary boycott is enforceable. Because your clients will ask, and you need an answer before they do.

Ten Thousand Lawyers and They Chose This

The Department of Justice employs approximately 10,000 attorneys. The Department of Defense Office of General Counsel is the largest law office in the world. The executive branch has more legal talent at its disposal than any entity on the planet.

So why does this administration keep choosing unlawful methods to accomplish objectives it could achieve through lawful means?

The Pentagon wanted unrestricted use of Claude. The government’s strongest argument deserves its strongest form: military operations require technology that works without contractual limitations when lives are at stake. A commander planning a time-sensitive strike cannot pause to consult a vendor’s acceptable use policy. Operational flexibility is not a bureaucratic preference. It is a warfighting necessity. The Pentagon’s position that it already operates under legal constraints, that federal law prohibits mass domestic surveillance and internal policy restricts autonomous weapons, is not unreasonable. If those constraints exist in law, the argument goes, contractual duplication is redundant at best and operationally dangerous at worst. Emil Michael made this case directly: “At some level, you have to trust your military to do the right thing.”

That argument is valid as far as it extends. It does not extend far enough. Laws can be repealed. Internal policies can be revised by memorandum. Executive orders can be rescinded overnight. Anthropic’s contractual guardrails existed precisely because the legal and policy constraints the Pentagon cited are not permanent. They are political choices that change with administrations. A contractual prohibition survives an election. A policy directive does not. More fundamentally, the operational flexibility argument does not justify the remedy chosen. The government had lawful tools available to address the contract dispute: renegotiate terms, terminate for convenience under standard FAR clauses, pursue a directed procurement with a different vendor, or invoke the Defense Production Act if it could demonstrate genuine national security necessity. Each option has procedural requirements. Each has legal constraints. Each provides the company with notice and an opportunity to respond.

Instead, the administration chose a supply chain risk designation, an authority Congress created to protect military systems from foreign adversary sabotage, and aimed it at a domestic company over a contract dispute. It skipped the required risk assessment. It appears to have skipped congressional notification. It imposed the designation three days after the secretary met with Anthropic’s CEO, leaving no time for the deliberative process the statute contemplates. And it did so while a senior Pentagon official was still on the phone negotiating terms.

This is not the first time this administration has reached for extra-legal mechanisms when lawful alternatives exist. But for attorneys advising technology clients and government contractors, this instance crystallizes the pattern. The executive branch has lawyers. It has process. It has statutory authority tailored to virtually every procurement scenario imaginable. When it bypasses all of that to bully a vendor through a social media post, the question for every attorney in the room is not whether the action is lawful. The question is why an administration with ten thousand lawyers concluded that lawfulness was optional.

Word Still Cannot Spell After Forty Years

Microsoft Word shipped in 1983. Forty-three years later, it still underlines correctly spelled words, suggests replacements no native speaker would recognize, and turns “its” into “it’s” at random. The autocorrect function, after four decades of development by the most valuable company on earth, cannot reliably distinguish a possessive from a contraction.

Generative AI has existed in its current form for approximately three years. No major model has exited beta in any meaningful sense. As I documented in “AI Won’t Take Your Job: The Competence Obligation,” a Stanford RegLab study found that leading legal AI tools hallucinate in one out of every six queries. ABA Formal Opinion 512 requires attorneys to independently verify every AI-generated output before submitting it to a court. The ABA did not impose that requirement because the technology is ready for unsupervised deployment. It imposed it because the technology is not.

Anthropic’s two guardrails reflected this engineering reality: do not use our model for mass domestic surveillance of Americans, and do not use it to fire weapons without human involvement. Dario Amodei told CBS News that “there are things the technology just isn’t ready for.” Every credible AI researcher has said the same thing for three years. OpenAI said the same thing the same week, in the same contract, with the same Pentagon. Musk’s xAI accepted every Pentagon demand without restriction and is slated to deploy Grok on classified networks.

Word cannot spell. AI cannot reliably distinguish real case law from fabricated citations. And the Pentagon wants to hand it autonomous lethal authority. Anthropic said that was premature. For that, it got treated like Huawei.

Anthropic Is Not Huawei: The Comparison That Should Embarrass the Administration

Every prior high-profile supply chain risk designation targeted a foreign company with documented ties to an adversarial government.

Huawei: substantial ties to the Chinese military, obligations under Chinese law to cooperate with intelligence requests, two decades of documented cybersecurity concerns from U.S. intelligence agencies. ZTE: designated on identical grounds. Kaspersky Lab: removed from federal systems after confirmation that Russian law compels cooperation with the FSB. ByteDance: executive orders based on obligations under Article 7 of China’s National Intelligence Law, which requires all organizations and citizens to support, assist, and cooperate with national intelligence work. DJI: Entity List placement on national security grounds.

Anthropic is headquartered in San Francisco. No foreign government ownership. No foreign intelligence obligation. No accusation of espionage, sabotage, or malicious code introduction. According to Lawfare’s analysis, the company’s national security track record runs in the opposite direction: first frontier AI firm to deploy on classified networks, cut off CCP-linked firms at a cost of hundreds of millions in revenue, and shut down CCP-sponsored cyberattacks that attempted to abuse Claude.

Placing Anthropic on the same list as Huawei is not a reasoned policy determination. It is commercial poisoning by association. It tells every American technology company that negotiating contract terms with the federal government carries the same legal risk as operating as an arm of a hostile foreign intelligence service.

What an Actual Supply Chain Compromise Looks Like

As I documented in “The Backdoor to Your Client’s Inbox” and “I Was Inside EMC When Hackers Stole the Keys to 40 Million Doors,” Salt Typhoon compromised at least nine major U.S. telecom carriers and accessed systems handling court-authorized wiretaps. The breach persisted for two years. Senate testimony in December 2025 confirmed the compromised carriers had not proven the hackers fully left their networks. Professor Matt Blaze traced the vulnerability to CALEA infrastructure that Congress required companies to build.

Consider the counterfactual. If the United States had deployed Huawei networking equipment across its telecommunications backbone, as Huawei aggressively marketed throughout the 2010s, Salt Typhoon would not have needed to hack anything. Article 7 of China’s National Intelligence Law would have obligated Huawei to provide access. The attack surface would not have been a vulnerability to exploit. It would have been a door with a key held in Beijing.

That is what a supply chain risk looks like. Foreign ownership. Foreign government control. Legal obligation to compromise the systems you build. Anthropic is an American company that told the American military its technology is not mature enough for certain applications. The distance between those two facts is the distance between the statute’s purpose and its abuse.

The Contradiction the Government Cannot Survive in Court

The government held two positions in the same week. On Monday, it threatened to invoke the Defense Production Act to compel Anthropic to provide Claude, on the theory that the technology was too essential to national defense to forgo. By Friday, it declared Anthropic a supply chain risk too dangerous to use.

Both characterizations cannot be true. The R Street Institute’s analysis captured the logic precisely: DPA invocation requires that the technology be indispensable to national defense. Supply chain risk designation requires that it pose a direct threat. The administration held both positions simultaneously, which means neither resulted from a genuine national security assessment.

It gets worse. Hegseth declared the designation required emergency exclusion and then approved a six-month transition period during which Claude remains integrated in classified military networks. Reports indicate U.S. strikes in Iran used Anthropic’s technology hours after Trump announced the ban. The government cannot coherently argue a vendor poses an acute supply chain threat while continuing to use that vendor for active combat operations.

And the kill shot for litigation: hours after designating Anthropic, the Pentagon accepted a deal from OpenAI containing the identical red lines. No mass surveillance. No autonomous weapons. OpenAI added a third prohibition on high-stakes automated decision-making. The government accepted from a competitor the exact restrictions it declared grounds for treating Anthropic like a Chinese espionage front.

Altman admitted the deal was “definitely rushed” and “looked opportunistic and sloppy.” MIT Technology Review concluded that OpenAI’s contract relies on an “all lawful use” clause with references to existing law, not the explicit contractual prohibitions Anthropic sought. Whether that approach provides genuine protection is debatable. What is not debatable: the government punished Anthropic for a position it simultaneously accepted from OpenAI.

Ultra Vires, Retaliation, and Direct Bullying

The administration’s public statements removed any pretense that this was a deliberative national security determination.

President Trump: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”

Secretary Hegseth: Anthropic delivered “a master class in arrogance and betrayal.” Its position is “fundamentally incompatible with American principles.”

Undersecretary Michael: Amodei is a “liar” with a “God complex” who is “ok putting our nation’s safety at risk.”

A senior Pentagon official to Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand.”

That last quote is the one that should matter most to every attorney reading this. “Make sure they pay a price.” Not “protect national security.” Not “mitigate a genuine risk.” Pay a price. For refusing contract terms.

Elon Musk, whose xAI accepted the Pentagon’s terms without restriction, posted on X that “Anthropic hates Western Civilization.” Treasury Secretary Scott Bessent announced publicly that his department was terminating all Anthropic products. The Department of Health and Human Services directed employees to switch to ChatGPT and Gemini. The market answered the bullying by pushing Claude to the number one position on Apple’s App Store, overtaking ChatGPT for the first time.

The industry responded within hours. 573 Google employees and 93 OpenAI employees signed an open letter titled “We Will Not Be Divided,” urging their companies to stand with Anthropic. A separate letter to the Pentagon and Congress, signed by 121 technology leaders from OpenAI, Slack, IBM, Cursor, and Salesforce Ventures, stated: the federal government should not retaliate against a private company for declining to accept changes to a contract. OpenAI researcher Boaz Barak called mass surveillance his own “personal red line.” Google DeepMind Chief Scientist Jeff Dean called it a violation of the Fourth Amendment.

Strip away the rhetoric and evaluate the factual record. Anthropic did not “strong-arm” anyone. It negotiated contract terms, as every government vendor does. It did not “betray” the military. It declined to remove safety restrictions that existed in its acceptable use policy when the Pentagon signed the original $200 million contract in July 2025. Amodei did not “lie.” He maintained a consistent engineering assessment shared by researchers across the industry, including at OpenAI, whose CEO told his own employees they had the same red lines.

Government officials enjoy qualified immunity for statements made within the scope of official duties. That immunity does not extend to ultra vires conduct. When officials exceed their statutory authority and make false statements designed to destroy a company’s commercial relationships, those statements can support claims for tortious interference and injurious falsehood. The Westfall Act does not shield federal employees whose actions fall outside the scope of their lawful authority. And when the explicit, stated purpose of government action is to “make sure they pay a price” for exercising a contractual right, First Amendment retaliation doctrine applies.

The Pattern Every Technology Company Should Recognize

On March 3, 2026, the same week the Anthropic designation was announced, the Department of Justice dropped its defense of executive orders targeting four of the nation’s most prominent law firms: Perkins Coie, WilmerHale, Jenner & Block, and Susman Godfrey. Four different federal judges had found those orders unconstitutional. Judge Beryl Howell called the Perkins Coie order “an unprecedented attack” on the legal system. Judge Loren AliKhan said the Susman Godfrey order was “based on a personal vendetta” and called it a “shocking abuse of power.”

The administration abandoned those appeals. But not before extracting roughly $940 million in free legal services from nine other firms that cut deals rather than fight. Seven of those nine firms had no executive order issued against them. The threat alone was enough. Paul Weiss agreed to $40 million in pro bono work on causes the administration supports. Skadden pledged $100 million and triggered a revolt among its own associates, generating an open letter signed by over 1,500 lawyers across the profession.

The pattern is identical to the Anthropic situation. Target a company for exercising a legal right. Impose commercially devastating consequences through executive fiat. Use the example to coerce others into compliance. When courts overturn the action, the damage is already done. As one legal scholar at William and Mary observed, the administration does not appear to care whether the orders survive judicial review. The point is the chill.

NPR documented over 100 perceived enemies targeted through government powers in the administration’s first 100 days, deploying the Departments of Justice, Defense, Homeland Security, Education, Health and Human Services, the IRS, the GSA, the FCC, the ODNI, and the EEOC as instruments of retaliation. The Associated Press was banned from the Oval Office. A federal judge found the ban violated the First Amendment. The administration appealed.

Anthropic is not the first target. It will not be the last. Every technology company that does business with the federal government should understand what the law firm experience teaches: there is no safety in appeasement. The firms that fought won. The firms that capitulated paid.

OpenAI’s Faustian Bargain

Sam Altman told CNBC the deal was “definitely rushed” and “looked opportunistic and sloppy.” He admitted that OpenAI shares Anthropic’s red lines. He said publicly, “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.”

Then he signed the contract anyway. Hours later, Anthropic’s Claude overtook ChatGPT at the top of Apple’s App Store.

OpenAI published the contract language. It permits the Pentagon to use its AI for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” The company claims three red lines: no mass surveillance, no autonomous weapons, no high-stakes automated decisions like social credit systems. But the surveillance protection references Executive Order 12333, the Reagan-era directive governing how U.S. intelligence agencies collect information, and prohibits only “unconstrained monitoring” of Americans. As Techdirt’s Mike Masnick documented, EO 12333 is the legal framework the NSA used for decades to conduct bulk domestic metadata collection. “Unconstrained” implies that a constrained version of mass surveillance would be permissible. Anthropic wanted a binding prohibition on the use of commercially available data, the geolocation records and browsing histories that data brokers sell about Americans. The Pentagon refused. OpenAI’s contract is silent on the issue.

The backlash was immediate. OpenAI’s own researcher Noam Brown, architect of the prior year’s reasoning model breakthrough, publicly objected before safeguards were added. By March 2, OpenAI revised the contract to add explicit Fourth Amendment protections. The revision confirmed what Anthropic argued from the start: the original terms were insufficient. The question for every technology company watching is what happens when those revised terms meet operational pressure.

Because the failure scenarios are not hypothetical.

In 2003, a U.S. Patriot missile battery operating in automatic mode misidentified a British Tornado aircraft as an incoming Iraqi anti-radiation missile. The system fired. Flight Lieutenant Kevin Main and Flight Lieutenant Dave Williams died. The Patriot’s targeting algorithm could not distinguish an allied aircraft from an enemy weapon. After the incident, the U.S. Army changed doctrine: air threats can now only be engaged in manual mode “to reduce the risk of fratricide.” More than two decades later, the Pentagon is pushing to remove the manual requirement from AI systems.

In August 2021, a U.S. drone strike in Kabul killed Zemari Ahmadi and nine members of his family, including seven children. The military’s AI-assisted surveillance had flagged Ahmadi, an aid worker loading water containers into his car, as an ISIS-K operative. The targeting algorithm matched pattern-of-life indicators that turned out to be a man doing his job. General Mark Milley initially called the strike “righteous.” The Pentagon later admitted it was a tragic mistake.

The week the Pentagon designated Anthropic a supply chain risk, Professor Kenneth Payne of King’s College London published a study pitting GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other in 21 nuclear crisis simulations. Across 329 turns of play, the models deployed tactical nuclear weapons in 95% of games. No model, in any game, ever chose surrender or accommodation. Eight de-escalatory options were on the menu. No model selected any of them. When losing, they escalated or died trying. Claude recommended nuclear strikes in 86% of its games, the highest rate among the three models. Payne’s assessment: the models showed “little sense of horror or revulsion at the prospect of all-out nuclear war.” The nuclear taboo that has constrained human decision-makers since 1945 does not appear to exist in frontier AI systems. Anthropic built the model. Anthropic knows what it does under pressure. That is why Anthropic said “not yet.” The Pentagon argues that operational flexibility requires removing contractual guardrails. The simulation data shows what flexibility without guardrails produces: escalation that never reverses.

The International Committee of the Red Cross has documented that AI targeting systems can misclassify individuals as combatants based on connections as tangential as having attended the same school or sharing a mutual contact. Researchers have demonstrated that adversarial stickers on stop signs can trick object-recognition systems into classifying them as speed limit signs. In military contexts, this vulnerability means adversaries could manipulate autonomous systems into targeting civilian infrastructure, friendly forces, or, yes, a school or a foreign embassy.

When that happens, and the engineering consensus is not whether but when, the company whose model made the decision will face a political firestorm that makes the current Anthropic dispute look like a contract negotiation. Which it is. The administration that punished Anthropic for saying “not yet” will not protect the company that said “yes” when the consequences arrive. Ask Mike Pence. Ask Bill Barr. Ask James Mattis, Mark Esper, Jeff Sessions, Rex Tillerson, or any of the twenty-four former Trump aides and allies CNN documented who were embraced and then discarded when they became inconvenient.

Martin Luther King Jr. identified the dynamic sixty years ago: “Cowardice asks the question, ‘Is it safe?’ Expediency asks the question, ‘Is it politic?’ Vanity asks the question, ‘Is it popular?’ But conscience asks the question, ‘Is it right?’ And there comes a time when one must take a position that is neither safe, nor politic, nor popular, but one must take it because it is right.”

Anthropic took that position. The rest of the industry can race to fill the vacuum, or it can recognize that the precedent being set applies to every company that will ever negotiate a federal contract. The firms that stood up to the law firm executive orders won in court and forced the DOJ to abandon its appeals. The firms that capitulated are doing free legal work for the administration that threatened them. History will remember which side of that line each company chose.

King also said: “In the end, we will remember not the words of our enemies, but the silence of our friends.”

The Uniform Code of Military Justice Problem Nobody Is Discussing

Pete Hegseth was commissioned as an infantry officer in the Army National Guard. He rose to the rank of major. He deployed to Guantánamo Bay, Iraq, and Afghanistan. He knows the Uniform Code of Military Justice. He should know what it prohibits.

In January 2026, Hegseth censured Senator Mark Kelly, a retired Navy captain, for participating in a video calling on American troops to refuse unlawful orders. Hegseth characterized the video as seditious and conduct unbecoming an officer under Article 133, UCMJ. He initiated proceedings to reduce Kelly’s retired rank and pension. Eugene Fidell, senior research scholar in military law at Yale Law School, called the claim “ludicrous.” Military law experts noted that if Kelly had actually committed sedition, the Pentagon would have court-martialed him. They censured him instead because, as one expert explained, “you don’t give a letter of censure to someone who is committing sedition.” Kelly filed suit. The Pentagon is appealing a judge’s order blocking the punishment.

The irony requires no interpretation. The same secretary who invoked Article 133 to punish a retired officer for telling troops to refuse unlawful orders then issued a supply chain risk designation that multiple legal scholars have called unlawful. The question the UCMJ raises cuts in both directions.

Article 92: the lawful order requirement. UCMJ Article 92 prohibits failure to obey lawful general orders. The word “lawful” is not decoration. It is a jurisdictional requirement. An order that exceeds statutory authority is not a lawful order. Officers who execute procurement actions they know to be ultra vires face potential liability under Article 92. The defense that “I was following orders” does not survive UCMJ scrutiny when the order itself is unlawful. Every JAG officer in the building knows this. The question is whether anyone said it out loud before the designation posted on X.

Article 133: conduct unbecoming. Officers who participate in what courts later determine to be retaliatory action against a civilian company for exercising contractual rights face exposure under Article 133. The Manual for Courts-Martial defines conduct unbecoming to include acts of “injustice” and “unfair dealing.” Publicly characterizing a vendor’s contract negotiation as a national security threat, when six independent legal analyses conclude the designation exceeds statutory authority, raises exactly the question Article 133 exists to address.

For senior officials with retained military status: Article 133 applies to commissioned officers. Article 134, the general article, prohibits conduct prejudicial to good order and discipline or conduct that brings discredit upon the armed forces. Publicly calling a vendor’s CEO a “liar” with a “God complex,” characterizing a company’s contractual negotiation as “a master class in arrogance and betrayal,” and issuing a designation that multiple independent legal analyses have called ultra vires raises questions that military lawyers recognize even if political appointees do not. Whether any officer with retained UCMJ jurisdiction participated in the designation decision is a question discovery would answer.

The Department of Defense is the largest employer of attorneys in the world. Its Office of General Counsel employs thousands of lawyers whose specific function is to ensure that military actions comply with law. Somewhere in that building, JAG officers reviewed the supply chain risk designation before it was announced. Or they did not, which is worse. Either military lawyers approved an action that the Lawfare Institute, Just Security, R Street, Mayer Brown, the Brennan Center, and every independent legal analysis published to date has concluded exceeds statutory authority, or the designation was issued without legal review. Both outcomes implicate the UCMJ’s requirements for lawful conduct by commissioned officers.

Hegseth told the Senate during his confirmation that “we want lawyers who give sound constitutional advice” rather than “roadblocks to anything.” The UCMJ exists precisely because the military’s lawyers are supposed to be roadblocks. Roadblocks to unlawful orders. Roadblocks to ultra vires action. Roadblocks to the abuse of military procurement authority for political retaliation. When the secretary of defense says he does not want roadblocks, the commissioned officers who serve under him should hear Article 133 very clearly.

Why Anthropic Should Litigate Aggressively and Seek Damages

Anthropic has stated publicly that it will challenge the designation in court. Based on the public record, it should go further.

Ultra vires secondary boycott. Hegseth’s declaration that no contractor may conduct “any commercial activity” with Anthropic extends far beyond Section 3252’s authorization. The statute permits exclusion from specific covered defense information technology systems. Congress did not authorize a blanket prohibition on all commercial relationships. As Anthropic itself noted, and as Mayer Brown’s analysis confirmed, FASCSA orders apply only to contractors’ federal work, not their commercial activities. Even Just Security concluded Congress “hasn’t given the Secretary, or any other officials, the power to dictate with whom defense contractors can do ‘any’ business.”

Procedural failures. Section 3252 requires a risk assessment, congressional notification, and a finding that less intrusive alternatives were exhausted. Three days from the Hegseth-Amodei meeting to designation leaves no time for statutory deliberation. Legal analysts at Fortune, Lawfare, and Mayer Brown all noted these requirements appear unmet. The Brennan Center’s Amos Toh specifically questioned whether the Pentagon could claim a good faith effort to pursue less intrusive measures given the speed of escalation.

The indispensable-or-dangerous contradiction. The DPA threat and the supply chain designation are logically incompatible. The government argued in the same week that Claude is too essential to national defense to forgo and too dangerous to national security to permit. A court will notice.

Constitutional claims. Under Webster v. Doe (1988), constitutional claims survive broad judicial review bars unless Congress clearly precludes them. Section 3252 does not. Anthropic has viable Due Process and First Amendment claims. The retaliation theory is particularly strong: the government punished Anthropic for declining contract terms, and the administration’s own public statements frame the designation as punishment, not security.

Tortious interference and reputational damages. Labeling Anthropic’s leadership “liars” and “Leftwing nut jobs” while imposing a commercially devastating designation causes concrete, quantifiable harm. Eight of the ten largest U.S. companies use Claude. Anthropic’s anticipated IPO, at a valuation of approximately $380 billion, now faces regulatory uncertainty. Defense contractors must certify non-use. Enterprise customers with Pentagon exposure are evaluating alternatives. These are not speculative damages.

Punitive damages. The stated intent to “make sure they pay a price” is evidence of punitive purpose, not legitimate regulatory purpose. If Anthropic can demonstrate the designation was undertaken with knowledge that it exceeded statutory authority and with specific intent to inflict commercial harm as punishment for protected conduct, punitive damages are viable. The public record is already building that case. Tess Bridgeman at Just Security called the designation “attempted corporate murder.” The description is legally imprecise but commercially accurate.

What This Means for Your Practice on Monday Morning

If your firm uses Claude and holds any federal contract, subcontract, or grant with defense exposure, you need to do three things this week.

First, assess your exposure. Identify every workflow where Claude touches work product related to federal contracts. Anthropic’s own statement clarifies that, legally, the designation can extend only to use of Claude as part of Department of Defense contracts, not to how contractors use Claude for other customers. That is almost certainly correct under the statute. But the chilling effect of Hegseth’s broader declaration will prompt compliance officers to ask questions. Have your answer ready.

Second, advise clients with Pentagon exposure. If you represent government contractors, they need to understand the scope of what was actually designated versus what was declared on social media. The gap between those two things is enormous. Mayer Brown’s analysis provides a useful framework: contractors should review SAM.gov for formal FASCSA orders, conduct reasonable inquiry into their supply chains, and prepare documentation justifying continued use if Claude is critical to performance. A waiver mechanism exists under FASCSA. Practical availability of that mechanism in this political environment is a different question.
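The supply-chain review Mayer Brown describes, checking a vendor inventory against formal SAM.gov designations, can be sketched in a few lines. This is a hypothetical illustration only: the CSV layouts, column names (`entity_name`, `vendor`, `subprocessors`, `contract_id`), and exact-match logic are assumptions, not a real SAM.gov export format or API. Real diligence would need fuzzy name matching and subsidiary mapping.

```python
import csv

def load_fascsa_designations(path):
    """Load entity names from a hypothetical CSV export of formal
    FASCSA orders (the column layout here is an assumption)."""
    with open(path, newline="") as f:
        return {row["entity_name"].strip().lower() for row in csv.DictReader(f)}

def audit_vendors(vendor_path, designations):
    """Flag contracts where a vendor, or any listed subprocessor,
    matches a designated entity. Exact lowercase comparison is a
    deliberate simplification for illustration."""
    flagged = []
    with open(vendor_path, newline="") as f:
        for row in csv.DictReader(f):
            chain = [row["vendor"]] + row.get("subprocessors", "").split(";")
            hits = [v for v in chain if v.strip().lower() in designations]
            if hits:
                flagged.append({"vendor": row["vendor"], "matches": hits,
                                "contract": row.get("contract_id", "")})
    return flagged
```

The output of a sweep like this is what feeds the waiver documentation: a contract-by-contract record of where the designated technology appears, prepared before the contracting officer asks.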

Third, evaluate the precedent. Every technology company that negotiates with the federal government now faces a new risk: that contract terms accepted today can become the basis for punitive action tomorrow. As I documented in “17 Subprocessors Deep,” vendor risk assessment already requires diligence beyond the primary contract. This designation adds political risk to the matrix. The question is no longer just whether your vendor is secure. It is whether your vendor has refused a demand from an administration willing to weaponize procurement authorities designed for foreign adversaries.

Government contracts attorneys: Your client is a mid-size defense subcontractor. Their engineering team uses Claude for technical documentation on a $50 million SBIR deliverable. The contracting officer calls Thursday asking whether any Anthropic technology touches the project. Your client needs a FASCSA exposure audit before that call. Map every AI vendor in the supply chain against SAM.gov designations. Prepare waiver documentation under the FASCSA framework. The designation is almost certainly unlawful, but procurement debarment fights take years. Your client cannot wait for litigation to resolve what the contracting officer demands next month.

Corporate and M&A attorneys: Your client is three weeks from closing an acquisition of an AI analytics firm with DOD contracts. The target’s primary product runs on Claude. Political procurement risk is now part of due diligence. A vendor that refuses a government demand today could be designated tomorrow. A vendor that accepts every demand faces liability when the technology fails. Neither posture is safe. Both require contractual allocation of regulatory risk that did not exist six months ago.

Technology and IP attorneys: Advise every AI client to document government interactions with the specificity of litigation hold preservation. Anthropic’s ability to challenge this designation depends on contemporaneous records of what was demanded, refused, and threatened. The Axios reporting, the X posts, the CBS interviews exist because someone kept receipts. Your clients should assume every federal negotiation is now a potential exhibit.

If you advise technology companies that sell to the federal government, this is the engagement letter conversation you should be having right now. Not about AI policy. About procurement retaliation.

The executive branch employs more attorneys than most countries. It has statutory authority tailored to every procurement scenario Congress could anticipate. It has administrative processes, regulatory frameworks, and judicial review mechanisms designed to protect both national security and due process simultaneously.

It chose a social media post instead.

When asked to explain, a senior official said the point was to “make sure they pay a price.”

Four law firms stood up to this administration’s executive orders this year. Four federal judges found those orders unconstitutional. Today the DOJ abandoned every appeal. The firms that fought won. The firms that capitulated are still paying.

Microsoft Word shipped in 1983. After forty-three years, it still cannot spell. Generative AI has existed for three, and the Pentagon wants it to decide who lives and who dies without a human in the loop. The company that said “not yet” is now on the same list as Huawei. The company that said “yes” will learn soon enough what everyone who has said “yes” to this administration eventually learns.

The courts will fix the designation. The precedent it sets in the meantime is the part that should keep you awake. Because if “no” is a supply chain risk, then every technology company in America is one contract negotiation away from standing where Anthropic stands today. And as four law firms just proved, the only safe answer to a bully who demands capitulation is the one Anthropic gave: “No.”

This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.

About the Author

JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.

LinkedIn: www.linkedin.com/in/jdavidmorris | X: @JDMorris_LTech | Bluesky: @JDMorris-ltech.bsky.social

References

10 U.S.C. § 3252 (Supply Chain Risk Authorities, Department of Defense)

Federal Acquisition Supply Chain Security Act (FASCSA), 41 U.S.C. §§ 1321–1328, 4713

FAR 52.204-30 (FASCSA Contractor Obligations)

Defense Federal Acquisition Regulation Supplement, Subpart 239.73

Webster v. Doe, 486 U.S. 592 (1988) (Constitutional claims survive broad judicial review bars)

ABA Formal Opinion 512 (July 2024): Generative Artificial Intelligence Tools

ABA Model Rule 1.1, Comment 8 (Technology Competence)

National Intelligence Law of the People’s Republic of China, Article 7 (June 27, 2017)

FCC Designation of Huawei as National Security Threat, 85 Fed. Reg. 42,893 (July 14, 2020)

Kaspersky Lab BIS Final Determination, OICTS (2024)

FBI Cybersecurity Advisory, Salt Typhoon Telecom Infrastructure Breach (December 2024)

Professor Matt Blaze, Congressional Testimony on CALEA Vulnerability (April 2025)

Anthropic, Statement on Secretary of War Comments (February 27, 2026)

Lawfare, “Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System” (March 2, 2026)

Just Security, “What Hegseth’s Supply Chain Risk Designation Does and Doesn’t Mean” (March 2, 2026)

R Street Institute, “Anthropic, the Pentagon, and the AI Innovation Ecosystem” (March 3, 2026)

Mayer Brown, “Pentagon Designates Anthropic a Supply Chain Risk: What Contractors Need to Know” (March 1, 2026)

Willkie Farr & Gallagher, Anthropic Supply Chain Risk Designation Analysis (2026)

Fortune, “OpenAI Sweeps in to Snag Pentagon Contract” (February 28, 2026)

Axios, “Trump Moves to Blacklist Anthropic’s Claude” (February 27, 2026)

Axios, “Pentagon Threatens to Label Anthropic a Supply Chain Risk” (February 16, 2026)

CBS News, “Hegseth Declares Anthropic a Supply Chain Risk” (February 27, 2026)

CNN, “Trump Administration Orders Military Contractors to Cease Business with Anthropic” (February 27, 2026)

NPR, “OpenAI Announces Pentagon Deal After Trump Bans Anthropic” (February 27, 2026)

TechCrunch, “Tech Workers Urge DOD to Withdraw Anthropic Label” (March 2, 2026)

OpenAI, “Our Agreement with the Department of War” (February 28, 2026)

MIT Technology Review, “OpenAI’s Compromise with the Pentagon Is What Anthropic Feared” (March 2, 2026)

CNBC, “OpenAI’s Altman Admits Defense Deal Looked Opportunistic” (March 3, 2026)

Transformer News, “OpenAI’s Pentagon Red Lines Are a Mirage” (March 2, 2026)

Stanford RegLab/HAI, “AI on Trial: Legal Models Hallucinate in 1 out of 6 Queries” (2024)

NPR, “Trump Has Used Government Powers to Target More Than 100 Perceived Enemies” (April 29, 2025)

The Hill, “DOJ Drops Defense of Trump Orders Targeting Law Firms” (March 3, 2026)

CBS News, “Judge Finds Trump Executive Order Punishing Susman Godfrey Unconstitutional” (June 27, 2025)

NBC News, “Trump Administration Drops Suits Against Law Firms After Judges Find Orders Unconstitutional” (March 3, 2026)

NPR, “Judge Blocks Trump Executive Order Against Susman Godfrey Law Firm” (June 27, 2025)

First Amendment Encyclopedia, “Trump’s Executive Orders Against Law Firms” (2025)

PBS NewsHour, “Judge Blocks Trump Executive Order Targeting Perkins Coie” (May 2, 2025)

Kenneth Payne, “AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises,” arXiv preprint, King’s College London (February 2026)

Brookings Institution, “Understanding the Errors Introduced by Military AI Applications” (November 2022)

International Committee of the Red Cross, “The Risks and Inefficacies of AI Systems in Military Targeting Support” (September 2024)

Human Rights Watch, “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making” (April 2025)

CNN, “Analysis: 24 Former Trump Allies and Aides Who Turned Against Him” (October 2023)

PBS NewsHour, “Trump Orders Federal Agencies to Stop Using Anthropic Tech” (February 27, 2026)

Martin Luther King Jr., Speech (1967), collected in The Autobiography of Martin Luther King Jr.

10 U.S.C. §§ 801–946a, Uniform Code of Military Justice

Decrypt/Yahoo Finance, “OpenAI Claims Safety ‘Red Lines’ in Pentagon Deal—But Users Aren’t Buying It” (March 2, 2026)

Techdirt, “OpenAI’s ‘Red Lines’ Are Written In The NSA’s Dictionary” (March 2, 2026)

Winbuzzer, “OpenAI Revises Pentagon Deal to Ban Domestic Surveillance” (March 3, 2026)

Above the Law, “DOJ Drops Defense Of Biglaw Executive Orders, Leaving Capitulating Firms Holding $940 Million Bag” (March 3, 2026)

UCMJ Article 92 (Failure to Obey Order or Regulation)

UCMJ Article 133 (Conduct Unbecoming an Officer and a Gentleman)

UCMJ Article 134 (General Article: Conduct Prejudicial to Good Order and Discipline)

Manual for Courts-Martial, United States (2024 ed.), Part IV, ¶63

Military.com, “Hegseth’s Move Against Sen. Mark Kelly’s Retirement Rank Raises Broader Stakes” (January 6, 2026)

Audacy/Connecting Vets, “Military Law Experts Weigh In on Hegseth Censuring Sen. Mark Kelly” (January 22, 2026)

United States v. Amazaki, 67 M.J. 666 (2009) (Article 133 standard)

United States v. Vaughan, 58 M.J. 29 (C.A.A.F. 2003) (due process notice requirement)

Prior Blog: “I Was Inside EMC When Hackers Stole the Keys to 40 Million Doors” (Morris Legal Technology Blog)

Prior Blog: “The Backdoor to Your Client’s Inbox” (Morris Legal Technology Blog)

Prior Blog: “AI Won’t Take Your Job. The Attorney Who Uses It Better Will.” Parts 1–2 (Morris Legal Technology Blog)

Prior Blog: “When Attorneys Stop Checking AI’s Work” (Morris Legal Technology Blog)

Prior Blog: “17 Subprocessors Deep” (Morris Legal Technology Blog)

Prior Blog: “The Heppner Problem: When AI Destroys Attorney-Client Privilege” (Morris Legal Technology Blog)
