In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”
Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “nobody can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny.
Still, Kirsten argued, this is a situation that should “give any startup pause.”
Read a preview of our conversation, edited for length and clarity, below.
Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?
Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think of all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and especially with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into over the last week is like, these are companies that make products that a ton of people use, and also, more importantly, [that] nobody can shut up about.
So there’s just such a spotlight on them, and that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government, and specifically with any of the war-fighting components of the federal government, don’t necessarily have to deal with.
The one caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands; there’s an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever.
I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the kind of shared understanding of what that impact would be.
Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting think pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of these things, because Anthropic and OpenAI are not actually that different in a lot of ways or in the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way.
And then on top of that, there also just seems to be a personality layer, where the CEO of Anthropic and Emil Michael, who a lot of TechCrunch readers might remember from his Uber days and is now [chief technology officer for the Department of Defense]... Apparently, they just really don’t like each other. Reportedly.
Sean: Yes, there’s a very big “the girls are fighting” element here that we should not overlook.
Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say they’re still very much being used by the military. They’re considered a critical technology, but OpenAI has sort of stepped in, and this is evolving and will likely change by the time this episode comes out.
The blowback has been interesting for OpenAI, where we’ve seen a lot of uninstalls of ChatGPT; I believe they surged 295% after OpenAI locked in the deal with the Department of Defense.
To me, all of this is noise compared to the really important and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract. And that’s really critical and should give any startup pause, because the political machine that’s operating right now, particularly with the DoD, looks different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change these terms is a problem.


