
The NSA Is Using the AI the Pentagon Banned, and Washington Is at War With Itself Over It

The NSA is using Anthropic's most powerful model, Mythos Preview, despite top officials at the Department of Defense — which oversees the NSA — insisting the company is a "supply chain risk," according to two sources who spoke to Axios. One source said the NSA was among the unnamed agencies Anthropic had given access to the model; another said Mythos was being used more widely within the department. Anthropic, the NSA, and the Department of Defense all declined to comment.

To understand why this contradiction exists, you need to understand what Mythos actually does — and why every intelligence and security agency in the US wants access to it regardless of what their bosses are saying in court. Anthropic restricted access to Mythos to around 40 organizations, contending that its offensive cyber capabilities were too dangerous for a wider release, and it publicly announced only 12 of them. Organizations with access are using the model predominantly to scan their own environments for exploitable security vulnerabilities. In other words, Mythos is the most capable automated vulnerability scanner ever built, and any agency responsible for defending critical infrastructure that lacks access to it is defending blind against an adversary who might eventually get access to something similar.

That calculation is driving the bureaucratic chaos now playing out in Washington. Civilian agencies like the Departments of Energy and Treasury are responsible for safeguarding critical sectors such as the electric grid and the financial system. Access to Mythos would help them determine where companies and local governments may be vulnerable to cyberattacks and how to help them prepare. The Pentagon's position — that Anthropic is a supply chain risk because it won't commit to allowing its models to be used for "all lawful purposes" without restriction — simply doesn't matter to an agency whose nightmare scenario is the Chinese government taking down the US power grid.

One administration official summarized the split with uncomfortable clarity: "All the intel agencies use Anthropic. Every agency except War wants to. That's because Anthropic doesn't want to kill people and War's position is 'don't tell us what the f*** to do.' But if you're the Department of Energy, you don't give a f*** about that. You're worried about the Chinese attacking the energy grid. So you want Anthropic." It's the kind of quote that accidentally explains everything about how this situation developed. The Pentagon's feud with Anthropic is about control and doctrine. Everyone else's relationship with Anthropic is about capability. Those are different conversations, and they are now producing different outcomes inside the same government.

The backstory is worth laying out clearly. Anthropic CEO Dario Amodei refused to allow the company's models to be used without restrictions — specifically, Anthropic will not permit its AI to be used for mass surveillance or to develop fully autonomous weapons. The Pentagon says those definitions are nebulous and that it needs assurances it can use AI systems for "all lawful purposes." The Defense Department moved in February to cut off Anthropic and force its vendors to follow suit; Anthropic is suing the Pentagon over the blacklisting, and the case is ongoing. Meanwhile, the military is broadening its use of Anthropic's tools while simultaneously arguing in court that using those tools threatens US national security.

At the same time, the White House appears to be trying to thread a needle the Pentagon has made very hard to thread. Anthropic CEO Dario Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent on Friday to discuss the use of Mythos within government, along with Anthropic's wider plans and security practices. Both sides described the meeting as productive, and sources said next steps were expected to focus on how departments other than the Pentagon engage with the model.

Some parts of the US intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of the Department of Homeland Security), are testing Mythos; Treasury and others want it. The NSA's UK counterparts already have access to the model through that country's AI Security Institute. The US is thus in the peculiar position of having its own spy agencies use a model that the department those agencies report to has formally declared unsafe for use.

A second administration official told Axios the government "has a responsibility to evaluate every model to see where the frontier of tech is" — but accused Anthropic of using "fear tactics" by issuing warnings about how Mythos could supercharge hacking. "They're using this Mythos cyber weapon to find friendly ears in the government," the official said. "They're succeeding." That last line is perhaps the most honest admission in the entire saga. Anthropic's strategy of limited, controlled release — sharing Mythos with select organizations and letting them discover its capabilities firsthand — has functioned as the most effective marketing campaign in the company's history, even if that wasn't the intent.

One source close to the negotiations put the strategic stakes plainly: "It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China." That framing — capability deprivation as a national security risk in its own right — is precisely the argument that is winning inside the administration, one agency at a time.

For the startup world and the broader AI industry, this story carries lessons that extend well past the specifics of the Anthropic-Pentagon dispute. The most capable AI is now so clearly valuable for defensive security purposes that even the government agencies formally opposed to its provider are quietly using it anyway. That is the logical endpoint of building something genuinely transformative. It also suggests that Anthropic's refusal to compromise on its usage terms — the position that triggered the Pentagon's blacklist in the first place — may turn out to be a sustainable negotiating stance rather than a fatal error. When your product is indispensable enough, the institutions that say they won't use it eventually find a way to use it anyway.

One Defense official told Axios at the height of the Pentagon-Anthropic feud that the only reason the talks were ongoing was simple: "These guys are that good." Washington is at war with itself over an AI company. The AI company appears to be winning.
