The United States National Security Agency is reportedly using Anthropic’s Mythos Preview artificial intelligence tool despite a formal supply-chain risk designation the Pentagon has issued against the company, according to a report from Axios. The development highlights ongoing tension between government security policy frameworks and the rapid adoption of advanced artificial intelligence systems in defense-related environments. The report indicates that Mythos Preview is being used more widely within the department, although the parties involved have not officially confirmed this.
According to Axios, the Mythos Preview model is being deployed in operational contexts within the US defense establishment, even after the Pentagon raised formal concerns through a supply-chain risk classification. Reuters noted that it was unable to independently verify the report at the time of publication. Anthropic, the National Security Agency, and the Department of Defense did not immediately respond to requests for comment outside regular business hours. Because the National Security Agency operates under the umbrella of the Defense Department, its reported use of the model is particularly notable given the existing designation.
The reported use of Mythos comes amid broader discussions between US government officials and Anthropic leadership. Earlier in the year, US President Donald Trump’s administration held discussions with Anthropic’s chief executive officer regarding potential cooperation, marking the first known engagement of its kind following earlier disagreements between the Pentagon and the artificial intelligence company. Those disputes centered on how Anthropic’s advanced models should be deployed and governed within sensitive government environments, particularly where national security considerations are involved.
Concerns surrounding Mythos have grown because of its advanced technical capabilities. Anthropic has described the model as its most capable system yet for coding tasks and agentic functions, meaning it can operate with a degree of autonomy in completing complex workflows. Experts have warned that these capabilities could significantly enhance the ability to identify vulnerabilities in software systems and potentially to develop methods for exploiting them. The model’s capacity for high-level code generation and autonomous task execution has been viewed as a double-edged development, offering both productivity benefits and heightened cybersecurity risk.
Security analysts have suggested that such capabilities could change the threat landscape by enabling faster discovery of weaknesses in digital infrastructure, with potential implications for critical systems that depend on software security and continuous monitoring. While advanced AI systems can also be used to strengthen defensive cybersecurity measures, the same tools may be leveraged to identify previously unknown vulnerabilities at scale. This dual-use nature has contributed to growing regulatory attention and internal government debate about how such systems should be integrated into sensitive environments.
The reported divergence between formal Pentagon supply-chain restrictions and operational use within the National Security Agency underscores the difficulty of governing rapidly evolving artificial intelligence technologies. As AI systems like Mythos continue to advance in capability, questions of oversight, deployment authority, and risk management frameworks are becoming increasingly central to policy discussions. The situation reflects broader uncertainty across government and industry about how to balance innovation in artificial intelligence with the need to maintain strict security standards in critical national infrastructure environments.