AI Security vs. National Security: The Pentagon's Clash with Anthropic and OpenAI
By Anvi Anand ’27, Oceana Li ’27, Kate Choi ’28, Diya Poluru ’29;
Tech Column; The Lawrenceville School, NJ
In March 2026, a dispute over the use of artificial intelligence (AI) in military environments arose between the San Francisco-based AI research company Anthropic and the U.S. government. Tensions between the company’s policies and the military’s use of its products ran high, especially after the Venezuela incident, in which AI was used in an operation to capture Venezuela’s leader, Nicolás Maduro. Amid growing concern in Washington that advanced AI will reshape intelligence analysis, military logistics, and cyber operations, the Pentagon (the headquarters of the U.S. Department of Defense) has pushed to integrate commercial AI systems rather than relying solely on traditional defense contractors. This push has intensified competition among top AI labs, as companies race to become the primary supplier of models to government agencies. Amid this competition, Anthropic secured a major contract with the Pentagon, reportedly worth around $200 million, which made its model one of the first commercial large language models deployed in military environments.

In January 2026, the U.S. launched an operation to capture Venezuela’s leader, Nicolás Maduro. It was later revealed that the Pentagon used Claude during the operation, creating a conflict between Anthropic’s safety policies and the military context. The operation resulted in dozens of deaths among Venezuelan forces and the capture of Maduro. The casualties raised questions about whether the deployment violated the company’s policies, which state that it “will not accept terms allowing mass domestic surveillance or fully autonomous weapons.” Anthropic said it could not comment on whether Claude was used in the operation, but emphasized that all uses of Claude must comply with its usage policies. This lack of transparency prompted Pentagon officials to question whether Anthropic was a reliable partner. Defending itself against public criticism, the Pentagon argued that military technology providers should not impose restrictions that could limit battlefield operations.
For years, Anthropic has remained loyal to its mission of building AI that is “helpful, honest, and harmless.” In light of recent events, the company believes that granting the U.S. government unrestricted access to its models would violate this core value, among others. CEO Dario Amodei stated that the Pentagon’s proposal “can undermine, rather than defend, democratic values,” potentially leading to mass domestic surveillance and fully autonomous weapons. Mass domestic surveillance through Claude would give the model access to residents’ sensitive, private information. According to Amodei, such surveillance is currently legal only because the law has not yet caught up with AI’s emerging capabilities. A sufficiently capable model could piece together small fragments of information about a citizen, such as their location and online interactions, and stitch together a picture of someone’s whole life at massive scale, thus enabling mass surveillance.
AI has grown highly advanced and functional; however, limits remain on its ability to operate on its own. Amodei states that “frontier AI systems are simply not reliable enough to power fully autonomous weapons” without proper oversight. The Pentagon, however, has emphasized a desire for full operational control and for integrating Claude into classified systems. In other words, it does not want a private company supervising and dictating how Claude is used in national defense and sensitive missions.
Some believe Anthropic made this decision as a play for popularity and public appeal: its stock rose the day after the incident, and Claude became the most downloaded free app on the Google and Apple app stores after the news broke. Although increased popularity may have been a beneficial side effect, rejecting the Pentagon’s deal solely for public recognition would have been a major risk for Anthropic, given the potential financial loss and the unpredictability of the public’s response. Furthermore, the company’s public statements, including those from its CEO, provide little evidence that this was the actual motive. It can thus be reasonably inferred that Anthropic’s response to the Pentagon’s proposal reflects the company’s prioritization of ethics and human security over profit, as it is unwilling to supply its AI models for privacy abuses and potential harm without justified oversight.
Anthropic’s refusal prompted responses from other major tech companies, most notably OpenAI. Mere hours after the Trump administration designated Anthropic a “supply-chain risk” and directed federal agencies to stop using its technology, OpenAI stepped in and offered its resources to the Pentagon. The two struck a deal allowing OpenAI’s models to be used in military systems, but according to a statement on X from CEO Sam Altman, “Two of [OpenAI’s] most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement.” The statement suggests that the Pentagon agreed to certain restrictions, though the specific terms remain unclear. The two companies appear to follow different strategies and values: one emphasizes the responsible use of AI and its safety concerns, the other deployment and large-scale partnerships.

Beyond OpenAI’s response, other major tech companies have also weighed in. On Thursday, March 5, Microsoft announced that Anthropic’s technology would remain available to customers across Microsoft platforms, excluding Department of War work. The next day, Google followed, stating it would keep offering Anthropic’s services to clients outside of defense work. Similarly, Amazon confirmed that AWS customers can continue using Claude for non-defense work. The situation is fueling debate over AI in the military and AI safety, with the industry under closer scrutiny than ever. Balancing ethics, business, and national security, the issue clearly runs far deeper than two AI companies and the government.
OpenAI’s deal with the Pentagon highlights the broader adoption of AI across federal and local institutions. From 2023 to 2024, the number of AI use cases across government agencies more than doubled, from 710 to 1,757. This trend will only accelerate as agencies invest further in AI tools to improve decision-making and efficiency. Whatever the scale of adoption, innovation and integration will carry complex moral and technical implications for society. At the federal level, public health, national security, and environmental protection are among the many sectors that will face both opportunities and challenges as AI systems become increasingly embedded throughout government.
The U.S. military and security agencies face the most serious ethical and legal concerns. The U.S. Department of Homeland Security (DHS) currently uses machine learning for pattern detection through drones and facial recognition technology (FRT), while the U.S. Department of Defense (DOD) has incorporated AI into autonomous weapons and designed algorithms to analyze data from satellite imagery and surveillance footage. These applications represent real technological progress, as AI can sharply increase the efficiency with which governments strategize and respond to emergencies.
Nevertheless, there are moral drawbacks to the unregulated development and use of these tools. Given its historical challenges, FRT will continue to create privacy risks, especially if adopted for mass surveillance. Bias in training datasets will also inevitably surface in AI algorithms, agents, and autonomous weapons, in addition to FRT. OpenAI CEO Sam Altman acknowledged the importance of preventing such hazards to democracy, prohibiting OpenAI’s tools from being used for mass surveillance or autonomous weapons.
As AI becomes increasingly capable and the U.S. government deepens its collaboration with technology companies, regulations must prioritize strict transparency and responsible AI use. Such measures would allow partnerships like OpenAI’s deal with the Pentagon to promote sustainable AI development across all sectors, ensuring that innovation benefits society without exacerbating social injustice or violating the right to privacy.
References
Butts, D. (2026, March 3). OpenAI’s Altman admits defense deal “looked opportunistic and sloppy” amid backlash. CNBC. https://www.cnbc.com/2026/03/03/openai-sam-altman-pentagon-deal-amended-surveillance-limits.html
Elias, J. (2026, March 6). Google joins Microsoft in telling users Anthropic is still available outside defense projects. CNBC. https://www.cnbc.com/2026/03/06/google-says-anthropic-remains-available-outside-of-defense-projects.html
Gold, H. (2026, February 28). OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic. CNN Business. https://www.cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-systems
Kreps, S., Wirtschafter, V., MacCarthy, M., & Tanner, B. (2025, October 29). How can government use AI systems better? Brookings. https://www.brookings.edu/articles/for-ai-to-make-government-work-better-reduce-risk-and-increase-transparency/
Novet, J. (2026, March 6). Microsoft says Anthropic’s products remain available to customers after Pentagon blacklist. CNBC. https://www.cnbc.com/2026/03/05/microsoft-says-anthropics-products-can-remain-available-to-customers-after-security-risk-designation.html
Palmer, A. (2026, March 7). Amazon says Anthropic’s Claude still OK for AWS customers to use outside defense work. CNBC. https://www.cnbc.com/2026/03/06/amazon-aws-anthropic-claude-pentagon-blacklist.html
Yahoo! (n.d.). Tech stocks today: Anthropic says it will fight Pentagon’s “supply chain risk” label, Nvidia stops H200 chip production.