
OpenAI has secured a landmark agreement with the United States Department of Defense to deploy its artificial intelligence models within classified military systems.
The agreement comes at a time of shifting dynamics between artificial intelligence companies and the Pentagon. Until recently, Anthropic’s Claude system had been the only frontier AI model authorised to operate within classified military systems.
However, the Defence Department reportedly set a deadline for Anthropic to loosen certain internal guardrails and permit its systems to be used for “all lawful use”, warning that failure to comply could risk a $200 million contract and result in the company being labelled a “supply chain risk”.
Anthropic has maintained that its reservations regarding autonomous weapons and mass surveillance arise from technological limitations and gaps in current legal frameworks. Meanwhile, a Pentagon official confirmed that Elon Musk’s Grok, developed by xAI, has now been cleared for classified use, with other firms close to finalising similar arrangements.
In this evolving environment, OpenAI has urged the Defence Department to provide equal contractual terms to all AI providers. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements," Sam Altman wrote.
Against this backdrop, OpenAI formalised its agreement with the Defence Department to deploy its AI models within classified military cloud networks, a major step in deepening ties between Silicon Valley and the Pentagon.
Chief Executive Officer Sam Altman announced the development on X, stating, "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." He added that the Pentagon had shown "a deep respect for safety and a desire to partner to achieve the best possible outcome".
OpenAI confirmed that its models would operate solely on secure cloud infrastructure and that field deployment engineers would be assigned to oversee implementation and ensure compliance.
Altman emphasised that OpenAI’s foundational safety principles are written into the agreement. These include a prohibition on domestic mass surveillance and the requirement that humans retain responsibility in the use of force, including within autonomous weapons systems.
"The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman said. "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted."
An OpenAI spokesperson previously confirmed that the company maintains the same “red lines” as Anthropic when engaging in military work, particularly opposition to domestic surveillance programmes and fully autonomous weapons use.
The agreement marks a pivotal moment in AI’s integration into national security. While strengthening ties between technology firms and the military, it also underscores the importance of strict safety commitments, legal compliance and responsible deployment. The evolving defence-AI partnership will likely shape future regulatory and ethical standards globally.
Published on: Mar 2, 2026, 1:40 PM IST

Team Angel One
