OpenAI is officially stepping into the world of defence. The company behind ChatGPT has landed a $200 million contract with the U.S. Department of Defense (DoD) to pilot frontier AI solutions over the next year. It’s a significant pivot for the San Francisco-based AI lab—one that signals how artificial intelligence is becoming deeply embedded in national security infrastructure.
The contract is part of a one-year pilot project aimed at applying OpenAI’s most advanced models across “warfighting and enterprise domains.” That includes applications in areas like cybersecurity, data analysis, logistics, and even military healthcare, while remaining compliant with OpenAI’s strict usage policies.
“This agreement allows OpenAI to bring cutting-edge AI to the U.S. government in a way that is both safe and aligned with our mission,” a company spokesperson said in a statement.
“OpenAI won a $200M U.S. DoD contract. The one-year pilot will see OpenAI deliver frontier AI to address national security challenges in ‘warfighting and enterprise domains.’ This marks the lab’s foray into defense tech, while sticking to its usage policies.” — Rowan Cheung (@rowancheung), June 17, 2025
From Chatbots to Command Centres
The deal represents OpenAI’s first official entry into the U.S. defence sector under a new initiative called “OpenAI for Government.” According to sources familiar with the pilot, the Pentagon will evaluate how OpenAI’s language models can enhance a variety of operations, from protecting against cyber threats to supporting decision-making processes in complex scenarios.
Critically, OpenAI has emphasized that none of the systems involved will be used to develop weapons, engage in kinetic targeting, or carry out offensive military actions. The company’s updated usage policy, which was revised in 2024, permits government work only if it doesn’t cause harm or violate international norms.
A Carefully Calculated Policy Shift
OpenAI’s involvement with the military might have seemed unthinkable just two years ago. In 2023, the organization explicitly banned military applications of its technology. But as the capabilities of frontier models evolved—and as government interest in generative AI surged—OpenAI adjusted its position.
The company now allows military use with guardrails, stating that its AI tools can be deployed for defensive and non-lethal purposes, such as securing networks, improving logistics, and aiding in administrative tasks.
This new contract puts that policy to the test.
Not Just OpenAI—A Bigger Movement in Defence Tech
OpenAI isn’t the only tech company partnering with the U.S. government. Rivals like Anthropic, Palantir, and Anduril have also secured Pentagon deals as the DoD ramps up its investment in AI-driven technologies.
But OpenAI’s entrance into the defence space is significant because of its mainstream reach and influence. With more than 100 million weekly users on ChatGPT and billions of API calls per month, it’s one of the most widely adopted AI platforms globally.
The company is expected to integrate its work with existing government clients—including NASA’s Jet Propulsion Laboratory, the Air Force Research Laboratory, and Los Alamos National Laboratory—under its new “OpenAI for Government” banner.
What the Pilot Will Do
While details remain limited, sources familiar with the program say the pilot will focus on:
- Automating data-heavy tasks across multiple defence agencies
- Enhancing cybersecurity infrastructure using AI-driven pattern detection
- Supporting health-related AI applications for veterans and active-duty service members
- Streamlining internal communications and logistics
OpenAI will also work closely with Pentagon officials to assess the impact, scalability, and safety of its models in high-stakes environments.
U.S. Defence Plan
The pilot is set to run through mid-2026. If deemed successful, it could lead to a larger, multi-year contract and potentially make OpenAI a long-term partner in U.S. national defence.
The deal also sets a precedent. With OpenAI now officially in the defence arena, questions are mounting around the role of private AI firms in military policy, the risks of overreach, and how ethical boundaries will be upheld.
Still, for now, OpenAI appears to be threading the needle—offering its powerful tools to the government, while maintaining the kind of ethical framework that has long defined its brand.
Conclusion
This isn’t just another contract. It’s a defining moment for OpenAI—and for the future of AI in national security. As governments race to integrate frontier technologies, partnerships like these will shape how artificial intelligence is used in defence, diplomacy, and beyond.
The stakes are high. The timeline is short. And OpenAI, once wary of military engagement, is now front and center in the conversation.