In the first half of 2023, a Pentagon unit focused on digital and artificial intelligence conducted a series of experiments with OpenAI models. The tests involved working with large language models (LLMs) such as GPT-4 through Microsoft Azure Government, a cloud service built specifically for U.S. government agencies and holding the required security certifications. A key detail: at the time of these trials, OpenAI's acceptable use policy explicitly prohibited activities related to "military and warfare." The company did not officially lift that ban until January 2024.
This incident calls into question the real effectiveness of self-regulation by leading AI companies and the transparency of their dealings with government bodies. OpenAI, which had positioned itself as a cautious and ethical developer, explicitly prohibited the use of its models for weapons development, harming people, or "military and warfare" applications. Its partnership with Microsoft, however, which actively sells cloud and AI services to the U.S. government and military, created a legal and technical workaround: the Pentagon gained access to powerful AI models through the infrastructure of a major contractor, possibly without formally violating the terms of any direct end-user license agreement.
Technically, access was provided via the OpenAI API deployed in the isolated Microsoft Azure Government cloud, meaning Pentagon specialists could work with the models interactively and adapt them to specific tasks. Reported potential testing areas include natural language processing for analyzing large document collections, summarizing intelligence reports, and assisting with code for cybersecurity tasks. Microsoft, as the integrator, provided the platform and likely technical support, while OpenAI, according to its earlier statements, should not have been directly involved in such military projects.
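To make the access pattern concrete, here is a minimal sketch of what a document-summarization call against a GPT-4 deployment in Azure OpenAI might look like. This is an illustration, not a reconstruction of the Pentagon's actual setup: the endpoint URL, deployment name "gpt4-docs", and the summarization prompt are all hypothetical, and the `*.azure.us` domain reflects the Azure Government naming convention rather than any reported detail.

```python
# Hypothetical sketch: calling a GPT-4 deployment hosted in Azure OpenAI.
# All names below (endpoint, deployment, file) are illustrative assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    # Azure Government resources live on the *.azure.us domain;
    # this particular URL is a made-up placeholder.
    azure_endpoint="https://example-resource.openai.azure.us",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    # In Azure OpenAI, "model" names the customer's own deployment,
    # not an OpenAI model ID; "gpt4-docs" is hypothetical.
    model="gpt4-docs",
    messages=[
        {"role": "system", "content": "Summarize the document concisely."},
        {"role": "user", "content": open("report.txt").read()},
    ],
)

print(response.choices[0].message.content)
```

The key point of this arrangement is that requests terminate at the partner's cloud endpoint, within the customer's own Azure resource, so the model vendor has no direct visibility into individual workloads: exactly the weakened chain of control discussed below.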
The companies' responses were restrained. An OpenAI representative confirmed that the usage policy changed in January 2024: the company now prohibits only weapons development and causing harm, while permitting work with some government agencies, including the military, in areas such as cybersecurity. Microsoft stated that its work with OpenAI and the U.S. government complies with all applicable laws and regulations, emphasizing that it continually consults with partners on policy matters. AI ethics experts and human rights advocates voiced serious concern, calling this a classic case of moving the goalposts: strict initial ethical prohibitions are gradually eroded under pressure from business interests and government demand.
For the industry, this is a signal that declared ethical principles in AI can prove flexible, especially when major government contracts and strategic partnerships with IT giants come into play. Users and developers who trusted OpenAI as a company with a "safe" approach may need to reconsider their expectations. The situation also exposes a complicated chain of responsibility: when a powerful AI model is delivered as a service through a partner, control over its end use is significantly weakened. This sets a precedent for other AI startups, which may face similar pressure from their investors and integration partners.
What happens next depends on the fallout from the disclosure and on how OpenAI applies its new policy. Will the company audit past projects delivered through Microsoft? How will it control the use of its models for military purposes by new clients? It also remains an open question whether OpenAI's leadership knew the Pentagon was testing its technology before the policy change. The case will inevitably strengthen calls for external, legislative regulation of military AI applications, since corporate self-regulation has proven vulnerable. Meanwhile, the AI arms race among major powers continues, and technologies from leading labs, initial bans notwithstanding, are increasingly being drawn into it.