Anthropic has become, at least in the short term, a cautionary tale for AI companies operating in the government market: the company that held to its principles and lost its contracts. OpenAI, meanwhile, has expanded its government footprint dramatically, announcing a Pentagon deal that it claims preserves those very same principles. The contrast defines the current state of the AI industry’s relationship with federal power.
Anthropic’s position going into its Pentagon negotiations was clear and, by the standards of responsible AI development, moderate. The company offered broad support for lawful military uses of its Claude AI system while excluding two categories it considered ethically unacceptable — autonomous weapons and mass surveillance. These were conditions, not demands; Anthropic was not asking the government to prove it needed these capabilities, only agreeing not to provide them.
The Pentagon’s refusal to accept any conditions, and the Trump administration’s subsequent decision to ban Anthropic from all federal use, reframed the ethics debate in starkly political terms. By characterizing Anthropic’s principled stance as ideological obstruction, the administration positioned itself as the defender of national security against corporate interference — a framing that proved politically powerful even if it was substantively questionable.
OpenAI CEO Sam Altman navigated the same terrain with different results. His Pentagon deal, announced the same night Anthropic’s expulsion became final, came with assurances of the same ethical protections Anthropic had sought. He simultaneously closed a $110 billion funding round, underscoring OpenAI’s ability to thrive commercially even while engaging with the government on terms that include meaningful ethical conditions.
Whether OpenAI’s experience rehabilitates Anthropic’s stance or further entrenches the cautionary-tale framing of its situation depends on how the deals play out. Hundreds of workers across the AI industry signed letters supporting Anthropic’s stand, and the company itself shows no sign of reconsidering. If OpenAI’s ethical protections hold, Anthropic may eventually be vindicated. If they erode, the cautionary tale will have a different protagonist than the one currently imagined.
