A judge questioned whether the Pentagon broke the law by targeting Anthropic after the company pushed back on how its AI was being used in military operations.

San Francisco: A federal judge raised concerns that the Pentagon may have broken the law when it labeled AI company Anthropic a security risk. Judge Rita Lin said the move looked like punishment for the company speaking out against how its AI tools were being used by the military.
Lin said the situation appears to violate the First Amendment, which protects free speech, because the Pentagon acted only after Anthropic voiced concerns about its AI being used in military operations. The judge said her job was to decide whether Pentagon officials followed the law, not whether Anthropic should be allowed to work with the military.
The Pentagon, which now calls itself the Department of War, says officials followed proper procedures in determining that Anthropic’s AI tools could no longer be trusted in critical military situations. But the judge found this explanation troubling, saying the restrictions placed on Anthropic do not clearly match the stated national security concerns.
The defense secretary posted on social media that military contractors could not work with Anthropic at all, a claim department lawyers later admitted went beyond what the law allows. The judge questioned why such a severe measure was taken against Anthropic, noting that restrictions of this kind are usually reserved for foreign adversaries or terrorist groups.
The Pentagon plans to switch to other AI providers, such as Google and OpenAI, over the next few months, and says it has safeguards in place to prevent Anthropic from interfering during the transition. A separate legal case on the issue is before a federal appeals court in Washington, D.C.