
The U.S. military's use of Anthropic's AI model Claude is at the center of a conflict between the Pentagon and the company. Anthropic, which brands itself as focused on AI safety, has drawn red lines against using Claude for domestic surveillance and autonomous weapons, creating tension with the Defense Department, particularly after Claude was reportedly used in a U.S. military operation in Venezuela. The Pentagon, under Secretary of Defense Pete Hegseth, is considering labeling Anthropic a supply chain risk, citing concerns about the company's "woke-ism" and its unwillingness to agree to all lawful use cases. But as the Pentagon reviews the relationship, cutting ties could prove counterproductive: Claude is currently the only AI model approved for classified scenarios.