The central issue is the conflict between Anthropic and the Department of War (DoW) over the use of AI, specifically Claude, in military applications. Anthropic refuses to remove safeguards against mass domestic surveillance and fully autonomous weapons, leading the DoW to threaten to label the company a supply chain risk. This stance is supported by OpenAI, which maintains similar red lines. The Pentagon's position is seen as contradictory and potentially damaging to future partnerships with AI companies, with some critics arguing it mirrors tactics used by China. Despite the conflict, there are calls for continued dialogue and a resolution that allows for the safe and reliable use of AI in national security.