Why having “humans in the loop” in an AI war is an illusion
AI's role in warfare raises urgent ethical concerns

The Pentagon's legal battle with Anthropic underscores the military's growing reliance on artificial intelligence, particularly in the conflict with Iran. AI now plays an active role in operations, generating targets and controlling autonomous drones, which raises critical questions about the effectiveness of human oversight. The assumption that human operators can interpret an AI system's intentions is fundamentally flawed: these systems operate as opaque "black boxes," creating the potential for miscalculation and ethical dilemmas in combat.
Key Takeaways
1. The Pentagon's current AI guidelines assume humans can interpret AI intentions, which is flawed.
2. AI systems are often opaque, making it difficult for human operators to understand their decision-making processes.
3. Investment in understanding AI systems' intentions remains significantly lower than investment in developing more capable models.