Summary: The article discusses the ethical complexities of using AI in warfare, highlighting the absence of clear rules and accountability when AI fails in conflict situations. The US Department of Defense's move to adopt generative AI is questioned given the glitches, security vulnerabilities, biases, and unpredictability of current AI tools, and it remains unclear who would bear blame and accountability when accidents occur. The article argues that mundane applications of AI are a better fit than warfare because they carry lower risk while still delivering real value: "boring" AI, though less exciting, is more reliable and morally straightforward.