Military AI is characterized by epistemic opacity. Unlike civilian AI, which is subject to public scrutiny and litigation when it fails, military systems are shielded by classification and the state secrets privilege. This lack of transparency prevents external validation and makes it nearly impossible to correct for human biases or technical errors.
Writing in Lawfare, Sullivan argues that because military AI lacks the feedback mechanisms of the market and the law, governance cannot simply mirror civilian models. Instead, it requires “upstream” design constraints and new oversight frameworks that can ensure accountability even within a domain defined by strategic secrecy.
Military institutions operate within incentive systems and governance environments that weaken—or in some cases invert—the feedback mechanisms that ordinarily restrain technological deployment. Strategic competition rewards early integration under uncertainty, costs of experimentation are frequently externalized, and operational secrecy limits opportunities for external scrutiny. These dynamics suggest that military AI is better understood not as a normal technology, but as an abnormal one. Drawing on prominent use cases, this article examines each of these structural features to illustrate why military AI requires a distinct approach to governance.
From: “AI may be a ‘normal’ technology in the boardroom. In the military, where costs are externalized and secrecy is the default, it’s anything but.”
https://www.lawfaremedia.org/article/military-ai-as--abnormal--technology