The United States is positioning itself as “the world’s undisputed [artificial intelligence-enabled] fighting force.” This bold declaration comes from the Department of War, which unveiled a new strategy earlier this month to accelerate the deployment of AI for military purposes.
The “AI Acceleration Strategy” aims to establish the US military as the frontrunner in AI warfighting. Critics, however, argue that the strategy’s hype outpaces the actual capabilities of today’s AI systems. They have labeled this approach “AI peacocking”: a public display of AI adoption that masks the reality of unreliable systems.
Details of the US AI Strategy
While several militaries worldwide, including those of China and Israel, are integrating AI into their operations, the US Department of War’s AI-first mantra sets its strategy apart. The goal is to enhance the military’s lethality and efficiency, with AI being seen as the primary means to achieve this.
The strategy encourages experimentation with AI models, aims to dismantle “bureaucratic barriers” to AI implementation across the military, supports investment in AI infrastructure, and pursues major AI-powered military projects. One such project aims to convert intelligence into weapons “in hours, not years,” a concept that raises significant ethical concerns.
Reports indicate increased civilian casualties in Gaza due to the Israeli military’s use of AI-enabled decision support systems, which rapidly transform intelligence into targeting information.
Another project seeks to place American AI models into the hands of three million civilian and military personnel at all classification levels. The rationale behind granting such widespread access to military AI systems remains unclear, as do the potential impacts of disseminating military capabilities among civilians.
The Narrative vs. The Reality
In July 2025, an MIT study found that 95% of organizations reported zero return on their generative AI investments. The study attributed this to the technical limitations of tools like ChatGPT and Copilot, which struggle to retain feedback, adapt to new contexts, or improve over time.
Generative AI’s shortcomings, often obscured by marketing hype, highlight the gap between AI’s potential and its current capabilities.
AI encompasses a wide range of technologies, from large language models to computer vision systems, each with distinct applications and success rates. Despite these differences, AI applications are often bundled together under a single, globally successful marketing agenda, one reminiscent of the dotcom bubble of the early 2000s, when marketing itself was treated as a viable business model.
This marketing-driven approach appears to influence how the US positions itself in the current geopolitical climate.
A Guide to ‘AI Peacocking’
The Department of War’s AI-first strategy resembles a guide to “AI peacocking” rather than a genuine plan to implement technology effectively. AI is presented as the solution to every problem, including those that do not exist. The marketing surrounding AI has instilled a fabricated fear of technological inferiority, which the new AI strategy exploits by suggesting a technically advanced military posture.
However, the reality is that these technological capabilities often fall short of their claimed effectiveness. In military settings, such limitations could lead to devastating consequences, including increased civilian casualties.
The US is investing heavily in a marketing-led model for deploying AI across its military, one that potentially lacks technical rigor and integrity. This approach could expose vulnerabilities within the Department of War, particularly when these fragile systems fail, as they are most likely to do in moments of military crisis.
As the US moves forward with its AI strategy, the balance between technological advancement and ethical responsibility remains a critical consideration. The implications of this strategy will continue to unfold, shaping the future of military operations and international relations.