At a US military base located about 50 miles from the Mexican border (exact location: classified), the defense contractor Anduril is testing a remarkable new use for a large language model. I attended one of the first demonstrations last year. From a sun-bleached airstrip, I watched as four jet aircraft, code-named Mustang, appeared on the horizon to the west and swept over a desolate landscape of rocks and brush. The prototypes, scaled down for the demonstration, fell into formation, their engines whirring as they approached.
The sun burned my eyes, so I turned to a nearby computer monitor under a dusty tarp. With a few keystrokes, a fifth plane appeared at the edge of the screen, its outline suspiciously like that of a Chinese J-20 stealth fighter. A young man named Colby, wearing a black baseball cap and sunglasses, gave the order to engage the computer-simulated bogey: “Mustang intercept.” That’s when the AI stepped in. A model similar to the one that powers ChatGPT interpreted the command, relayed it to the drones, and then responded in a dispassionate female voice: “Mustang closing.” Within a minute or so, the drones had converged on the target and destroyed it with virtual missiles and minimal fuss.
Anduril’s demonstration illustrates how eagerly the defense industry is experimenting with new forms of AI. The startup is developing a larger autonomous fighter for the US Air Force, called Fury, designed to fly alongside manned aircraft. Many of these systems are already autonomous, thanks to older AI technology; the new idea is to fold LLMs into the chain of command, relaying orders and surfacing useful information for pilots. Sergeant Chatbot, at your service.
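To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of loop the demonstration suggests: the language model acts only as a translator between a typed or spoken order and a structured tasking message, with hard-coded guardrails, while the drones’ existing autonomy does the actual flying. Every name and function below (call_llm, Tasking, the track IDs) is invented for illustration and does not reflect Anduril’s actual software.

```python
import json
from dataclasses import dataclass

ALLOWED_ACTIONS = {"intercept", "shadow", "return_to_base"}

@dataclass
class Tasking:
    action: str       # what the drone group should do
    group: str        # e.g. the "Mustang" drone group
    target_id: str    # a track ID from the shared sensor picture

def call_llm(command: str, track_ids: list[str]) -> str:
    """Stand-in for a real LLM call: turn a natural-language order into JSON.

    A production system would prompt a model to emit JSON matching the
    Tasking schema; here the demo command's expected output is hard-coded.
    """
    return json.dumps({"action": "intercept", "group": "mustang",
                       "target_id": track_ids[0]})

def parse_and_validate(raw: str, known_tracks: set[str]) -> Tasking:
    """Treat the model's output as advisory until it passes hard checks."""
    task = Tasking(**json.loads(raw))
    if task.action not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {task.action}")
    if task.target_id not in known_tracks:
        raise ValueError(f"unknown track: {task.target_id}")
    return task

if __name__ == "__main__":
    tracks = {"bogey-01"}                      # the simulated stealth fighter
    raw = call_llm("Mustang intercept", sorted(tracks))
    task = parse_and_validate(raw, tracks)
    # The validated tasking would be handed off to the drones' existing autonomy.
    print(f"{task.group} tasked to {task.action} {task.target_id}")
```

The point of the design, at least in this sketch, is that the model never touches the hardware directly: it only produces a structured message that deterministic software checks before anything flies anywhere.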
It’s a bit strange. But then, defense technology always is. We spend and spend, on good things and on a lot of shit. Here, the promise is efficiency: kill chains are complicated, and AI, in theory, streamlines them (a euphemism for making them more lethal). And whoever controls this technology, say America’s four-star strategists, will dominate the world. That mantra is why the United States is so keen to curb China’s access to cutting-edge AI, and why the Pentagon intends to spend more on it in the coming years. The plan is striking but not surprising. The war in Ukraine, with its ubiquitous, low-cost drones equipped with computer vision, has demonstrated the value of autonomy on the battlefield.
The generative AI boom, meanwhile, has only multiplied the interest. A 2024 Brookings report found that funding for federal contracts related to artificial intelligence grew 1,200 percent from August 2022 to August 2023, with the vast majority coming from the Department of Defense. And that was before President Trump’s return to office. His administration is pushing AI even harder: its trillion-dollar 2026 defense (or rather “war”) budget includes the first-ever line item dedicated to AI and autonomy, at $13.4 billion.
This means that AI companies themselves have a lot to gain from making big promises about what they can do in war. This year, Anthropic, Google, OpenAI, and xAI have each received AI-related military contracts worth up to $200 million. It’s a major shift from 2018, when Google pulled out of Project Maven, an effort to use AI to analyze aerial imagery. Emelia Probasco, who studies military uses of AI at Georgetown University, says Project Maven, now led by Palantir, has become one of the military’s most widely used AI tools in the form of the Maven Smart System. That makes sense, she says: large language models are well suited to intelligence work because they excel at analyzing large amounts of information, and to cyber operations because of their ability to write and analyze code. “The slightly scary ambition is for AI to be so smart that it can prevent war or just fight and win it,” says Probasco. “Like some sort of magical fairy dust.” For now, the models are still too unreliable, error-prone, and inscrutable to make decisions on the battlefield or to be given direct control of any hardware.
A key challenge for these companies, then, is how to deploy AI in ways that play to its strengths while minimizing its risks. In September, Anduril and Meta jointly bid for a US Army contract worth up to $159 million to develop another AI-powered application: a rugged augmented-reality helmet display for soldiers. Anduril says the system, which will deliver mission-critical information to warfighters while sensing their surroundings, will use a new generation of more capable AI models that can better interpret the physical world in real time.
And what about fully robotic soldiers? I called Michael Stewart, a former fighter pilot who ran the US Navy’s Office of Disruptive Capabilities and helped drive the Fifth Fleet’s AI experimentation starting in 2022. Stewart now runs a consulting firm and speaks with military planners around the world. He expects the future of warfare to be highly automated. “In 10, 15, 20 years, you’re going to have fairly autonomous robots,” he says. “That’s where this is going.” And assuming those systems have LLMs for brains, they won’t just be a new kind of witness to the horrors of war. They will be able to explain, in their own words, what actions they took and why.

