In the summer of 2020, a landmark experiment unfolded at a Johns Hopkins laboratory. Observers tuned into a virtual dogfight between a seasoned F-16 pilot and an AI agent trained by a private contractor. The result was startling: the AI won every engagement, 5–0. The contest – DARPA’s AlphaDogfight Trials – played out like an eSports match, with algorithms pitted against a human pilot on a simulator. Viewers watched as the AI, never faltering, “easily shot down” its human foe time after time. This wasn’t a movie scene but real evidence of rapid advancement: an AI “flyer” had decisively outmaneuvered its human adversary. Even so, experts cautioned that the test was partly for show. As former Navy pilot and automation researcher Missy Cummings noted, the dogfights were “cool” demonstrations but likely artificial theater, chosen because programming them was relatively easy. Still, the result underscored a larger shift: machine-learning systems are now contenders in a domain once dominated by flesh-and-blood pilots. The AlphaDogfight Trials were a proof of concept, showing that a learning algorithm could pilot an aircraft using only simulated sensors and rules. For proponents, it proved the value of human-AI teaming; for skeptics, it was a publicity stunt. Either way, it revealed that the battlefield is already moving from human-versus-human combat to a hybrid of human and artificial agents.
The Last Human Fighter Pilot?
The AlphaDogfight was more than a novelty – it foreshadowed an unfolding reality. Today’s air forces are actively testing manned-unmanned teaming, in which human pilots operate alongside (or even direct) autonomous drones. Top U.S. commanders envision next-generation jet fighters controlling swarms of AI wingmen. The Air Force has been trialing the Skyborg program – an unmanned “loyal wingman” drone designed to fly alongside crewed jets, sharing targeting data and even engaging enemies under minimal human guidance. In exercises, small $4 million Valkyrie drones have buzzed alongside F-35s and F-22s, scouting ahead, jamming radars, or firing missiles, all under a pilot’s high-level command. “We can talk about a formation of a manned aircraft controlling multiple unmanned aircraft,” says Air Force Secretary Frank Kendall. In this vision, swarms of cheap drones multiply a pilot’s effectiveness: a single human could orchestrate half a dozen sensor- and missile-packed drones, each controlled through software. Programs like DARPA’s X-61A Gremlins (air-launched, air-recoverable drones) are pushing the same trend. Allied air forces are joining in: Australia’s “Loyal Wingman” program with Boeing has built several autonomous fighter-sized drones, and France, China, and Russia are reportedly developing similar projects.
The implications are clear: someday there might be no cockpit at all. Some commentators have quipped that the “last” fighter pilot has already been born – a reference to ever-faster automation making manned combat jets obsolete in the coming decades. Whether literally true or not, the message is that the human role in dogfights is shrinking. In unmanned air combat, machine-learning pilots trained via reinforcement learning can run millions of simulated engagements, see thousands of enemy missile scenarios, and find maneuvers that no human in a g-suit could execute. And they never tire, panic, or succumb to G-forces. The AlphaDogfight Trials demonstrated this edge: a software pilot with millions of simulated hours flew more crisply and boldly than any human could. Critics point out, however, that simulation is simpler than real combat; the AI didn’t have to manage logistics, fuel limits, or multi-ship coordination. Yet DARPA’s explicit goal is to build trust in autonomous air combat: the algorithms were meant to demonstrate reliable, deconflicted behavior. As one researcher put it, having AI win dogfights will “spur a range of research” benefiting the world. The Pentagon is betting on it.
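To make the reinforcement-learning claim concrete, here is a minimal sketch of the train-by-simulation loop such a “software pilot” depends on. The DogfightSim environment and the crude hill-climbing agent are hypothetical stand-ins invented for illustration; real programs use far richer simulators and deep reinforcement learning, and nothing below reflects DARPA’s or any vendor’s actual code.

```python
# Illustrative sketch only: a toy simulator and a crude hill-climbing "policy"
# standing in for the millions of simulated engagements a real RL pilot flies.
import random


class DogfightSim:
    """Toy one-on-one engagement: the agent tries to close range on an adversary."""

    def reset(self):
        self.own_pos, self.enemy_pos, self.steps = 0.0, 10.0, 0
        return self.enemy_pos - self.own_pos          # observation: range to target

    def step(self, closure_rate):
        self.own_pos += closure_rate                  # action chosen by the agent
        self.steps += 1
        rng = abs(self.enemy_pos - self.own_pos)
        reward = -rng                                 # shaping: smaller range is better
        done = rng < 0.5 or self.steps >= 50
        return rng, reward, done


class HillClimbAgent:
    """Keeps one policy parameter, retaining it only if an episode scores better.
    Real systems learn with policy gradients or Q-learning; this stands in for the idea."""

    def __init__(self):
        self.best_param, self.best_return = 0.0, float("-inf")
        self.trial_param = 0.0

    def propose(self):
        self.trial_param = self.best_param + random.uniform(-0.5, 0.5)

    def act(self, observation):
        return self.trial_param                       # constant closure rate this episode

    def update(self, episode_return):
        if episode_return > self.best_return:
            self.best_return, self.best_param = episode_return, self.trial_param


env, agent = DogfightSim(), HillClimbAgent()
for episode in range(100_000):                        # stand-in for "millions" of runs
    agent.propose()
    obs, done, episode_return = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(agent.act(obs))
        episode_return += reward
    agent.update(episode_return)

print(f"learned closure rate: {agent.best_param:.2f}")
```

The point of the sketch is the loop structure, not the learning rule: an agent that can replay an engagement a hundred thousand times for free will stumble onto tactics no human training pipeline could ever afford to rehearse.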
The result of all this is a provocative question: who, if anyone, will fly the jets of the future? The last human fighter pilot – and by extension the last soldier on any front – is no longer an unthinkable prospect. Unmanned combat aerial vehicles (UCAVs) could someday supplant human pilots entirely. In fact, such ideas are already seeded in current programs: Skyborg’s wingman drones may eventually carry heavy munitions, and experimental unmanned combat jets (sometimes called robot fighters) are in development. Even NASA and defense labs are discussing prototype transatmospheric drone fighters. When that day comes, asking “Who will pilot the jet?” may be moot – it will be code, not a person, pulling the trigger.
When Robots Bleed Oil, Not Blood
If the skies are filling with artificial pilots, things are changing on the ground too. Autonomous drones are already reshaping today’s wars. In Ukraine, for example, small AI-enabled quadcopters and “kamikaze” drones have become ubiquitous. In 2025, Ukraine’s former commander-in-chief Gen. Valerii Zaluzhnyi reported that tactical drones now account for roughly two-thirds of Russian equipment losses. A study cited by Zaluzhnyi found these drones “twice as effective as every other weapon in the Ukrainian arsenal”. In practice, that means cheap commercial UAVs, sometimes modified with onboard target-recognition software, are out-killing tanks and artillery. American and European forces have taken note: swarm tactics and AI navigation (like Ukraine’s Ghost Dragon drones that learn to avoid jamming) are now front-line adaptations.
An example of an advanced military drone on a runway. Modern armed forces are experimenting with autonomous drones – from surveillance platforms to attack quadcopters – that operate with AI guidance. [Image credit: Pexels]
Meanwhile, major drone manufacturers and startups are racing to build autonomous attack craft of every shape. California’s Anduril Industries, founded by Oculus creator Palmer Luckey, epitomizes this new sector. Luckey openly states that his Lattice AI platform powers autonomous weapons designed “to swiftly win any war”. Anduril already sells autonomous sentry towers (armed sensor stations) and small attack drones. Luckey boasts that drone swarms’ “speed and low cost” make them potent deterrents because they “do not cost … human life”. In practice, Anduril’s drones are on the battlefield: the company has delivered hundreds of Altius-600M attack drones to Ukraine (small suicide UAVs that crash into targets) and helped deploy guard towers on the U.S.–Mexico border. Other American firms (like Shield AI and Kratos) and global vendors (from Turkey’s Baykar to China’s CASC) offer autonomous fighters or kamikaze drones as part of their arsenals.
Beyond the battlefield, this shift raises stark imagery and questions. When a drone’s engine fails, it spills oil, not blood. The human cost is removed from the immediate explosion, which could make sending machines into harm’s way politically easier – one reason proponents highlight autonomy’s lure. As Luckey argues, showing adversaries that “you have weapons that … don’t cost human life” restores a form of credibility in deterrence. Yet it also means war’s casualties become data points, not grieving parents, potentially lowering the threshold to open fire. Defense analyst Paul Scharre warns of an emerging “singularity” in warfare: as tactics speed up, humans won’t be fast enough to respond. In AI-driven battle tests, decisions occur at electronic speeds. An autonomous drone can detect, decide, and dive on a target in fractions of a second – long before a human can mentally process the threat.
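To illustrate what “fractions of a second” means in software terms, the sketch below walks through one sense-decide-act cycle and times it. The functions capture_frame, find_target, and commit_dive are hypothetical placeholders, not a real autopilot or vision API; the only point is that the whole loop closes in milliseconds, well below human reaction time.

```python
# One onboard detect-decide-act cycle, timed. All three functions are
# hypothetical stand-ins used purely to show the structure and the speed.
import time


def capture_frame():
    """Stand-in for reading a camera frame from the drone's sensor."""
    return {"pixels": None}


def find_target(frame):
    """Stand-in for an onboard detector; returns a score and a bearing."""
    return {"confidence": 0.97, "bearing_deg": 12.0}


def commit_dive(bearing_deg):
    """Stand-in for commanding the flight controller onto the target bearing."""
    return f"diving toward bearing {bearing_deg:.1f}"


CONFIDENCE_THRESHOLD = 0.9                            # engagement gate

start = time.perf_counter()
frame = capture_frame()                               # sense
detection = find_target(frame)                        # decide
if detection["confidence"] >= CONFIDENCE_THRESHOLD:
    action = commit_dive(detection["bearing_deg"])    # act
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"full cycle: {elapsed_ms:.3f} ms")             # milliseconds, versus the
                                                      # hundreds of milliseconds a
                                                      # human needs just to react
```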
This isn’t just theory. In 2017, robotics leaders penned an open letter to the UN warning that lethal AI weapons would allow conflict “at timescales faster than humans can comprehend”. The letter declared such autonomy a potential “third revolution in warfare,” capable of unleashing unprecedented scale and speed. Indeed, observers warn that the first strikes in a future war might fly before a commander even knows the war has started. As one Pentagon report puts it, we could see an “AI arms race” in which machines exchange strikes in microseconds while human officials race to catch up.
Yet not all military thinkers fully trust the machines. Some compare today’s tests to chess – engines have crushed world champions ever since Deep Blue beat Kasparov – but real war is messier. The dogfight trials, for example, were fully simulated, without real motion dynamics or electronic warfare. This matters: actual combat involves ambiguous information, civilian obstacles, and even hacking attempts. Missy Cummings (the ex-Navy pilot) and others note that current AI lacks common sense and failsafe judgment. One of the Pentagon’s own pilots, call sign “Glock,” admitted that dogfighting was chosen partly because it is fun and simple to script. Future conflicts will involve many edge cases – can an AI, for instance, refuse an order that would strike friendly forces when human judgment is called for? These are precisely the ethical and practical dilemmas regulators are now wrestling with.
Battle at Machine Speed
War doctrine is being rewritten for an age of “hyperwar,” when engagements unfold at machine speed. Chinese military thinkers openly discuss this. PLA analyst Chen Hanghui has described an approaching “singularity” in which battle tempo surpasses human cognition. “The human brain will no longer be able to handle the ever-changing battlefield,” he wrote, meaning most decisions would have to be ceded “to highly intelligent machines”. In effect, that would force soldiers and commanders to trust AI to calculate, decide, and even fire – a profound shift from an era of human judgment.
For real militaries, the concern is twofold. First, speed: will humans and machines form effective teams when the AI can react thousands of times faster? There’s evidence one human operator could supervise multiple drones, but only within strict rules. Any true battlefield singularity would dissolve traditional command: exchanges might happen so fast that by the time orders reach a human, it’s too late. In this scenario, humans become more like overseers than instant decision-makers. This unsettles doctrines built on human initiative. As one strategy article cautions, planners must grapple with the fact that an AI can fire a missile “before a commander even recognizes the engagement”.
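One way to picture that “overseer” role is an approval gate sitting between the machine’s recommendation and the weapon release, with a timeout that exposes the core tension: hold fire and lose speed, or delegate and lose control. The sketch below is a hypothetical illustration of the pattern – the Engagement class, request_human_approval, and the simulated operator are inventions for this example, not any fielded command-and-control interface.

```python
# Human-on-the-loop approval gate: the machine proposes an engagement,
# a person disposes, and a timeout forces the speed-versus-control choice.
# Everything here is a hypothetical stand-in for illustration only.
from dataclasses import dataclass
import queue
import threading
import time


@dataclass
class Engagement:
    target_id: str
    confidence: float


def request_human_approval(engagement: Engagement, timeout_s: float) -> bool:
    """Wait for an operator's decision; default to holding fire on timeout."""
    decisions = queue.Queue()

    def operator_console():
        time.sleep(0.2)            # simulated human deliberation time
        decisions.put(True)        # operator approves this engagement

    threading.Thread(target=operator_console, daemon=True).start()
    try:
        return decisions.get(timeout=timeout_s)
    except queue.Empty:
        return False               # no answer in time: hold fire


proposal = Engagement(target_id="track-042", confidence=0.95)
if request_human_approval(proposal, timeout_s=0.5):
    print(f"engage {proposal.target_id}")
else:
    print("hold fire: no human decision arrived at machine speed")
```

Tighten the timeout and the human is effectively cut out of the loop; loosen it and the machine’s speed advantage evaporates – which is exactly the doctrinal dilemma described above.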
Second, scale and security: autonomous war opens new fronts. Cyber and electronic warfare now intersect with physical battles. Imagine two fleets of AI-controlled submarines and drones, each constantly adapting its algorithms. A Reuters exposé on the U.S.-China AI arms race notes that navies like Australia’s are building “Ghost Shark” AI submarines that could prowl after surface fleets, cheaply and in large numbers. In this new setting, a single act of sabotage or code hijacking could cause machines on one side to turn on their own forces or fire on neutral parties. The open letter to the UN explicitly warned of “weapons hacked to behave in undesirable ways”. In short, if war becomes computation, it might be fought as much in server racks and code exploits as on battlefields.
All these developments have pushed military minds to convene. In 2024, dozens of nations (excluding China) signed an AI-in-the-military “blueprint” in Seoul, stressing that humans must retain control over lethal force. Even at top-level talks, President Biden and Xi Jinping agreed that humans – not AI – should decide on nuclear use. These statements highlight an uneasy reality: the speed and opacity of AI combat are deeply worrisome. As Ukrainian General Zaluzhnyi quipped, every commander now wrestles with drones, data, and AI, just as Napoleon’s generation grappled with mass conscription and a later one with radio. In this case, though, millions of lines of code can decide a mission.
Silicon Soldiers and the AI Arms Race
Behind these technological shifts are powerful new players. The military-industrial complex of yesteryear is morphing into a digital-military-industrial complex dominated by Silicon Valley and global tech giants. Former Google CEO Eric Schmidt exemplifies this change. As chair of the Pentagon’s Defense Innovation Board and later of the National Security Commission on AI, Schmidt urged pouring tens of billions into military AI to “get the country AI-ready” by 2025. He warns that traditional defense procurement (designed for hardware) is “antithetical” to software-driven systems. Yet Schmidt’s influence is controversial. Critics note that the same man who blames China for AI aggression also invests in Chinese AI funds. A recent investigation revealed that Schmidt’s nonprofits and partners quietly funded Chinese AI ventures even as he publicly pushed a hard line. It is a prime example of the dual-use quandary: Western tech leaders both profit from and worry about the AI arms race. As Gizmodo wryly observed, Schmidt’s new company “White Stork” is building autonomous drones while he warns of an AI arms race – highlighting a potential conflict of interest in the age of AI.
Elon Musk is another conflicted figure. On the one hand, he has famously derided the costs and failures of modern weapons (calling the F-35 “sh*t” at one point) and warned of AI doom. On the other, his companies now sit at the heart of defense. SpaceX’s Starlink satellites became vital in the Ukraine conflict, enabling drone coordination and communications. In practice, Musk’s personal whims can shape battles: he paused Starlink support for certain Ukrainian drone strikes, insisting his technology was not to be used offensively. This made headlines; the Pentagon admitted some officials were not briefed on Musk’s secret talks with world leaders, and observers began calling SpaceX “too big to fail” for U.S. defense. The unusual power of a single CEO in warfare underscores a new reality: Silicon Valley executives are de facto generals now.
Meanwhile, dozens of smaller defense startups are funded by venture capital. Companies like Anduril tout how their computer-centric platforms can “protect our people” by winning wars with minimal bloodshed. In China, private drone firms (often backed by state money) are legion; reports show PLA scientists using open-source AI like Meta’s Llama to build military chatbots and intelligence tools. Even U.S. tech firms are lining up for Pentagon contracts. The result is a global scramble not only for chips and tech talent but for the moral high ground. The international community is, in fact, discussing limits: the 2024 Seoul summit produced a document insisting on human control over weapons and on preventing AI from facilitating WMD proliferation. But China declined to endorse it. As one analyst noted, “We will never have the whole world on board” on AI ethics. The U.S.-China tech rivalry thus plays out in a broader competition over AI doctrines: one side favors rules and alliances, the other steadily builds up autonomous capabilities.
All the while, tech billionaires preach caution with one breath and develop new weapons with the other. Musk has warned that “AI is far more dangerous than nukes,” yet he has also taken on a government role pushing defense reforms. Schmidt acknowledges the dangers of AI but won’t pause development lest China leap ahead. Some commentators call this posturing: Schmidt and Musk are accused of profiting from the very AI militarization they publicly fret about. It is an uneasy mix of ethics and profit that critics term “techno-authoritarianism”: a world in which private tech titans help dictate war policy.
When War Becomes Code
As armies digitize, the essence of war is shifting from blood to bytes. Future combat will involve not just battalions and jets but servers and sensors. Nations will field armies of AI-driven machines whose casualties are measured in hardware units, not humans. Whether this makes wars easier or harder to start is unclear. Some strategists warn that removing your own soldiers from risk may lower the bar to battle: if decisions can be handed to ostensibly rational algorithms with fewer ethical constraints, a leader might push the red button more readily. Others argue the opposite – the complexity of autonomous warfare could create new checks. After all, if an enemy can retaliate in milliseconds with AI, the risk of accidental escalation grows, perhaps deterring first strikes.
One thing is certain: the character of warfare is changing. Instead of cities littered with bomb craters and bloodied battlefields, future conflicts might look eerily antiseptic. Fighting could be waged through satellite links and smartphone apps, with human generals overseeing the orchestration from a distance. Computer networks will become the front lines: an enemy unit might first be detected by one AI, targeted by another, and neutralized by a drone that has never seen a human enemy in person. In that sense, war becomes a form of massive, distributed computation.
By 2025 and beyond, strategists expect doctrine to evolve accordingly. The U.S. has already signaled this in its 2022 AI guidance, saying that “for each autonomous weapon system, humans must exercise meaningful control” – though what that means in practice is still a subject of study. For now, the frontline mission remains understanding the limits and capabilities of algorithmic force. The generals of the future may ask “Where is the server rack?” as often as “Where are the troops?”
Ultimately, the growing role of AI in war raises profound philosophical and ethical questions. Will humans become obsolete in war? Possibly as direct combatants, but many argue we will still be needed as supervisors and judges of the machines. As Paul Scharre and others note, prediction tools do well at analyzing probabilities, but judgment – considering strategy, diplomacy and ethics – is still a human specialty. Wars will still be political; algorithms can win battles but might struggle with the bigger picture.
Is war more or less likely when battles are fought by robots and code? History offers no easy answer. In one view, as Anduril’s blog put it, “only superior military technology can credibly deter war” – suggesting that advanced AI arms could enforce peace by making conflict suicidal. In another view, weaponizing the digital domain could make conflicts lightning-fast and uncontrollable. The 2017 open letter warned of “destabilizing effects” once Pandora’s box is opened. At the very least, AI adds unpredictability: a software glitch or cyberattack might ignite combat without human intention.
What does the battlefield look like without human faces? Imagine a war run entirely by machines: analysts liken it to two rival data centers disputing ownership of the world’s resources. In such a post-human war, there would be little time for diplomacy. Communications would hop across satellites, fiber links, and 5G, with each hack or deception another salvo. The “fog of war” might lift in some ways – machines share a clearer picture – but new fogs (jamming, falsified sensor data, runaway algorithms) will emerge.
In sum, the line between war and computation is blurring. We are on a slippery slope where a mistake in an algorithm could have the gravity of a missile strike. As one military sage quipped, today’s weapons are no longer just made of steel, but also of software. Whether this makes future wars “better” (more humane and short) or “worse” (faster and more catastrophic) remains to be seen. What is certain is that we are stepping into an era where battles will be fought as much in code as with bullets, and that requires us to rethink strategy, ethics, and the very nature of human combat.