Imagination, Probability, and War

Artificial intelligence may help us make sense of large, complex questions, but imagining the future of war needs to remain a fundamentally human exercise.

Credit: Flickr/Mike MacKenzie

In the 1950s, military theorists in the United States and the Soviet Union assumed that the future of war was nuclear. After all, nuclear weapons had capped the most destructive war in human history and represented orders of magnitude more firepower than any other class of weapon ever imagined. So why wouldn't they figure prominently in future conflicts?

The doctrinal response was to try to insert nuclear weapons or propulsion into virtually every aspect of warfighting. In short order, one or both sides of the Cold War had tested or deployed nuclear torpedoes, artillery shells, depth charges, landmines, demolition charges — even a nuclear bazooka whose use imperiled its own crew. Nuclear propulsion was trialed successfully on surface ships and submarines and less successfully on aircraft and cruise missiles. Both fictional narratives and war plans assumed that battlefield nuclear weapons use was a given in the event of any shooting war between major powers.

It never happened. The potential for escalation, the unpredictable spread of fallout, and the enormous expense of building and maintaining those weapons combined to make tactical nukes impractical; ultimately, precision-guided weapons proved able to complete the same missions without the massive downsides. The nuclear arms race never left us, and in many ways it is heating up again, but investment and development at this stage are directed largely toward strategic weapons rather than battlefield ones.

Nor is that the sole example of the martial imagination failing to survive contact with reality. From the idea that armored cavalry would dominate battlefields indefinitely, an assumption undone by professionalized militaries and the longbow, to the pre-World War II conviction that airborne light infantry could never overwhelm static defenses, our collective imaginings of what war will look like have always diverged from its realities.

One could argue that we are getting better at predicting the future. With vastly more data and more powerful tools to build models from it, we are getting better at understanding and forecasting the outcomes of complex systems: weather fronts, population growth, elections. But modeling and prediction are two very distinct things; amassing more data is fundamentally different from having full command of all of it. Future-facing analysis should always be undergirded by humility, even, and especially, when it is predicated on large datasets and sophisticated models.

So how do we build a better imagination for the future of war? Or, more to the point, should we bother? After all, the fact that so much rides on getting prediction right tends to drive strategic and military planning toward caution and resilience.

The doctrinal answer, to a large extent, seems to be investment in the constellation of technologies gathered under the slightly nebulous heading of artificial intelligence (AI). And there is not much point in trying to slow this trend. The vast implications of AI for everything from industry to finance to health care, combined with the logic of international competition in an increasingly nationalistic world, make meaningful multilateral control of AI an incredibly heavy lift.

This is not the defense policy version of a human autoworker losing his or her job because a robot can make the same welds faster, more accurately, and without salary or sleep. No one is seriously suggesting that we put future trend prediction into the hands of neural networks; rather, the idea is that machines will sort and digest data and humans will make decisions on that basis.

The tricky part is managing the interface between the two. Helping to sort through data is still, after all, a long way from fully imagining possible futures. And artificial intelligence operates along fundamentally different lines from its human counterpart: it excels at pattern recognition and at sorting huge sets of data, but neither of those amounts to creativity or imagination. Humans have cognitive biases of all kinds and cannot hope to match the data-processing capacity of even rudimentary software, but they can think laterally and creatively. It is possible that a sophisticated neural network, given enough latitude, could demonstrate something comparable to creativity, but would its output be recognizable as such to human decision-makers, or relevant to possible futures? And if not, if we cannot train our machines to fill in our own cognitive limitations seamlessly, we are simply creating new, exploitable blind spots.

Those limitations apply to any form of computer-aided prediction, but our conception of war in particular is fundamentally shaped by our own mortality. The computer system in the 1983 film "WarGames" learns to avoid nuclear war through its relationship with the teenage hacker David Lightman, played by Matthew Broderick. But in cold, unsentimental reality, computer systems have no frame of reference for individual or existential fear; they simply execute their code until they stop. Whatever war is, whatever it is becoming, it is more than that.