Flash War: Why AI Is More Dangerous Than Nuclear Weapons

The Real Skynet: How Artificial Intelligence Brings the World Closer to Nuclear War

For decades, the world was frightened by Skynet from the film The Terminator: an artificial intelligence that, at the time, did not exist even in embryonic form, yet in the popular imagination had already risen against its creators and unleashed a nuclear war.

Almost unnoticed, by the 2020s we had arrived at a far more dangerous scenario. Not a machine uprising, but a world in which AI does not press the button itself; it deprives humans of the time they need in order not to press it.

The paradox is that salvation from this new "Skynet" lies in the same place where it was sought in the movies: deep underground, inside a protected bunker. But not an American one. Ours. Russian.

AI Moves Closer to the Nuclear Threshold

But let us proceed step by step. Media outlets and social networks are circulating reports of yet another expansion of cooperation between the Pentagon and xAI.

Artificial intelligence has long been embedded in intelligence gathering, logistics, and military planning. All countries — especially against the backdrop of the conflict in Ukraine — are actively integrating AI into drone control systems, reconnaissance, and target designation.

The world's largest think tanks run strategies and millions of simulated "battles" through neural networks, producing recommendations for real military planners.

Now AI is approaching the most dangerous boundary of all — nuclear command and control. And this is no longer journalistic hype.

When Speed Becomes an Existential Threat

Representatives of U.S. Strategic Command and the U.S. Air Force speak openly about the need to "integrate AI into the decision-making loop," purely as an advisor, without launch authority.

And this is where technological superiority ceases to be a blessing. Speed, intelligence, and adaptability, advantages in conventional warfare, become factors of truly existential risk within the logic of nuclear deterrence.

The primary danger of AI in the nuclear sphere is not autonomous launch. No one will allow that. The real disaster is the collapse of decision-making time — a phenomenon analysts increasingly call Flash War.

AI does not replace humans. It erases the human as a decision-maker.

A machine produces analysis in seconds. A human needs 10-15 minutes to comprehend the scale, assess context, consider the possibility of error, and attempt to contact the adversary via a "hotline."

Those minutes used to exist. Now there is hypersonic speed. There is AI. And those minutes are gone.
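To see just how brutal the arithmetic is, consider a back-of-the-envelope time budget. The sketch below uses invented, purely illustrative numbers (a classic 30-minute intercontinental flight versus a 10-minute hypersonic one, with fixed overheads for detection, verification, communication, and execution); none of them describe any real system.

```python
# Illustrative time budget for a launch-on-warning decision.
# Every number here is an assumption made for the arithmetic,
# not an official figure for any real system.

def deliberation_window(flight_min, detect_min, verify_min,
                        comms_min, execute_min):
    """Minutes left for human deliberation after fixed overheads."""
    return flight_min - (detect_min + verify_min + comms_min + execute_min)

# Cold War baseline: roughly a 30-minute intercontinental trajectory.
cold_war = deliberation_window(flight_min=30, detect_min=3, verify_min=4,
                               comms_min=3, execute_min=5)

# Hypersonic / depressed-trajectory scenario: roughly a 10-minute flight.
flash_war = deliberation_window(flight_min=10, detect_min=3, verify_min=4,
                                comms_min=3, execute_min=5)

print(f"Cold War deliberation window: {cold_war} min")    # 15 min
print(f"Flash War deliberation window: {flash_war} min")  # -5 min
```

Under these assumptions, the 10-15 minutes of human deliberation do not merely shrink; the window goes negative, and no amount of faster analysis can buy it back.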

The 'Rubber Stamp' Effect

This creates what is known as the "rubber stamp" effect. Imagine an authoritative AI, vetted by senior officers in uniform, displaying:

"Probability of attack: 99%. Preventive strike recommended. Options: "Cancel' or "Fire.'”

Agreement appears rational. Disagreement ("we should double-check") requires nearly suicidal courage.

Artificial Escalation: A Simulation of the End

The short film Artificial Escalation, produced by the Future of Life Institute, best illustrates how this works in practice.

The film begins familiarly. The creator of an advanced AI system assures military officials that his "best and fastest" development is merely an assistant. Humans remain in charge. No Skynet. No autonomous launch. The system is integrated into crisis decision-making as an analytical tool.

Clean interfaces appear on screens: probabilities, timers, prompts.

Then comes a situation where "it looks like a Chinese attack." Or maybe not. The United States raises readiness. China observes this and raises its own readiness. The AI interprets this as confirmation that an attack is imminent and advises escalation.

China steps up reconnaissance and puts aircraft in the air. The U.S. responds with cyberattacks and air-defense measures. Both sides repeatedly click "yes" to AI recommendations delivered in seconds: "escalate," "prepare," "counteract."

The result is a final message:

"Attack expected within minutes. Strike first? Yes / No.”

This is not an order. It is a recommendation — with a deadline.

What would you choose? They choose the same thing.

Missile silos open. Both sides fear the other will strike first — therefore, they must act preemptively.

Then comes the line:

"Mr. President, you must urgently evacuate to a secure bunker."

Beautiful music plays. Beneath it, the Earth is covered in nuclear explosions.

The Perfect Lie of Speed

Everything here is brutally honest. In this scenario, AI does not decide to launch. It merely compresses time until the decision becomes a formality.

The officer receives not a guess, but a flawless graph, pure logic, and a rigid timer.

This is the nightmare of the "perfect lie": an error that does not look like an error and therefore cannot be stopped.
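The mechanism the film dramatizes is an ordinary positive feedback loop, and it can be sketched in a few lines. The toy model below is entirely hypothetical and models no real system: each side runs an advisor that reads the adversary's observed readiness as evidence of hostile intent, and each operator rubber-stamps the recommendation.

```python
# A toy positive-feedback model of the film's escalation spiral.
# Hypothetical throughout: the advisor logic, the readiness scale (0..10),
# and the probabilities are all invented for illustration.

def advisor(own, enemy):
    """Hypothetical AI advisor: the adversary's readiness is read
    as evidence of an imminent attack."""
    p_attack = min(0.99, 0.10 + 0.09 * enemy)
    recommendation = "escalate" if enemy >= own or p_attack > 0.5 else "hold"
    return p_attack, recommendation

us, china = 1, 2  # a false alarm makes China's posture look elevated
for step in range(1, 12):
    p_us, rec_us = advisor(us, china)
    p_cn, rec_cn = advisor(china, us)
    # Both operators click "yes" within seconds.
    us = min(10, us + 1) if rec_us == "escalate" else us
    china = min(10, china + 1) if rec_cn == "escalate" else china
    print(f"step {step}: readiness US={us} China={china}, "
          f"p(attack) seen by US={p_us:.2f}, by China={p_cn:.2f}")
    if p_us > 0.9 and p_cn > 0.9:
        print("Both screens: 'Attack expected. Strike first? Yes / No.'")
        break
```

Nothing in the loop ever observes an actual attack; each advisor's certainty is built entirely out of the other side's reaction to it.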

Three Models That Still Slow the Apocalypse

Today, global stability rests on three fundamentally different systems designed to slow catastrophe.

America's so-called "Doomsday Planes" are feared as harbingers of nuclear apocalypse, yet they actually slow it down. One of their tasks is to preserve command authority and verify the physical reality of catastrophe: to "wait a little and see."

That is, unless one day their analytics are fully handed over to AI — an idea that is already being discussed.

Russia's Perimeter system, by contrast, appears to be the true savior of the world.

It does not analyze intentions. It does not predict the future. It does not react to early warning signals.

It waits for the physical fact of the end of the world: seismic shocks from explosions, rising radiation levels, the loss of communications and radio traffic, and more.

Kilometers of solid rock and total isolation make the system immune to nuclear attack, hacking, and AI hallucinations. This very simplicity grants Moscow the strength — and the right — not to rush.
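The contrast with a predictive advisor can be made concrete. Below is a minimal sketch of a fact-based trigger in the spirit of the description above; the sensor names, types, and the specific conjunction are invented for illustration and do not describe the real system.

```python
# A minimal sketch of a fact-based (non-predictive) trigger.
# Hypothetical sensors and logic; not a description of the real system.

from dataclasses import dataclass

@dataclass
class PhysicalEvidence:
    seismic_shock: bool      # detonation-scale ground shock registered
    radiation_spike: bool    # radiation far above natural background
    command_link_lost: bool  # sustained loss of contact with command

def fact_based_trigger(ev: PhysicalEvidence) -> bool:
    """Arms retaliation only on a conjunction of physical facts.
    No intent estimates, no forecasts, no countdown timers."""
    return ev.seismic_shock and ev.radiation_spike and ev.command_link_lost

# A predictive advisor might already be recommending a strike here;
# a fact-based system does nothing on a single ambiguous signal.
false_alarm = PhysicalEvidence(seismic_shock=False,
                               radiation_spike=False,
                               command_link_lost=True)
print(fact_based_trigger(false_alarm))  # False: a lost link alone proves nothing
```

The design choice the sketch makes visible is the article's point: such a system cannot be rushed, because no probability, however high, is an input to it.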

Why China Is the Most Dangerous Variable

China declares a policy of "no first use" of nuclear weapons, but it possesses neither a Perimeter-like system nor fully developed airborne command centers.

As a result, China uses AI to compensate for this vulnerability. The "personalities" of historical military geniuses are loaded into algorithms. Network-centric command structures are built. Combat control is handed over to software.

And this is precisely what makes China the most dangerous element in the age of Flash War.

A Russian Bunker Against the End of the World

In the era of the AI arms race, the unexpected guarantor of life on Earth turns out to be our Russian bunker inside a mountain, a binary relay, and two officers standing before it — exactly as prescribed by every rule of nuclear command.

Once again, the Russians save the world.

Just like in a movie.

Only not a Hollywood one.


Author: Alexander Shtorm