Maryland Today

‘A House of Dynamite’ Raises Questions About AI and Nuclear War

UMD Risk Analyst Discusses Breakout Film’s Accuracy and That Frightening Stat About Missile Defense


A character in the new Netflix movie “A House of Dynamite” deals with the threat of a global nuclear war from the White House Situation Room. Bilal Ayyub, a UMD engineering professor and director of the Center for Technology and Systems Management, spoke with Maryland Today about the film’s accuracy. (Photo by Eros Hoagland/courtesy of Netflix)

In the buzzy new Netflix thriller “A House of Dynamite,” the president and military leaders have 18 minutes to decide whether to retaliate against their adversaries as a nuclear missile of unknown origin hurtles toward the United States. In one scene of the heavily researched film, which critics call terrifyingly credible, an intelligence analyst raises the possibility that China might have fired the weapon in an artificial intelligence (AI) experiment gone wrong.

Bilal Ayyub, a University of Maryland engineering professor and director of the Center for Technology and Systems Management, has been paying close attention to the film’s reception. A risk analyst, he has counseled large firms and governments and published on geopolitical conflict and peace. On Tuesday he will give a talk on the implications of AI in nuclear war at the Pugwash Conference on Science and World Affairs in Hiroshima, Japan, marking the 80th anniversary of the bombing there. The conference, which unites global scholars and political leaders, shared the 1995 Nobel Peace Prize for its efforts on nuclear disarmament.

As the world faces the highest risk of nuclear war since World War II, is the movie’s AI premise realistic? Where else did the filmmakers hit or miss the mark? Ayyub sat down with Maryland Today for an interview that has been edited for length and clarity, with short-range spoilers.

What’s your top takeaway from the film?
We saw the human element in so many ways—the characters’ emotions, psychology, family links, attachments to cities. The human dimension is an integral component to military and defense leaders; their jobs are not just abstract or on paper. The film portrayed this accurately: the shock of the officers in Alaska after detecting the missile over the Pacific, the tense countdown to learn whether the GBI (ground-based interceptor) hits the target, the concern the secretary of defense had about his daughter in Chicago. That stayed with me.

Midway through, an analyst tells the president that the Chinese were experimenting with AI-controlled missiles. Is that plausible?
Everything is plausible, but right now the likelihood is very small. A lot of missile simulation testing is classified, but we’re not seeing any scenarios with nuclear weapons controlled by AI.

That said, there’s a race in AI among nations, and historically whenever there’s a new innovation, the first thing you think of is military use. There are reports about different branches of DOD pushing to have AI integrated into all their activities.

Would AI increase the risks of nuclear war? It could go either way. AI could enhance our ability to detect, track or even neutralize a missile launch, or to identify adversaries’ assets and movements. But if we remove the human from the loop, taking the decision-making process out and making everything automated, that could escalate things. There’s a difference between being intelligent and being wise, and AI might not navigate that distinction properly.

The film controversially suggests that Ground-Based Interceptors (GBIs) have a 61% success rate of stopping an intercontinental ballistic missile, akin to a $50 billion “coin toss,” as described by the fictional secretary of defense. What does the research say?
The Ground-Based Midcourse Defense system, which is the one they’re talking about in the film, includes 44 GBIs in Alaska and California. Each has a success rate of 60%, but that’s one interceptor against one incoming missile. If you increase it to three against one, the success rate goes up to 94%. For other interceptor systems, the success rate is also higher.
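
The 94% figure follows from treating each interceptor as an independent shot: the chance that at least one of three succeeds is 1 − (1 − 0.6)³ ≈ 0.94. Here is a minimal sketch of that arithmetic in Python, assuming independent and identically likely shots, a simplification real engagements don’t guarantee:

    def intercept_probability(single_shot: float, shots: int) -> float:
        # Chance that at least one of `shots` independent attempts succeeds.
        return 1 - (1 - single_shot) ** shots

    print(intercept_probability(0.60, 1))  # 0.6: one GBI against one missile
    print(intercept_probability(0.60, 3))  # ~0.936, roughly the 94% Ayyub cites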

It’s worth noting that an attack on the U.S. would most likely include several missiles with multiple warheads, including decoys. And our response would be more complex than firing one or two GBIs, so that was the least believable part of the film. But overall, I think it was adequately accurate. If we had an attack, would some of the missiles pass through our defenses? I’d say it’s likely.   

In a key narrative, the president must decide within minutes whether to launch retaliatory missiles and touch off a full-scale nuclear war. How did that resonate with you as a risk analyst?
Risk analysis informs decision-making and entails complex considerations including risk perception, risk tolerance and attitude, potential losses and payoffs, et cetera, but by the end of the process, it no longer comes down to probabilities. It’s black and white, option A or option B. We saw that in the film when Jake (the deputy national security adviser) tells the president, “It’s either surrender or suicide.” The complexity of this decision was on the mark.

When we create risk-analysis models, we need them to examine the physical and cyber aspects—the missiles, radars, communication channels and their success probabilities. But at the same time we must include the human aspects: the leaders at the nine nuclear powers, their psychological profiles and preferences. And importantly, we need to engage them. There should be no surprise where a missile comes from, and we shouldn’t need to look for the phone number of Russian leaders, as the film portrays. 
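
To make that modeling idea concrete, here is a purely illustrative sketch of the serial reliability chain Ayyub describes, in which detection, tracking, communication and interception must all succeed. The component names and probabilities are hypothetical placeholders, not real system data, and the human factors he emphasizes are exactly what a toy model like this leaves out:

    # Hypothetical component success probabilities (placeholders, not real data).
    components = {
        "radar_detection": 0.98,
        "tracking": 0.95,
        "communications": 0.97,
        "interceptor": 0.60,  # single-shot GBI figure discussed above
    }

    # For a serial chain, overall success is the product of the parts,
    # assuming independence, another simplification a real model would relax.
    overall = 1.0
    for probability in components.values():
        overall *= probability

    print(f"End-to-end engagement success: {overall:.0%}")  # about 54%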

AI at Maryland

The University of Maryland is shaping the future of artificial intelligence by forging solutions to the world’s most pressing issues through collaborative research, training the leaders of an AI-infused workforce and applying AI to strengthen our economy and communities.

Read more about how UMD embraces AI’s potential for the public good—without losing sight of the human values that power it.
