From Trinity to the Singularity: Why AI Doom Analogies Fail
Two favorite arguments of AI ‘doomers’—the nuclear ignition scare and the CFC ban—fail completely as analogies for pausing AI. Both involved small, controllable systems and quantifiable risks; AI is global, diffuse, and inherently uncertain.
At a recent conference in Istanbul on Beneficial AI, I participated in several panel discussions. One was on proactionary vs. precautionary approaches to AI risk management. Most of us agreed on a proactionary approach in which both risks and benefits are considered, with an emphasis on the value of continued progress and learning by doing. One participant – someone I would put in the “AI doomer” category – argued for stopping AI research, at least any AI work that could lead to artificial superintelligence.
Those of us on the proactionary side – and others who recognize the impracticality of an AI pause or AI halt – have pointed out an obvious issue: Any one company or country that halts work on AI will make vanishingly little difference, since other companies and countries will take over. If all the “responsible” and peaceful countries stop, less responsible and peaceful countries will take advantage and take a strong lead as they continue to develop the technology.
This AI doomer’s reply is to point to two past events. One is the Trinity risk – the short-lived concern that a nuclear detonation might ignite the atmosphere in a runaway reaction. The other is the global treaty banning CFCs, which were thought to be thinning the ozone layer. (There was never a literal hole in the ozone layer; the “ozone hole” was a severe seasonal thinning over Antarctica, a nuance typically lost in media coverage.) I will look at both of these comparisons.
First, I have to say that I do not expect superintelligent AI (SAI) anytime soon. You can see some of my reasons in my ongoing series on the singularity idea. A growing number of experts now say that they do not see SAI arising from current LLM approaches. A few of them are working on alternative approaches to AI, but those approaches are relatively underfunded. I wish them luck but remain skeptical that their efforts will lead to AGI or SAI in the next few years.
Even so, I am extremely interested in the AI risk argument. It is an excellent area in which to apply the Proactionary Principle: the downsides are seen as potentially very large, even apocalyptic, and the upsides of AGI/SAI are massive. So, although I doubt this is an issue we will face soon, it could arrive quickly and perhaps unpredictably. Hence my interest. Back to the argument.
My position is easy to state: This doomer comparison argument is facile, unconvincing, and indefensible. Let’s look first at the most famous historical ‘pause’—when physicists briefly feared that the first atomic bomb might ignite the atmosphere.
The Trinity detonation
In 1942–43, during the Manhattan Project leading up to the Trinity detonation, some of the theoretical physicists — notably Edward Teller, Emil Konopinski, and Arthur Compton — raised the question of whether a fission bomb might ignite the atmosphere. The bomb would produce temperatures in the millions of degrees, hot enough to induce fusion among nitrogen nuclei in air. If these reactions could propagate faster than radiative cooling, they might theoretically cause a runaway “burning” of the atmosphere — consuming all air and oceans in a brief global catastrophe.
Throughout this episode, there was no panic, just methodical examination of the possibility. Edward Teller brought the matter to Hans Bethe, who was among the world’s experts on nuclear reaction cross-sections. Teller and Emil Konopinski did a preliminary back-of-the-envelope calculation and found the conditions probably wouldn’t sustain a chain reaction, but the question was not yet rigorously settled.
Bethe then performed a more detailed analysis in early 1943 and concluded the atmosphere could not ignite, for two reasons: first, the energy from gamma rays and fission fragments would be rapidly dissipated; second, the conditions required for sustained nitrogen fusion (or deuterium burning in the oceans) could not occur at ordinary atmospheric density. If you want to see the demonstration, you can find it in the declassified LA-602 report, “Ignition of the Atmosphere with Nuclear Bombs,” by Konopinski, C. Marvin, and Teller, issued in 1946 and documenting the wartime calculations.
LA-602 analyzed fusion cross-sections of nitrogen and oxygen at bomb-level temperatures; radiative losses (energy escaping as light before more reactions could occur); and the mean free path of particles at atmospheric density. The physicists concluded that the atmosphere was safe by a huge margin — the reaction could not sustain itself because radiation losses and low density would quench any fusion before it spread. This report was reviewed by Bethe and Compton, and later by Oppenheimer, who fully accepted the result.
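To give a rough sense of the kind of criterion involved (a schematic in my own notation, not the report’s actual formulation): a fusion “flame” in air can propagate only if the nuclear energy released in a region outpaces the energy radiated out of it, roughly

\[
\tfrac{1}{2}\, n_{N}^{2}\, \langle \sigma v \rangle(T)\, Q \;>\; \epsilon_{\mathrm{rad}}(T, \rho)
\]

where n_N is the number density of nitrogen nuclei, ⟨σv⟩(T) is the thermally averaged fusion reaction rate at temperature T, Q is the energy released per reaction, and ε_rad(T, ρ) is the rate at which energy escapes as radiation at that temperature and density. At ordinary atmospheric density the left-hand side falls short, so any incipient fusion is quenched before it can spread – which is why the report could state such a comfortable safety margin.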
Resolving the question did not take long. It was raised in 1942 and early 1943 and resolved within weeks to a couple of months. By mid-1943, the scientists were completely confident it was impossible. The story became more widely known after the war when journalists learned of it, and it was further boosted by dramatizations from Arthur Koestler and H.G. Wells and by Freeman Dyson’s recollections. But during the war itself, the physicists were never close to cancelling the test; by 1945 they regarded the risk as zero in practical terms (a probability below 10⁻²⁰, according to Bethe’s estimate).
Similar theoretical worries resurfaced before the 1980s heavy-ion collider experiments and before the CERN LHC startup (2008). Both times, physicists revisited the math and found the risk negligible.
How does this episode compare to proposals to pause or halt AI research due to the posited risk of AI destroying the human race?
Maybe half a dozen physicists were concerned. The issue was resolved in around two months (according to Hans Bethe).
It was possible to calculate an objective answer – unlike the AI doom scenario.
A pause was both reasonable and possible for a small number of physicists working on a single project. The same is clearly not true for a wide array of companies and countries working on AI, powered by strong incentives. We can see major differences along multiple dimensions.
Epistemic difference: Measurable risk differs from speculative risk. The Manhattan Project risk was quantifiable, verifiable, and objective. The physicists confronted a concrete physical question: can nitrogen fusion self-propagate at atmospheric density? The input data for the calculation consisted of cross-sections, densities, and temperature. All of these are measurable. This enabled a reliable answer to be produced within weeks, an answer universally accepted.
By contrast, AI risk involves unbounded sociotechnical systems with emergent behavior, recursive feedback, and moral consequences. No equation exists or can exist to calculate existential AI risk. There is no laboratory test and no agreed metric of “alignment.” Far from being calculable and objective, claims of existential catastrophe rely on chains of speculative reasoning, not falsifiable physics.
Institutional and coordination context: This was a single military project with a small number of physicists. AI research is distributed globally across corporations, open-source communities, and states with conflicting incentives. Even a pause of a few months in one or several jurisdictions would only shift activity elsewhere. This would be a massively more complicated coordination problem.
For AI, a better structural comparison would be global climate policy. By contrast, the tightly focused nature of the Manhattan Project issue is what enabled a brief but absolute pause: if Oppenheimer said, “stop until Bethe signs off,” everything stopped.
Incentives: The Manhattan physicists had a single, state-funded goal and could afford to halt for verification because a short halt carried no competitive penalty. By contrast, AI progress brings immediate economic, military, and reputational rewards. Pausing AI would require somehow changing incentives across markets and nations, a far cry from a small group in one organization waiting for a calculation they could agree on. Worldwide restrictive agreements are almost impossible to secure and enforce; thirty years of global climate (COP) conferences with no noticeable difference to CO2 output show this.
Types of uncertainty: The kind of physics uncertainty the Manhattan Project faced contracts as more data becomes available. It is a bounded problem on which progress can be made, demonstrably and verifiably. The socio-technical uncertainty involved in AI risk (and benefit) expands with scale: new behaviors, risks, and opportunities appear as systems interact with humans and each other. When you cannot determine an upper bound on risk, or even objectively assess it, the rational response is unlikely to be an indefinite suspension of research; doing so freezes potential solutions. The rational response to open-ended uncertainty is iterative control, not slamming the world with a coercive “freeze!” order.
Implicit in many arguments by AI doomers is an epistemological fallacy: We are not to proceed with AI development until we can prove that it is safe. In other words, doomers are using the precautionary principle. This approach assumes that we can sit back, halt AI development, and figure out safety in the abstract. This is not how knowledge works outside of formal disciplines. We have to grapple with developments as AI research and development continues. Knowledge can be gained only by continued engagement. A universal pause is not prudence, it’s abdication.
The “atmosphere ignition” story does not demonstrate cautionary paralysis. It actually demonstrates scientific responsibility in an area with objective, verifiable conditions.
CFCs: another poor comparison
Another historical case that doomers use to support their claim that a global halt is feasible and desirable is the Montreal Protocol agreement to ban CFCs. This may seem promising because it is indeed one of the very few examples of successful global environmental coordination. However, it succeeded because of factors that are almost entirely absent in the AI context. The Montreal Protocol was a rare success because it fit the template of a solvable collective action problem: clear cause, narrow industry, affordable alternatives, and verifiable compliance. AI governance is a much harder problem precisely because it meets none of those conditions.
The CFC problem involved one class of chemicals produced by a small number of companies in a few industrialized nations. The causal link from CFCs to ozone depletion to UV radiation was empirically measurable and confirmed by satellite data. Replacing CFCs was relatively easy, requiring only a switch to readily available alternatives, with no transformation of the global economy or revision of national security priorities.
AI development differs in that it crosses domains and is ubiquitous, pulling in language, robotics, intelligence analysis, healthcare, finance, weapons, and more. It involves a vast array of organizations, individuals, and governments who recognize AI’s vital role in innovation and competition. Unlike CFCs, there is no simple substitute for AI progress. Halting AI would be more like halting electricity or computing.
In the CFC case, incentives were not a problem. Companies could profit by producing new refrigerants. There was no pervasive economic downside. The Montreal Protocol worked because compliance could be verified by tracking production and trade of chemicals. AI research happens in software and algorithms. These are intangible, rapidly replicable, and often open source.
As in the Manhattan Project case, scientists could measure and model ozone chemistry, and uncertainty shrank as more data became available. AI’s long-term risks are conceptual and speculative. Thinking about how and whether systems become dangerous is a matter of theoretical extrapolation and story-making.
The CFC ban is therefore completely unsuitable as a model for an AI pause or halt. The Montreal Protocol succeeded because it was simple, centralized, verifiable, and economically painless. AI is the opposite: it’s decentralized, strategic, and woven through every major industry and national interest. There are no easy substitutes for machine intelligence and no way to verify “compliance” across codebases and data centers.
The obvious weakness of these comparisons raises the question: How can AI doomers use these comparisons in their arguments?
It may be that those using these comparisons as an argument for pausing or stopping AI research actually believe them. Motivated to prove their view, they may avoid thinking critically about these comparisons. Or they may know that the comparisons fail to make their point but hope others don’t notice. In that case, they may feel that a misleading but effective argument is justified by the extreme nature of the AI doom they believe is almost inevitable and imminent. This would be like Christians who believe that non-believers go to Hell where they suffer forever (a truly evil idea!) and so find any means of convincing non-believers to be justified – up to and including torture. (However, torture may not induce belief but only a claim of belief.)
Finally, it seems to me that many of those who were most zealous and excited about the Singularity flipped around and became the most terrified of AI and the singularity. Plenty of people are attracted to the AI doom scenario because humans are attracted to doom scenarios of all kinds. The psychology of extremity is the same: whether it’s salvation or damnation, it satisfies the craving for cosmic drama. What is most interesting is that conversion itself: the need for something extreme led the super-enthusiasts to flip from AI-as-savior to AI-as-damnation.