Before going on to consider other questions about the singularity idea, it would help to take a step back. To have a clear conversation about singularities, we should be clear about what we are talking about. Although writers often refer to “the” singularity, that term means different things to different people. There are overlaps, but the differences matter.
The original use of “singularity” was in mathematics and physics. The current usage when talking about AI differs from those original conceptions but owes something to them. In mathematics, a singularity is a point at which a function, curve, or surface behaves abnormally — such as becoming infinite, undefined, or losing its usual smoothness. Such singularities often mark points where a mathematical model breaks down.
In physics, a singularity is a point in space-time where physical quantities — such as density, curvature, or gravitational field — become infinite or ill-defined, and the known laws of physics break down. Physical singularities arise when mathematical models of the universe predict singular (undefined or infinite) behavior in quantities that are normally finite.
The mathematical and physics kinds of singularity bear some resemblance to recent versions of “the singularity,” such as accelerating, exponential change or an intelligence explosion. Other senses are more loosely related. Setting aside math and physics, what are the main types of singularity concept?
The big three
The singularity idea goes back to statistician I.J. Good and his 1965 article, “Speculations Concerning the First Ultraintelligent Machine,” in which he wrote:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The idea was elaborated and given a big boost by Vernor Vinge in a 1983 opinion piece and a 1993 essay, “The Coming Technological Singularity.” In his fascinating 1988 book, Mind Children: The Future of Robot and Human Intelligence, roboticist Hans Moravec does not use the term but goes into detail on what amounts to an intelligence explosion.
The person perhaps most associated with the singularity today, Ray Kurzweil, conveyed his own version of the idea in his 1990 book, The Age of Intelligent Machines, and then in multiple books over the years. Good’s conception is one of the singularity as an intelligence explosion. Vinge considers that conception but also the version in which the singularity is a prediction horizon. Kurzweil’s version focuses on accelerating change, specifically in the form of exponential acceleration.
We should also credit Stanislaw Ulam, who in a 1958 tribute to John von Neumann recalled a conversation that “centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
Anders Sandberg has identified nine conceptions of the singularity (see further reading at the end), with some additional variations, but he shares the view of Bostrom, Yudkowsky, and myself that there are three main groupings: accelerating change, prediction horizon, and an intelligence explosion leading to superintelligence. David Chalmers notes a loose sense that “refers to phenomena whereby ever-more-rapid technological change leads to unpredictable consequences.” He sees the core sense of the term as “a moderate sense in which it refers to an intelligence explosion through the recursive mechanism set out by I. J. Good, whether or not this intelligence explosion goes along with a speed explosion or with divergence to infinity.”
Accelerating change (AC)
The rate of change gets faster over time. This need not be exponential at any particular point – or even over the whole trajectory – but is usually seen as exponential or superexponential change. While the core idea is closely tied to growth in computational capacity and power, this sense of the term often explicitly connects to economic growth and social change. (My objection to the view that acceleration must be a smooth exponential is why I prefer to talk of “surges” – see part 1). This sense of “singularity” is especially associated with Ray Kurzweil and Vernor Vinge.
Note that this is usually presented as exponential change or exponential growth in effects. If technological change follows a smooth curve, we can make reliable predictions about some of its capabilities and effects. We can have a reasonably accurate idea of when key technological capabilities will arrive, especially artificial general intelligence (AGI) and superintelligent AI (SAI). Hence, Ray Kurzweil makes numerous predictions by extrapolating exponential change. Ray often points out something true and still underappreciated: our intuitive thinking about the future tends to make linear extrapolations; we struggle to think in terms of exponential change. One highly practical way this difficulty manifests is that people invest less than they should because they don’t fully grasp the power of compound interest.
In an old story, a man makes a deal with the king: he will place one grain of rice on the first square of a chessboard, and the number of grains will double for each subsequent square. There are only 64 squares on a chessboard, thinks the king, so he can surely afford the rice on the 64th square. The third square would hold a mere 4 grains, the fifth just 16. If the king thought ahead to the tenth and eleventh squares, he might start to worry as 512 becomes 1,024. Yet he would have no idea that the last square alone would require 2^63 grains of rice (9,223,372,036,854,775,808, or about 9.2 quintillion), and the total across the whole board nearly twice that.
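For anyone who wants to check the arithmetic, here is a minimal Python sketch of the doubling. The numbers are exact; the particular squares printed are just chosen for illustration:

```python
# Chessboard rice story: one grain on square 1, doubling on each subsequent square.

def grains_on_square(square: int) -> int:
    """Grains on a given square (1-indexed): 2^(square - 1)."""
    return 2 ** (square - 1)

def total_grains(up_to_square: int) -> int:
    """Cumulative grains over squares 1..up_to_square: 2^n - 1."""
    return 2 ** up_to_square - 1

for square in (1, 3, 5, 10, 32, 64):
    print(f"Square {square:2d}: {grains_on_square(square):,} grains")

print(f"Total over all 64 squares: {total_grains(64):,} grains")
# Square 64 alone holds 2^63 = 9,223,372,036,854,775,808 grains;
# the running total over the whole board is roughly double that.
```

Note how unremarkable the first ten squares look compared with the last few; that gap between linear intuition and exponential reality is exactly the point Kurzweil keeps making.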
Prediction horizon (PH)
As superintelligent AI – or superintelligent combinations of humans and AI – emerges, the rate of change becomes so rapid that the future becomes utterly unpredictable based on our past knowledge and experience.
Back in the early and mid-1990s on the Extropians email list – the first transhumanist online forum and today the longest-running – we often referred to “the Wall”. This is one way of referring to the idea of a singularity as a prediction horizon. In other words, technological advance and/or the emergence of superhuman intelligence makes the future impossible to predict from our current limited perspective. This version of singularity is closely associated with Vinge and was hinted at earlier by Ulam.
You might argue that this would not be a real singularity because it has happened before. Or that it is a singularity, just the most recent and most dramatic of a series of singularities. (The same applies to the acceleration view.) You could argue, although on a very much longer timescale, that proto-humans and prehistoric humans could not have conceived of the world that humans have created over the last 12,000 years or so.
The essential idea is that more advanced minds would have thoughts and practices that we cannot comprehend or foresee – whether those come from a distinct artificial intelligence or a superhuman intelligence that is our future.
Intelligence explosion (IE)
Highly intelligent systems can design even more intelligent systems. Humans create AI, AI creates superintelligent AI, and so on. Intelligent systems improve themselves in a positive feedback loop – at least until ultimate physical limits are reached. In Moravec’s view, this leads to every gram of matter in the universe being used for computation. (See his 1988 book and his Extropy magazine essay, “Pigs in Cyberspace.”) This view is closest to those of Good and Yudkowsky.
I questioned the assumption of a strong feedback loop in part 1. I raised doubts about the immediate move from an AGI to SAI and therefore to an intelligence explosion. Many commentators wave their hands and see the transition as too obvious to need defending. In his analysis of multiple types of singularity, Anders Sandberg observed: “There is a notable lack of models of how an intelligence explosion could occur.” My view is that current LLM-AIs alone will not lead to an intelligence explosion. LLMs are not the droids you’re looking for.
It seems quite possible that something like an intelligence explosion will happen one day. I doubt it will happen in the next decade – by 2035. Almost all the funding and energy is being funneled into LLM approaches to AI, and too little research focuses on hybrid AI systems and neurosymbolic/cognitive AI. It’s easy to get caught up in the remarkable, even fabulous, progress made by connectionist models. It’s easy to project continued advances up to and beyond the human level. But the currently dominant approach will not get us to AGI.
I am far from alone in this view. Others who see LLMs as insufficient for AGI include Yann LeCun, Peter Voss, Sam Altman, Demis Hassabis, and Gary Marcus. LeCun: “…on the path toward [AGI], an LLM is basically an off-ramp, a distraction, a dead end.” Hassabis: “Deep learning… [is] definitely not enough to solve AI, [not] by a long shot.” Altman: “We need another breakthrough… language models won’t result in AGI.” Another approach will be needed.
This may be the symbolic approach (GOFAI) or, more likely, a hybrid approach that combines the strengths of the transformer architecture with those of the symbolic approach. Such a hybrid is often referred to as neuro-symbolic AI: it integrates neural and symbolic architectures to address the weaknesses of each, aiming for a robust AI capable of reasoning, learning, and cognitive modeling.
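To make the division of labor concrete, here is a minimal, hypothetical Python sketch of the neuro-symbolic pattern. The “neural” component is a hard-coded stand-in for a trained network, and the rules are invented for illustration; nothing here reflects any particular system (including Voss’s INSA):

```python
# Toy neuro-symbolic pipeline: a (stand-in) neural perception layer produces
# confidence-scored symbols, and a symbolic layer applies explicit rules to them.
# The perception stub and the rules are invented for illustration only.

from typing import Dict, List

def neural_perception(image_id: str) -> Dict[str, float]:
    """Stand-in for a trained network: maps raw input to symbol confidences."""
    fake_outputs = {
        "img_001": {"has_wings": 0.94, "has_feathers": 0.91, "barks": 0.02},
        "img_002": {"has_wings": 0.08, "has_feathers": 0.05, "barks": 0.97},
    }
    return fake_outputs.get(image_id, {})

# Symbolic layer: explicit, inspectable rules over the extracted symbols.
RULES = [
    (("has_wings", "has_feathers"), "bird"),
    (("barks",), "dog"),
]

def symbolic_reasoner(symbols: Dict[str, float], threshold: float = 0.5) -> List[str]:
    """Fire any rule whose premises all exceed the confidence threshold."""
    facts = {name for name, conf in symbols.items() if conf >= threshold}
    return [conclusion for premises, conclusion in RULES if set(premises) <= facts]

for image in ("img_001", "img_002"):
    print(image, "->", symbolic_reasoner(neural_perception(image)))
# img_001 -> ['bird']
# img_002 -> ['dog']
```

The point of the pattern is that the learned component handles messy perception while the symbolic component keeps reasoning explicit and inspectable.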
My friend Peter Voss has written on this topic:
“INSA: Integrated Neuro-Symbolic Architecture: The Third Wave of AI, a Path to AGI”
“LLMs are not the Path to AGI”
“The Insanity of Huge Language Models (And a much saner, first-principles path to AGI)”
Gary Marcus also has a lot to say. One recent example:
“How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI”
If or when an intelligence explosion does happen, the pace of advance may seem explosive from a historical point of view but feel more like a surge than a spike as we live through it.
Relations between singularity types
Does it really matter which form of singularity we have in mind? Don’t AC, IE, and PH all have in common the view that machine intelligence will grow very powerful very quickly? The three forms do have some similarities but also important differences. Those differences will be clearer when we use more restrictive definitions.
The original conception by I.J. Good was of an intelligence explosion – one that comes only once AI has achieved superintelligence. Unlike Vinge, he does not explicitly or even implicitly assume that human-level AI/AGI will immediately become superintelligent. Otherwise, Good and Vinge have very similar views.
AC and IE: These may sound very similar. AC projects an accelerating pace of technological progress and resulting economic and social effects. IE expects accelerating technological progress leading to a sudden, discontinuous jump in intelligence. To some extent they represent different emphases. AC puts more emphasis on the outcomes of growing computing power and intelligence. However, clearly an explosion in intelligence would also greatly magnify those outcomes – anything from a posthuman world of limitless possibility to the extinction of the human race.
AC and IE become more distinct and partly inconsistent visions if we take the explosion to imply not just a continued increase in intelligence from the human level but a radical discontinuity. The AC model may well fit the situation over years and decades – as it has since the 1960s – but fail to apply when AI reaches human-level or superhuman intelligence. The smooth exponentials seen in AC models cease to apply in the case of a true and discontinuous intelligence explosion. It is also possible that there could be an explosion followed by a return to an exponential path (or even a slowdown).
[Throughout this discussion, I treat “intelligence” as if it were a unified, definable property. In reality, I don’t believe that and that rejection has important implications for achieving super-intelligence and for the types of outcomes we can expect. But that’s a future discussion.]
AC and PH: These two models look similar in some ways. In the AC view, as exponential acceleration continues, it may reach a point where change is so fast that we, looking ahead from today, can have little idea what the future looks like beyond a certain point. We can say that the prediction horizon is a likely result of accelerating change and an even more likely result of an intelligence explosion.
We will get a different result if we insist on a more restrictive version of PH. On the AC view, we can project a long way into the future – at least in terms of computing power, intelligence, and perhaps some economic outcomes. According to a strict PH model, we cannot understand anything about the future after the point at which superintelligent AI arises. Of course, the AC model could apply up until the advent of superintelligence.
We could also narrow the application of the PH model, making it more compatible with AC. We could regard the prediction horizon as applying to many of the outcomes of superintelligence while holding that the AC model can still say some useful things about the trajectory of the future. For instance, we may have no clue about the kinds of minds that may emerge or what projects they will devote themselves to, but we may insist that scarcity and basic economic laws will still apply. Scarcity will be pushed back drastically but not entirely eliminated, due to competing projects with time urgency and the inability to expand outward faster than light.
IE and PH: These seem to fit well, since the intelligence explosion would presumably be the point at which the prediction horizon appears. If you take PH extremely strictly and generally, you could not talk of a continuing IE after the PH: if you cannot know anything after the horizon, you cannot know whether the explosion – or accelerating change – continues. But, again, that may be an excessively restrictive definition and conception. We might postulate that we can expect intelligence to continue exploding past the PH but that we can know little or nothing more than that.
Although these three main models share some qualities, they also diverge, with the degree of divergence depending on how narrowly and strictly we define them. It remains useful to distinguish between them. AC seems easier to handle than IE or PH. It is important to discuss whether we should expect an IE once we have AGI or SAI, and whether we can do anything to maintain AC while avoiding an IE and a PH.
In part 3 of this series, I will look at past singularities, the wide range of periods over which singularities happen, and speed vs. intelligence acceleration.
More information
Anders Sandberg (2009): “An overview of models of technological singularity.” This essay also appears in The Transhumanist Reader.
David Chalmers (2010): “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17: 7-65.
Eliezer Yudkowsky (2007): “Three Major Singularity Schools,” September 30, 2007.
Another relatively early and interesting take is Damien Broderick’s 2002 book, The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies.
John Smart (1999-2008): “Brief history of intellectual discussion of accelerating change.”
Part 1 of the Singularity Series was “Putting Brakes on the Singularity.” That essay looked at how economic and other non-technical factors will slow down the practical effects of AI, and it argued that we should question the supposedly immediate move from AGI to SAI (superintelligent AI).
In part 3, I will consider past singularities, different paces for singularities, and the difference between intelligence and speed accelerations.
In part 4, I will follow up by offering alternative models of AI-driven progress.
In part 5, I will explain the difference between Singularitarianism and transhumanism.
In part 6, I will consider my 2002 debate/discussion with Ray Kurzweil.
Finally, in part 7, I will compare the Singularity model to explanatory models in economics.