This is the first part of what is likely to become a series on the Singularity idea.
Here, I focus on two points: Economics and other non-technical factors will slow down the practical effects of AI, and we should question the supposedly immediate move from AGI to SAI (superintelligent AI). In part 2, I will follow up by offering alternative models of AI-driven progress. In part 3, I will explain the difference between Singularitarianism and transhumanism. In part 4, I will consider my 2002 debate/discussion with Ray Kurzweil and, finally, I will compare the Singularity model to explanatory models in economics.
I have been thinking about the Singularity idea for many years. The current essay is based on (with considerable new discussion) a piece I wrote in 1998: “Singularity Meets Economy”, published in Extropy Online, October 19, 1998 – part of “A Critical Discussion of Vinge’s Singularity Concept,” also published in The Transhumanist Reader.
I read Vinge starting in 1984, and became familiar with his version of the Singularity. Similar ideas had been around even before that – various conceptions of accelerating technological change and prediction horizons. In the 1980s and early 1990s, I accepted something very much like the Singularity model of the future. It must have been sometime in the mid-1990s that I came to doubt the validity or plausibility of the Singularity.
My concerns about the Singularity idea have not dissipated over the last quarter-century. The intellectual and emotional gravity of the idea continues to pull thinking in certain directions while downplaying other possibilities. Some people have convinced themselves that the AI takeover and Singularity are just around the corner and have acted accordingly. One person cashed in her 401(k) and spent it because what’s the point of saving when the AIs are about to take over and radically change everything and probably destroy humanity? Others stopped saving or decided against having children. With some prominent AI experts predicting AGI in the next 2–5 years, we will likely see more of this.
I’m not the only one concerned about people ruining their lives due to a firm belief in a humanity-destroying AI takeover. Zvi Mowshowitz conveys some similar reasons for questioning the inevitability of a drastically rapid and human-hostile Singularity:[5]
I do not consider imminent transformational AI inevitable in our lifetimes: Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange.
Perhaps before I reach official retirement age, humans will be extinct and the AIs will be chuckling over these cautions. Perhaps. In the meantime, I will continue saving, exercising, and otherwise taking care of my physical and emotional future.
Before I was a philosopher I was in love with economics. Many, perhaps most, Singularity enthusiasts come from computer science and related fields. It may be this difference in background that caused me to be reserved about the concept. It seemed to me that it amounted to extrapolating an exponential curve without regard for real-world barriers imposed by economics, organizational structures, regulations, human psychology, and other factors.
The Brakes
After I wrote the 1998 version of this essay, I spent a decade deeply engaged in reading and reviewing high-level material for senior executives, especially on topics concerning the effects of new technologies on business models and strategies. During those years, I learned a lot about how businesses struggle to integrate new technologies and use them productively. This was a time of much discussion of the “productivity paradox” (or Solow computer paradox): the observation in business process analysis that, even as investment in information technology increases, worker productivity may stagnate or decline. It was based on observations of these effects from the 1970s to the 1990s.
As one example of this: Businesses spent large amounts of money installing enterprise resource planning (ERP) systems, including applications such as customer relationship management (CRM) and supplier relationship management (SRM). It was expected that, by implementing these sophisticated IT systems, productivity would be greatly improved. These expectations were often disappointed, at least in the early days, because the new IT systems did not fit well with organizational structures and incentives. For instance, a CRM system might enable salespeople to share information and leads, but they will not do so if sharing is likely to cost them sales to other salespeople.
Another important example of the delay between the introduction of a powerful new technology and its emergence in productivity statistics is that of electrification. Electrification started in the 1890s but did not measurably boost productivity until around 40 years later, in the 1920s and 1930s. Initially, factories replaced steam engines with electric motors, but productivity lagged because they retained old layouts designed for centralized power. It was only after restructuring factories for decentralized, flexible electric use—along with complementary innovations like assembly lines—that economic benefits emerged, illustrating the time required for organizational adaptation.
Historical examples since 1800 reveal similar patterns of technological delays due to social, economic, and infrastructural barriers, suggesting that such factors could restrain the rate of change toward a technological Singularity. Consider the following:
Steam Engine Adoption (1800s): The steam engine, pioneered by James Watt in the late 18th century, promised to revolutionize industry, but its widespread productivity effect was delayed until the mid-19th century. Early adoption faced challenges like inadequate rail infrastructure, lack of skilled engineers, and resistance from workers reliant on traditional water-powered mills. It took decades of network expansion and workforce retraining for steam to transform transportation and manufacturing, as seen in Britain’s railway boom after 1840.
Internal Combustion Engine and Automobiles (Late 19th to Early 20th Century): Introduced in the 1880s, the internal combustion engine powered the automobile industry, yet mass productivity gains were not evident until the 1920s. Factors included limited road networks, high production costs, and cultural resistance to replacing horses. Henry Ford’s assembly line, perfected by 1913, and the subsequent infrastructure investment (e.g., U.S. highway system) were necessary to unlock economic benefits, delaying the technology’s full effect by over 30 years.
Personal Computing (1970s–1990s): The personal computer’s introduction in the 1970s promised a productivity revolution, but measurable gains were slow. Companies struggled with software compatibility, employee training, and resistance to workflow changes. It wasn’t until the late 1990s, with the internet and standardized software (e.g., Microsoft Office), that productivity statistics reflected widespread benefits, a lag of 20–25 years.
Internet and E-Commerce (1990s–2000s): The internet’s commercialization in the 1990s was hailed as a transformative force, yet productivity growth remained flat through the early 2000s. Businesses faced challenges integrating e-commerce with legacy systems, securing transactions, and shifting consumer behavior. Significant productivity gains only materialized after 2005, with the rise of platforms like Amazon, reflecting a 10–15 year delay.
These examples underscore that the adoption of powerful new technologies often requires not just invention but also extensive complementary changes—new infrastructure, organizational redesign, cultural acceptance, and skill development.
For the Singularity, which hinges on the rapid emergence of superintelligent AI, such delays suggest that even if artificial general intelligence (AGI) is achieved, the transition to a runaway intelligence explosion could be restrained by similar barriers. Misaligned incentives, inadequate global coordination (e.g., differing regulatory frameworks), and the time needed to integrate AI into diverse industries could fragment progress into a series of surges rather than a single, discontinuous leap. Thus, the historical pattern of delayed productivity challenges the Singularity’s assumption of immediate, exponential change.
Smarter implementation can bring gains in productivity more quickly. Very recent studies suggest that LLM AIs are already leading to major gains in productivity in software coding and in business writing.[1] My point was and is not that organizational and other factors will stop productivity growth but that a smoothly rising exponential curve may flatten out because of them. The gains may not be exponentially continuous; they may come in surges followed by periods of consolidation and learning.[2]
I will not even attempt to guess at the multiple by which economic growth might speed up.
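To make the contrast concrete, here is a toy numerical sketch of the difference between smoothly compounding exponential growth and the same kind of gains arriving as a series of adoption surges. It is purely illustrative and not drawn from any of the studies cited here; every rate and wave size is an arbitrary assumption chosen only to show the shape of the two curves, not a forecast.

```python
# Toy illustration: smooth exponential growth vs. growth arriving in surges
# separated by periods of consolidation. All parameters are made-up assumptions
# chosen only to show the qualitative shape of the curves.

import math

YEARS = 40
EXP_RATE = 0.10  # hypothetical 10% annual growth, compounded smoothly


def exponential(year: int) -> float:
    """Output index under uninterrupted exponential growth."""
    return math.exp(EXP_RATE * year)


def surge(year: int) -> float:
    """Output index when gains arrive as logistic adoption waves.

    Each wave (midpoint year, size) only pays off after infrastructure,
    retraining, and reorganization catch up -- hence the S-shaped steps.
    """
    waves = [(10, 1.5), (22, 2.0), (34, 2.5)]  # arbitrary illustrative waves
    index = 1.0
    for midpoint, size in waves:
        index += size / (1.0 + math.exp(-(year - midpoint)))  # logistic S-curve
    return index


for year in range(0, YEARS + 1, 5):
    print(f"year {year:2d}  exponential {exponential(year):7.2f}  surges {surge(year):6.2f}")
```

Both curves rise, but the surge pattern keeps pausing while organizations absorb each wave. That flatter, stepped shape is the picture I have in mind when I say I expect a Surge rather than a Singularity.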
Of course, there is a lot more to be said on the relationship between the technical and economic aspects. Indeed, others have said much, much more![3] Robin Hanson, with expertise in both AI and economics, has expressed some similar reservations.[4]
The Leap
Vernor Vinge presents a dramatic picture of the likely future:
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far… If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened. And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century. [Vinge 1993]
The Singularity idea exerts a powerful intellectual and imaginative attraction. It’s the ultimate technological orgasm—an overwhelming rocket ride into the future. In one dynamic package, the Singularity combines ultimate technological excitement with the essence of Christian apocalyptic and millenarian hopes. Precisely because of this powerful attractive force, the Singularity idea deserves a critical examination. In this short contribution to the discussion, I want to question two assumptions embedded within the Singularity scenario. (These still receive little consideration many years after I wrote this.)
Assumption #1: If we can achieve human level intelligence in AI, then superintelligence will follow quickly and almost automatically.
Assumption #2: Once greater than human intelligence comes into existence, everything will change within hours or days or, at most, a few weeks. All the old rules will cease to apply. [6]
I have doubts about both assumptions. I also doubt that the drastic and discontinuous change in the second assumption necessarily follows from acceptance of the first.
When journalists and others think of someone who talks about AI and the Singularity, there is a good chance they are thinking of Ray Kurzweil. Kurzweil’s timeline for AGI – AI capable of matching human intelligence across a wide range of tasks – was long considered highly optimistic (or pessimistic, if you are worried about AGI). He is quoted as saying “We are going to expand intelligence a millionfold by 2045.” However, today (2025) he seems moderate compared to many others. Ray forecasts AGI by 2029 but the Singularity not until 2045. So he expects a gap of 16 years.
Vernor Vinge and Ray Kurzweil both contributed significantly to the concept of the technological Singularity, but their timelines and reasoning for the transition from artificial general intelligence (AGI) to superintelligence (often equated with the Singularity) differ. Vinge, in his 1993 essay "The Coming Technological Singularity," suggested that superintelligence would emerge "extremely soon" after AGI, implying a near-instantaneous leap due to recursive self-improvement. In contrast, Kurzweil, in works like *The Singularity Is Near* (2005) and subsequent updates, predicts AGI by 2029 but delays the Singularity to 2045, a 16-year gap. What are Kurzweil’s reasons for expecting this delay, based on his published arguments and projections?
Kurzweil justifies this delay with several factors: First, infrastructure for superintelligence (e.g., quantum computing, neural networks) requires over a decade to mature beyond 2029’s AGI foundation. Second, he envisions phased intelligence amplification through human-AI collaboration, spanning 16 years via brain-computer interfaces. Third, societal adaptation—overcoming resistance, regulations, and economic shifts—needs this period for AGI integration. Fourth, exponential growth faces non-linear plateaus and resource limits (e.g., energy), delaying the Singularity’s takeoff. Fifth, he ties 2045 to AI’s convergence with nanotechnology and biotechnology, requiring time for cellular-level enhancements like nanobots. Finally, Kurzweil calibrates his timeline with historical computational doubling trends, adjusting for past adoption lags (e.g., electricity), in contrast to Vinge’s assumption of a rapid “intelligence explosion.” This gradualist view supports a surge model over a sudden leap.
My perspective on this aligns with Ray’s. I suspect the ramp-up to SAI and to a possible Singularity will take even longer due to additional economic, psychological, cultural, and regulatory factors.
The awakening of a superhumanly intelligent computer is only one of several possible initiators of a Singularity recognized by Vinge. Other possibilities include the emergence of superhuman intelligence in computer networks, effective human-computer interfaces, and biotechnologically improved human intelligence. Whichever of these paths to superintelligence is taken, Vinge, like I.J. Good, expects an immediate intelligence explosion leading to a total transformation of the world.
I have doubts about both of the above assumptions. Curiously, the first assumption of an immediate jump from human-level AI to superhuman intelligence seems not to be a major hurdle for most people to whom Vinge has presented this idea. Far more people doubt that human level AI can be achieved. Many years ago, when I told Vernor Vinge that I could easily see AI reaching general human capabilities but did not see that it would then automatically or easily upgrade further, he was surprised and said that this was an objection he had not come across. (That was sometime in the 1990s.)
My own response reverses this: I have no doubt that human level AI (or computer networked intelligence) will be achieved at some point. But to move from this immediately to drastically superintelligent thinkers seems to me doubtful. Granted, once AI reaches an overall human capacity, “weak superhumanity” probably follows easily by simply speeding up information processing. But, as Vinge himself notes, a very fast thinking dog still cannot play chess, solve differential equations, direct a movie, or read one of Vinge’s excellent novels.
Regarding my point that it is questionable to assume that reaching human-level AI is hard but that the further jump to superhuman AI is easy, Singularitarian Supreme Eliezer Yudkowsky wrote: “This was the best objection raised, since it is a question of human-level AI and cognitive science, and therefore answerable.”
Slow down, you move too fast
A superfast human intelligence would still need the cooperation of slower minds. It would still need to conduct experiments and await their results. It would still have a limited imagination and restricted ability to handle long chains of reasoning. If there were only a few of these superfast human intelligences, we would see little difference in the world. If there were millions of them, and they collaborated on scientific projects, technological development, and organizational structures, we would see some impressively swift improvements, but not a radical discontinuity. When I come to the second assumption, I’ll address some factors that will further slow down the impact of their rapid thinking.
Even if superfast human-level thinkers chose to work primarily on augmenting intelligence further (and they may find other pursuits just as interesting), I see no reason to expect them to make instant and major progress. That is, I doubt that “strong superhumanity” will follow automatically or easily. Why should human-level AI make such incredible progress? After all, we already have human-level intelligence in humans, yet human cognitive scientists have not yet pushed up to a higher level of smartness. I see no reason why AIs should do better. Okay, one reason: AIs can be reconfigured far more easily than can a biological brain. A single AI may think much faster than a single human, but humans can do as well by parceling out thinking and research tasks among a community of humans. Without fundamentally better ideas about intelligence, faster thinking will not make a major difference.
I am not questioning the probability of accelerating technological progress. Once superintelligence is achieved it should be easier to develop super-superintelligence, just as it is easier for us to develop superintelligence than it is for any non-human animal to create super-animal (i.e. human) intelligence. All that I am questioning is the assumption that the jump to superintelligence will be easy and immediate. Enormous improvements in intelligence might take years or decades or even centuries rather than weeks or hours. By historical standards this would be rapid indeed but would not constitute a discontinuous Singularity.
I find the second assumption even more doubtful. Even if a leap well beyond human intelligence came about suddenly sometime in the next few decades, I expect the effects on the world to be more gradual than Vinge suggests. Undoubtedly change will accelerate impressively, just as today we see more change economically, socially, and technically in a decade than we would have seen in any decade in the pre-industrial era.
But the view that superintelligence will throw away all the rules and transform the world overnight comes more easily to a computer scientist than to an economist. The whole mathematical notion of a Singularity fits poorly with the workings of the physical world of people, institutions, and economies. My own expectation is that superintelligences will be integrated into a broader economic and social system. Even if superintelligence appears discontinuously, the effects on the world will be continuous. Progress will accelerate even more than we are used to, but not enough to put the curve anywhere near the verticality needed for a Singularity.
No matter how much I look forward to becoming a superintelligence myself (if I survive until then), I don’t think I could change the world single-handedly. A superintelligence, to achieve anything and to alter the world, will need to work with other agents, including humans, corporations, and other machines. While purely ratiocinative advances may be less constrained, the speed and viscosity of the rest of the world will limit physical and organizational changes. Unless full-blown nanotechnology and robotics appear before the superintelligence, physical changes will take time.
For a superintelligence to change the world drastically, it will need plenty of money and the cooperation of others. As the superintelligence becomes integrated into the world economy, it will pull other processes along with it, fractionally speeding up the whole economy. At the same time, the SI will mostly have to work at the pace of those slower but dominant computer-networked organizations.
The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years. Superintelligence may be difficult to achieve. It may come in small steps, rather than in one history-shattering burst. Even a greatly advanced SI won’t make a dramatic difference in the world when compared with billions of augmented humans increasingly integrated with technology and with corporations harnessing human minds linked together internally by future versions of today’s enterprise resource planning and supply chain management software, and linked externally by extranets, smart interfaces to the Net, and intelligent agents.
How fast things change with the advent of greater than human intelligence depends strongly on two things: The number of superintelligences at work, and the extent of their outperformance. A lone superintelligence, or even a few, would not accelerate overall economic and technological development all that much. If superintelligence results from a better integration of human and machine (the scenario I find quite likely), then it could quickly become widespread and change would be more rapid.
But “more rapid” does not constitute a Singularity. Worldwide changes will be slowed by the stickiness of economic forces and institutions. We have already seen a clear example of this: Computers have been widely used in business for decades, yet only in the last few years have we begun to see apparent productivity improvements as corporate processes are reengineered to integrate the new abilities into existing structures.
In conclusion, I find the Singularity idea appealing and a wonderful plot device, but I doubt it describes our likely future. I expect a Surge, not a Singularity. But in case I’m wrong, I’ll tighten my seatbelt, keep taking the smart drugs, and treat all computers with the greatest of respect. I’m their friend!
1. In a detailed and brilliant essay, Stephen Wolfram explains how GPT works (and how it does not work) and why, despite its surprising skills in language, it may well not extend its capabilities to other areas of cognition. [Wolfram 2023]
2. I also consider the effect of economic factors in “Singularity Meets Economy”, in Extropy Online, October 19, 1998 – part of “A Critical Discussion of Vinge’s Singularity Concept.”
3. Robin Hanson, “Is a Singularity Just Around the Corner? What it takes to get explosive economic growth,” Journal of Evolution and Technology, Vol. 2, June 1998. Robin Hanson, “Economics of the Singularity,” IEEE Spectrum, June 2008. Eliezer Yudkowsky, “Intelligence Explosion Microeconomics,” Technical report 2013-1, Berkeley, CA: Machine Intelligence Research Institute.
4. See Robin’s “Dreams of Autarky” included in the lengthy debate between Robin and Eliezer Yudkowsky in The Hanson-Yudkowsky AI-Foom Debate eBook. Machine Intelligence Research Institute.
5. Zvi Mowshowitz, “Practical Advice for the Worried,” Don’t Worry About the Vase, March 1, 2023.
6. For a lengthy debate on take-off, see Hanson & Yudkowsky 2013.
NEXT: Singularity, Surge, and AGI