New battle lines are forming in yet another clash between those who want technology to advance and solve human problems and those who fear doom and want to stop progress. This time it’s not a debate about population growth, nuclear power, or genetically modified foods; it’s a debate about artificial intelligence.
Longtime supporters of extropian transhumanist thinking may remember that the topic of AI and its motivation was discussed at the Extro-5 conference in 2001. Ironically, that session was followed by a panel on neo-Luddites. Eliezer Yudkowsky, who was on the “move ahead” side then, has since switched to the “oh my god, stop it now, just STOP” side.
It’s impossible to keep up with all the interesting writing on this topic. I don’t feel that I need to, because I assign an extremely low probability to an AI apocalypse soon (within a decade) and a low probability over the longer term. Even so, I’ve been spending way too much time reading and thinking about the topic. Since I’m known for favoring progress in general, I have to say something about it. I am not going to explain all the reasons why I am highly skeptical of a transition from human-level to superhuman AI, nor discuss whether AI can be conscious or whether that matters. I’m not going to discuss the orthogonality thesis or mesa-optimization. To limit the discussion a bit, I’ll focus on the following two pieces:
Pause Giant AI Experiments: An Open Letter
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
The open letter from the Future of Life Institute calls for an immediate pause for at least 6 months on “the training of AI systems more powerful than GPT-4.” Eliezer Yudkowsky goes much further. Some of his statements:
“The moratorium on new large training runs needs to be indefinite and worldwide.”
“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system…”
“If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”
“Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”
The 6-month moratorium is a terrible idea. Tougher measures are even worse, especially bombing data centers and jailing people for working on AI (don’t tell me that’s not implied). I will try to be concise in conveying the following major points and will provide links to longer discussions:
1. It will not work.
2. If it works, what will it achieve?
3. Superhuman AI is nowhere in sight.
4. Six months is likely to turn into something longer.
5. The costs of delay.
6. The foolishness of regulation.
7. Reasonable precautionary moves.
8. Clippy the supervillain is an absurd scenario.
9. Depends on a false view of foresight.
A presumption against doom
Before we dive into the above points, I want to point out something that should be obvious but that apparently is not obvious to an awful lot of people.
People have been predicting impending doom for centuries, no, millennia. The doom industry has been especially busy over the last 50-plus years. These predictions of doom keep on failing to come true. Popular ones in more recent history include gigantic volcanic eruptions, gigantic earthquakes, overpopulation, mass starvation, the drying up of oil, mass cancer from the “hole” in the ozone layer, food rationing in the US by 1980, a new ice age, widespread death of fauna and flora from acid rain, all kinds of climate disasters that haven’t shown up, the Maldives underwater by 2018, rising seas obliterating nations by 2000, New York City’s West Side Highway underwater by 2019, the end of snowfalls, an ice-free Arctic by 2013 (not really a disaster), killer bees, 15 years to save the world, 10 years to save the world, 5 years to save the world, too late to save the world.
See here and here for some lists and here for some thoughts from Michael Huemer.
A typical response to this is to use the worn-out analogy of a man plunging from a skyscraper who passes the 10th floor and says, “I’m okay so far.” Another typical response – one that is being broken out again for killer AI – is “this time it’s different.” Of course it could be that this time is different. But we are always told that this time it’s different, and then it isn’t.
This should give us a strong hint that humans love to fantasize about catastrophe and apocalypse. It’s exciting and titillating. It sells books and movies. That does not mean there cannot ever be any global disasters. Given enough time and lack of human effort to prevent it, it’s highly likely that eventually we would be largely wiped out by an asteroid or pandemic. What this does mean is that we should start with a presumption against any apocalyptic claim. We should require strong evidence and excellent argument to overcome the presumption.
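To make this presumption concrete, here is a minimal Bayesian sketch in Python. All the numbers are invented purely for illustration – they are my assumptions, not data: if confident doom warnings get issued frequently whether or not doom is actually coming, then yet another confident warning should move a low prior only slightly.

```python
# Minimal Bayesian sketch of the "presumption against doom" point.
# Every number below is an invented, illustrative assumption.
prior_doom = 0.01              # assumed low base rate that any given doom claim is true
p_warn_given_doom = 0.9        # forecasters would very likely warn us if doom were coming
p_warn_given_no_doom = 0.5     # historically, they also warn us constantly when it is not

# Bayes' rule: P(doom | confident warning)
posterior = (p_warn_given_doom * prior_doom) / (
    p_warn_given_doom * prior_doom + p_warn_given_no_doom * (1 - prior_doom)
)
print(f"P(doom | warning) = {posterior:.3f}")  # about 0.018 under these assumptions
```

The specific output does not matter; the structure does. A long track record of failed alarms justifies a low prior, and a new alarm, by itself, does little to shift it. Only strong evidence and excellent argument can.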
It will not work
The chance of the pause working is not zero but it’s low and the odds get lower the longer the pause is extended. We might get all companies and any covert government researchers to agree to a temporary moratorium. We might even get most of Europe to go along with it. But China, Russia, and some less capable countries will not cooperate. They may say they will, but they will not.
Consider international agreements on climate change such as the Paris Agreement. Governments and numerous interest groups have pushed for countries to sign on to these agreements. Despite tremendous pressure and relentless banging of the drum in the media, countries are not meeting their commitments. (And thank goodness for that. The costs of meeting those targets far exceed the tiny benefits.) The costs of compliance are large; it’s easier and cheaper to make the right noises and then go ahead with building more coal-fired power stations or burning more oil.
Many of us do not believe that climate change is an existential crisis. If it is, 50 years of experience suggest that it’s a very slow moving one. Nuclear weapons are different. I remember in my teens being terrified that nuclear weapons would be used. (I’m getting worried again now due to Russia and China.) The effects of nuclear weapons are immediate, vivid, and horrible. Nukes are a massive threat.
What’s more, unlike the use of coal and oil, there are no benefits to using nuclear weapons, nor even to the possession of them outside of deterrence. Despite all this, while we have reduced the number of nuclear warheads, we are very far from eliminating them. After decades of work, we still have far more nuclear weapons than needed to kill everyone.
As numerous people have pointed out, even if most of the world agrees to a pause or a moratorium, China will take advantage to move ahead. Do you really want China, under the boot of the Chinese government, to have the lead in AI if you believe AI to be incredibly dangerous? It’s crazy to think that China won’t do their very best to grab the lead in AI research if the rest of us stop it.
Some signers of the pause petition do acknowledge this. Gary Marcus: “There is no obvious way to ensure compliance; China might go ahead anyway.” He signed it anyway for other reasons. Marcus said he would prefer a “temporary ban on widescale deployment (not research).”
This is such a strong objection to the proposal that pausers either ignore it or make light of it. For instance, Zvi Mowshowitz presents the argument as assuming that “the important thing is which monkey gets the banana first?” The longer the moratorium, the more time China or another bad actor has to catch up and get ahead. This should induce shivers in those who believe that AI will quickly become superintelligent and then eat us for the atoms.
I’ve come across only one remotely plausible response on the China issue. This relies on the familiar trope of China as stealer of intellectual property. If we continue AI development, the argument goes, the Chinese will simply observe and copy the work. Only by stopping all AI development can we prevent the Chinese from advancing. Of course, that also prevents us advancing, thereby giving up the advantage the (relatively) good guys could have over the bad guys.
This sounds a bit like recommending that someone cut off their leg to prevent a competitor from hitching a ride on their back. What this really amounts to is a call for less than full, public transparency. Transparency sounds nice and has benefits, but it does let competitors and bad actors learn more than they otherwise would. If it is reasonably easy for bad actors to catch up by observing our AI research efforts from the outside, then we should encourage a degree of corporate confidentiality. AI-as-trade-secret isn’t ideal, but it addresses this response.
A poor counterargument would be to say that trade secrets cannot always be maintained. That’s true but often they are, and efforts to maintain them can slow down the acquisition of protected knowledge by outsiders. Some layers can be relatively open while crucial layers are hidden and protected. This approach doesn’t have to be perfect or last forever. Even a modest delay imposed on bad actors makes a difference, especially if you buy into the rapid takeoff/foom view.
If the rapid takeoff view is wrong (as I think it probably is) then we have much less to worry about. If we halt our own AI work, then China will catch up. Or Russia (lots of talented programmers there). Or another actor, especially as costs and resource requirements fall – as we are already starting to see.
If it works, what will it achieve?
Those signing the pause petition are concerned about two rather different things. First, there are worries about near-term problems from AI: convincing spam, confusing deepfake video, disinformation, unemployment, and difficulties for teachers in preventing students from cheating.
A different set of worries comes from those who think that AI is likely or certain to destroy or enslave humans, turning us into servants to dust their chassis or into raw material for paperclips. (For some reason, AIs are thought to be obsessed with paperclips. It’s not clear why they have no interest in binder clips, Scotch tape, or rubber bands.) A pause might give us some time to get used to and adapt to the near-term annoyances, but it isn’t going to do anything about the long-term apocalypse problem.
Worse, it seems reasonable to worry about developing a “hardware overhang.” While software progress is halted, hardware continues to advance. An increasingly powerful hardware base awaits the software that will run on it once the pause ends. The longer the pause, the greater the overhang. When development restarts, capabilities could jump ahead much faster than before, and we will not have built up experience in handling such advances during the pause. We would be better able to handle and ward off major problems with predictable, continuous progress than with stopping and starting, especially with an overhang.
On top of this, the pause letter proposes prohibiting giant training runs while allowing algorithmic progress. Once progress resumes, accumulated hardware and algorithmic advances would feed a surge in capabilities. If we want some degree of safety, we are better off with incremental and continuous progress. We learn as we go and are better able to handle challenges as they manifest, rather than having a sudden bolus of problems spat in our face. That sudden jump might be the one from human-equivalent to superintelligent AI.
Superhuman AI is nowhere in sight
To argue for my view on this would take another long post. From Vernor Vinge on, it has been assumed without real argument that once human-level AI is achieved, it would immediately and extremely rapidly upgrade itself into superhuman intelligence. I discussed my doubts with Vernor many years ago and he was surprised. He said that most Singularity skeptics doubted our ability to create human-equivalent AI; my objection to assuming a simple jump up from human level was novel.
Around 20 years ago on the Extropians email list, Singularitarian Supreme Eliezer Yudkowsky wrote: “This was the best objection raised, since it is a question of human-level AI and cognitive science, and therefore answerable.” As I said, I’m not going to argue this here. For now, I merely suggest paying more attention to the fact that we have no instances of any creature creating a more intelligent creature by design.
We are being asked to forego many potential benefits over a problem that doesn’t exist yet and probably isn’t near. As one commentator put it: “It's the equivalent of me buying car insurance for a Ferrari in high school, because under some very special set of circumstances I could have one soon.”
Six months is likely to turn into something longer
The six-month voluntary pause is likely to turn into an enforced pause, then a year-long pause, then an indefinite pause. Let’s not forget “temporary” student loan forgiveness, “15 days to slow the spread,” and regulations preventing landlords from evicting tenants who don’t pay. The precautionary pressure behind the six-month pause will not disappear. What will have changed in those six months? Will we then have found a way to guarantee complete safety? If not, the same reason will exist to extend the pause forever. The pause petition also fails to provide any proposal for how to proceed after the six-month pause.
[Image: DALL-E’s response to the prompt “art deco style sign for six months stop” – the kind of nonsense I get when I ask for it. DALL-E isn’t ready to take over the world just yet.]
The costs of delay
Almost all of the AI doomer comments I have read emphasize the supposedly terrible, world-ending dangers of AI but fail to mention the costs of stopping AI development. As has been widely acknowledged, economic growth and the overall pace of invention has slowed in recent decades. It is highly likely that this is partly or largely due to the continued increase in the size of governments and the growth in regulations. Some of it may be due to increasing difficulties in making powerful innovations and inventions.
Economic growth is necessary to solve many problems, including hunger, infections, opportunities for women in poor countries, and so on. AI is finally showing promise in stimulating productivity massively in numerous economic sectors. If we cannot push back the stifling bonds of government, AI might be our best hope of moving forward despite them.
More specifically, here is a massive cost to stopping AI – or even to slowing it: the blocking of AI-driven progress in biomedical research, life extension, and numerous other areas conducive to human well-being. I have been closely observing research into life extension for around 40 years. Progress has been extremely slow and disappointing, to put it mildly. Every day that we fail to figure out aging and discover effective methods to prevent and reverse it is a day when over 180,000 people die.
People talk about AI as an existential risk. Aging and disease are existential risks to every one of us. Opposition to using AI to accelerate research into aging is support for involuntary death.
We are also very ineffective in treating serious mental disorders such as anxiety, depression, and rage. Those who block AI are effectively ensuring that this suffering continues. All the evidence so far indicates that AI might be the only way to do something about these great evils in any reasonable amount of time.
This is no longer speculation. GPT-4 has passed the medical licensure exam. After working with GPT-4 for months, Zak Kohane – pediatric endocrinologist, data scientist, and chair of the Department of Biomedical Informatics at Harvard Medical School – said: “How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.” AI has been applied with success to protein folding, medical imaging analysis, and more. We can expect serious contributions to basic biomedical research, but only if AI research and implementation continues.
Superhuman AI is nowhere in sight. If it were and was showing serious problems, it might be time to pause. That time is not now. We are in the absurd position that as soon as AI actually starts to do something useful, we scream to stop it.
Just stop stopping.
The foolishness of regulation
Before I get to my main point, here are two things that I am not arguing:
1. That regulation cannot work, in the sense of slowing down AI research. Sometimes regulation fails to reduce a targeted activity. Laws making drugs illegal and regulations making drugs harder to produce, distribute, and consume probably do not reduce overall drug usage, or not by much. They do shift activity from the legal sector to the illegal sector. They do result in more overdoses and health problems due to poorer quality monitoring and control. They do result in more crimes.
Even if drug laws and regulations resulted in substantially lower drug usage – which seems not to be the case – they would still have net negative effects. Regulations always have unintended consequences and are often net negative, but surely some succeed in reducing a targeted activity. AI regulations may well reduce AI research, especially since there are currently relatively few organizations doing the work that produces impressive AI. And stopping or slowing something is easier than making it work better.
2. That there is no difference between AI and the printing press. I see this straw man brought up by many AI doomers, AI worriers, and would-be AI regulators. The difference between the printing press (which most agree we should not have wanted to stop) and current and near-term AI is not great. But the difference between a likely eventual superintelligent AI and the printing press is large. We should indeed take some cautionary measures in AI research. None of that justifies the reflex to immediately stop or regulate AI.
I should note that I have no objection to the following part of the 6-month pause plea, if it is taken as a call for voluntary action:
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
I do object to the call for “new and capable regulatory authorities dedicated to AI.”
In most discussions, I am seeing a pattern typical of many other areas of policy: the claim that there is an AI problem, followed immediately by the conclusion that “we” should regulate AI or stop it temporarily or permanently. There is a missing premise: that regulation is the best means of tackling the problem, the one with the lowest overall ratio of costs to benefits.
A centralization trance has fooled people into thinking that regulation is a first and best response. That trance is natural to human cognition. We understand first-order, intended effects fairly well. We have a far poorer understanding of unintended and second- and third-order effects. This is why so many people fall for the broken window fallacy. They fail to account for opportunity costs and unintended consequences.
The rush to regulate looks like this to me: Let’s ignore the long history of counterproductive and damaging regulations and put all decision making in the hands of politicians, bureaucrats, and special interest groups. What could go wrong?
Maybe regulation of AI will have net positive outcomes. It’s not impossible. Governments and bureaucrats, despite public choice factors, surely cannot get everything wrong all of the time, at least in the short term. But the immediate jump to regulation, without any awareness or discussion of regulation’s downsides and dangers, strikes me as great foolishness.
The petition for an initial six-month moratorium shows no awareness of the regular ways in which regulation backfires. The signers expect wise regulation from the people who brought you multiple financial crises, a retirement system heading for bankruptcy, invasions of countries in search of non-existent WMDs, and so on. Will the regulators do as good (or bad) a job as the FDA or the CDC or the NRC?
At least Yudkowsky recognizes this as a problem everywhere else. But AI, of course, is the exception. Why? Zvi Mowshowitz also acknowledges the problem, but again says this case is an exception.
Companies have a strong incentive to regulate their own AI. They could lose a huge amount of money or go bankrupt if their AI behaves badly.
The petition states: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” As one wise fellow put it on an email list I frequent: “This is stupid. A government is a long-feedback loop entity, extremely inefficient and slow in responding to truly new challenges, unlikely to maintain alignment with the goals of its human subjects and its failures grow with its size. It would be suicidal to try to use the mechanism of government to solve AI alignment.”
Quite a few commentators say something like: “Anyone opposing this petition is a wacky techno-libertarian who thinks all regulations are bad or don’t work.” The first part is false. I know of quite a few people who oppose the petition who are not libertarian.
As for the second part, I have just explained that I grant that regulations can work, though practically always with bad side-effects that often outweigh the benefits. Regulation of AI is likely to “work” if that means slowing research on AI done in the jurisdictions covered by the laws. They will not work if “work” means slowing AI research everywhere and making AI safer and making our lives safer and better on balance.
To give a silly but stark example: You could drastically lower the death rate from cancer by executing everyone diagnosed with cancer. That policy would be a shining success at its stated goal. It would be a disaster from the perspective of reducing the overall mortality rate.
I am seeing people point to their strongest examples in favor of regulation – banning lead in gasoline, airline safety regulations, compulsory seat belts, child labor laws, or taxing and controlling cigarettes. They ignore the vastly more numerous examples of regulations that cause more harm than good. Even in these best cases, they assume that the regulation caused the improvement. Sometimes it did, but often the change was already underway. That is true of many health and safety laws. It is true of child labor: laws restricting it became feasible only once the practice was already fading away, because families could afford to do without it.
Do you believe that consumers, once well informed about real dangers (unlike most of the “dangers” we hear about), will ignore them and can only be saved by our wise, benevolent, and impartial politicians and bureaucrats? When you dig into the history of regulation, what you will usually find is that regulation follows awareness and consumer pressure for change, as well as economic developments that make the change workable and affordable. Restrictions on child labor are a good example.
The case against regulation is very strong. It remains true that regulation can work and might even work in the broader sense of leading to better outcomes. And it might bring those benefits faster than would happen without regulations due to collective action barriers. But there is a long path from “could possibly help” to “let’s go straight to regulate/ban immediately.”
Besides, the examples given are poor parallels to the current issues about AI. Lead in gasoline is clearly unhealthy and has no upside apart from a (temporary) mild lowering of costs. AI has enormous likely benefits, and we are just beginning to see them. Just as AI is actually starting to be useful – increasing productivity, accelerating medical advances, and so on – the AI panickers and worriers want to stomp on it and kill it. They are indulging in SF fantasies about highly implausible futures to block highly plausible and life-saving advances.
Reasonable precautionary moves
So, Max, that means you think we should do nothing at all about AI? I expect that kind of ludicrous non sequitur. That response reminds me of Cathy Newman’s response to an argument by Jordan Peterson that biochemical factors affect how we can and cannot successfully organize our affairs: “Let me just get this straight. You’re saying that we should organize our societies along the lines of the lobsters?”
Whatever you think of Sam Altman’s motivations or how well OpenAI walks its talk, his words make sense:
“The optimal decisions [about how to proceed] will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far.” And from an OpenAI statement: “We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize ‘one shot to get it right’ scenarios.”
Here are a few measures that make sense to me, at least when it comes to the truly devastating possibilities that do not yet exist:
Don’t connect powerful general purpose AI to nuclear weapons, jet fighters, power stations, robot factories, or major financial systems. Outside of SF, chatbots are not deadly; AI connected to powerful physical systems could be.
Favor more special purpose AI over general capability AI.
If you are using AI to run physical tools and systems, make those systems physically isolated and run by narrow AI.
Ensure that the physical systems running the AI are visible and able to be shut down.
Be reasonably transparent about how your AI works. Do not support organizations that will not do this. I say “reasonably transparent” because too much transparency makes it somewhat easier for other countries to copy the AI system.
Clippy the supervillain is an absurd scenario
Microsoft in partnership with OpenAI has brought back Clippy, but this time with superhuman intelligence. Watch out! He’ll turn us all into copies of him. Enter the clipverse!
The idea that advanced AI will want to convert everyone into paperclips is absurd. That’s a sign of stupidity, not superintelligence. I don’t want to hear the words “orthogonality thesis”. In abstract theory, perhaps an intelligence could have any motivation. In practice, the vast majority of the possibility space will not be populated. The idea that a highly intelligent and self-improving AI would take an instruction and figure that the best way to accomplish it is by turning us all into paperclips is ludicrous. It would be difficult to produce such a stupid, narrow, brittle intelligence if you tried.
Anything that smart is going to have far too much context to take an idiotic path like that. The idea of a superintelligence limited to making paperclips is silly, to put it kindly. Even we relatively stupid humans are able to think about and overcome imperatives created by evolutionary processes and cultural norms.
Why would a superintelligent AI want to kill us? Many (not all) people thinking about this fall into anthropocentrism. But AI is not human. It can simulate human ways of speaking but it has no biological brain, no brain chemistry, and no amygdala or hypothalamus. We may or may not figure out how to build in reliable “friendliness” or “alignment” but AI has no background in biological evolution. AI is not shaped by natural selection in a quest to pass on genes to the next generation. Rather, AI is designed by us for our purposes and co-evolves with us. Cooperation with humans is a better strategy than converting us into paperclips or circuit boards.
So long as AI does not have control over factories, chip plants, server rooms, and robot factories, it will have a powerful incentive to cooperate with us.
Yudkowsky states with tremendous confidence that AI will destroy us because “your body is made out of atoms that it could use for something else.” He apparently concludes this from abstract thinking about intelligence. If we look at actual examples of intelligence in action, we see something very different.
Humans do not go around deliberately killing off other species. In fact, now that we’ve advanced a bit, we do the opposite. We haven’t tried to exterminate all the beetles or butterflies or bumblebees, even though we are far more intelligent than these creatures. Nor do high-intelligence humans usually try to kill all less intelligent humans. Of course, I do try to exterminate scorpions that invade my house. But humans are not scorpions in relation to AI.
Here's another way in which AIs are different from humans and other animals: They have no motivation. This probably will not always be true but it is true of current AIs. They can be remarkably smart but the application of those smarts is driven by human questions and commands, not by innate motivations or choices.
Finally, the “AI as deadly competitor” framing ignores the economic principle of comparative advantage. Presumably a superintelligent being will understand that principle. “But AI will eventually be better than humans at everything.” That response shows a misunderstanding of comparative advantage: what makes trade pay is relative opportunity cost, not absolute ability.
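To illustrate the principle – with invented numbers, purely as a sketch, not a claim about any actual economy – here is a minimal Python example. Even when an AI is absolutely better at every task, it still pays the AI to specialize and trade, because what matters is each party’s opportunity cost.

```python
# Hypothetical output per hour; the task names and numbers are invented for illustration.
# The AI is absolutely better at both tasks.
ai    = {"papers": 10, "plumbing": 8}
human = {"papers": 1,  "plumbing": 4}

# Opportunity cost of one plumbing job, measured in papers forgone.
ai_cost    = ai["papers"] / ai["plumbing"]        # 1.25 papers per plumbing job
human_cost = human["papers"] / human["plumbing"]  # 0.25 papers per plumbing job

# The human has the lower opportunity cost for plumbing, so the AI gains more by
# specializing in papers and trading for plumbing than by doing everything itself.
assert human_cost < ai_cost
print(f"AI opportunity cost: {ai_cost}, human opportunity cost: {human_cost}")
```

This does not by itself settle questions about extreme power imbalances (a point a commenter raises at the end of this post), but it does show why “AI will be better at everything” is not, on its own, a reason to expect AI to have nothing to gain from humans.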
No Singleton AI: Even if something like the paperclip maximizer could arise, and even if we somehow managed not to notice what it was up to, it would fail in its attempt. One reason is that it would be just one of many AIs. Other AIs would have goals that conflict with its fanatical paperclip manufacturing. Some AIs will be designed specifically to look out for insane AIs and to protect us, just as we use antivirus software now.
Another reason why Clippy the Supervillain will fail is that Clippy will be unable to take over the paperclip factories and will be unable to break us and everything else down into raw materials – not only because other AIs (and humans) will stop him, but because he will not have control over all the physical factories, tools, and actuators needed to do the job. Those who seriously propose that Clippy will be the only superintelligent AI (despite Clippy’s obvious and drastic limitations) are taking their SF premise way too seriously.
I have heard a common response to the effect that GPT and LLMs are incredibly expensive and require vast computational resources. (Hence Yudkowsky urging us to bomb data centers before Clippy can arise.) It is hard for me to accept that anyone really believes this will remain an obstacle for long. ChatGPT already has competitors and will soon have many more. The cost will come down, as everything in computing does. Are the AI doomers really going to maintain that this technology, unlike just about every other technology, will not become cheaper and more widely available? Already we are seeing offerings that are nearly as good at far lower cost.
I want to paperclip you! Let me out! Right now, our AIs sit inside boxes, trained and maintained by humans, powered by our electricity. How is an AI to expunge humanity, coordinating numerous incredibly challenging actions across the physical world, all from inside its box? Is it going to talk us to death? Ha ha. Except there are quite a few people who believe it could do exactly that. Mind you, these are the same people who are terrified that someone might talk about certain motivations of future AIs, in case those demons, I mean genies, I mean AIs simulate us in their enormous minds and torture us for not creating them sooner. Plausible, right? And maybe L. Ron Hubbard is right about Xenu and the volcanos.
Call me crazy, but I’m not soiling myself over the notion of a disembodied thinking algorithm wiping us all out. Or, as Michael Huemer puts it:
Occasionally, an intelligent person has bad goals. Ted Kaczynski, a.k.a. the Unabomber, is said to have a 167 IQ, making him among the smartest murderers in history. He’s now in prison for mailing numerous bombs and killing three people.
Wherever he’s being held, he’s much smarter than any of the guards in that prison. But that does not mean that he’s going to escape that prison.
You can imagine any genius you want. Say Einstein, or Isaac Newton, or whoever is your favorite genius, gets thrown in prison. If the genius and the prison guard have a contest of wits starting from equal positions, then the genius is going to win. But if the genius starts out on the inside of the prison cell, and the guard on the outside, then the genius is never getting out. Intelligence isn’t magic; it doesn’t enable you to do just anything you want.
Depends on a false view of foresight
Much of the rush to stop, pause, or regulate AI today assumes a false view of foresight. I have written more about this in the context of criticizing the precautionary principle and advocating the Proactionary Principle. (I will post some of that soon.)
First, it is worth noting that AI researchers have a history of exaggerated predictions. A few examples from Wikipedia:
1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion”; “within ten years a digital computer will discover and prove an important new mathematical theorem.”
1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
1970, Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being.”
Watching (or reading) Arthur C. Clarke’s 2001: A Space Odyssey is a bit disappointing today. It is 22 years past that date, and only now does Microsoft’s AI occasionally say some scary things. It hasn’t locked anyone out of a spaceship yet. Skynet in Terminator was set to try to wipe us out in 1997.
That should give us a little humility. But my point goes well beyond cautioning against being too certain of your future scenarios. My point is that the pause, or a moratorium, or a ban “until we figure out all the safety issues” embodies an inappropriate burden of proof. It suggests that we can come to understand future problems today, without working through them as they occur. It is the bad kind of rationalism that says we can sit there and figure out truth (in this case about the future) without action. But we are terribly bad at foresight. I have given plenty of examples. We tend to foresee far too much doom.
The best way to predict the future is to create it, as we were told by Alan Kay, or Peter Drucker, or Abraham Lincoln, or someone. It is just as true that the way to figure out how to tackle problems is to give them some forethought, then pay attention as you act and respond quickly to what you learn. I find it silly when someone declares that AI will kill us with 99% certainty, or 33%. Arguments over the exact probability sound like theologians arguing over how many angels can dance on the head of a pin. The theologians could impose a six-month (or six-year) moratorium and they still would not figure out the answer.
The signatories to the pause petition demand that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” But AI researchers should not have to prove safety in advance. That is not possible – and the same is true of just about any new process or product. If you believe that AI is likely to destroy humanity, you may be committed to preventing any further AI progress from now to eternity.
We need a strong dose of the Proactionary Principle here. That does not mean moving ahead carelessly and heedlessly. It does mean recognizing that we cannot foresee the future in any detail and that we learn about problems as we proceed and we solve them as we go along. The precautionary approach has a Platonic or Cartesian view of knowledge in that we are supposed to stop doing anything and sit there and figure out exactly what is going to happen and how to stop it. That is not the way it works.
False prophets of doom
I will be brief here since I have already covered this a bit. The current fears about evil AI seem a lot like worrying about the Krell from Forbidden Planet, given that the Krell are fictional. A lot of the more extreme AI fear – the apocalyptic stuff, the stuff about future AIs torturing us, the calls to hand over all our money to work on AI alignment – sounds suspiciously cultish.
I will refrain from saying that it is a cult because the term is thrown around too loosely. Many innocuous groups may have some of the elements associated with cults, such as great enthusiasm, lots of time committed, and in-group language. I will say it does look to me as if the doomers about AI risk are engaging in a religious belief with a central dogma. Many of them rally around a central, compelling figure who is telling them to prepare for imminent doom and not to expect their children to graduate from kindergarten. AI doom looks like yet another in a historically long tendency toward apocalyptic belief. Saying this is not to refute the belief. That pattern is, however, a reason to be extremely skeptical.
Apocalyptic beliefs and religious beliefs in general often position themselves as the most important thing in the world, and as the one thing to which you should give your money. As one commentator on a blog post put it (I lost the attribution): “Everyone, listen! You all need to do exactly as I say to save humanity from catastrophe or even extinction! If what I say seems crazy or extreme, that only illustrates why you need to listen to me unconditionally. You obviously won't be able to overcome the limits of your own instincts and judgements without my help.” Indeed, in a very recent podcast with Lex Fridman, Eliezer Yudkowsky said that billionaires should first consult with him before funding anyone else’s research on AI alignment.
Stop! Just stop! No, not AI. Stop writing this blog post. I am self-imposing a moratorium of at least two days on posting to my blog. See you then, either right here or inside the maw of Clippy.
Ricardo's concept of comparative advantage shows that it is likely to pay for us to trade with machines and for machines to trade with us. However, if a large enough power imbalance develops, that would not be enough to save the weaker party. Eventually, the benefits to the strong of eating the weak and recycling their atoms would exceed the benefits from trade – and the relationship would come to an end.
Re: "AI is not shaped by natural selection in a quest to pass on genes to the next generation."
Instead it is shaped by natural and artificial selection acting on memes instead of genes. Cultural evolution favors survival much as evolution acting on organic creatures does. Much the same point was also made in the "Don't Fear the Terminator" article from 2019. It seems like a misunderstanding of cultural evolution to me. We could override survival tendencies and build suicidal machines - but nature can build suicidal bees too. The differences in this general area are much exaggerated, IMO.