If Bill Joy had had his way, important areas of scientific research and technological progress would have been shackled more than two decades ago. His dire warnings sound very much like the wailing we are hearing from some parties today over AI progress. In another 20 years, we will see that the AI doomsters were just as wrong as Joy.
I wrote a rebuttal to Joy back in 2000, “Embrace, Don’t Relinquish, the Future,” which was published on Ray Kurzweil’s website and in a 2006 book. Here is that essay, preceded by a new introduction from my forthcoming collection of essays.
Embrace, Don’t Relinquish, the Future
By Max More, 2001
The publication of Bill Joy’s “Why the Future Doesn’t Need Us” in Wired magazine in 2000 stirred up people on both the pro and con sides of technological progress. Joy’s piece received attention not only because people are always attracted more to messages of fear and doom but also because of his position as Chief Scientist at Sun Microsystems, a company built on strong technological progress, and as a venture capitalist.
Joy proclaimed the potentially deadly threat to humanity posed by three groups of technologies: genetic engineering, nanotechnology, and robotics. (Robotics has since been replaced in these debates by artificial intelligence, since robots without AI are not terribly frightening.) Taking the most optimistic technologists at their word, Joy feared that smart robots would replace or dominate humans in the near future. He especially feared the self-replicating powers of nanotechnology and genetically engineered plagues and pathogens: “It is most of all the power of destructive self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause.”
His concerns echoed those of Theodore Kaczynski, the “Unabomber,” as Joy himself noted. He feared that, for the first time, accidents and abuses of GNR technologies “are widely within the reach of individuals or small groups.” While his concerns are reasonable to a degree, the problem comes from his response, his proposed solution of relinquishment: rather than entering an arms race of good versus bad applications of GNR technologies, he argued, we should abandon them.
The following essay was one of the first critiques of Joy, alongside commentaries by Ray Kurzweil, John Seely Brown and Paul Duguid, and John McGinnis. Our critiques focused not on his expectations of technological progress but on his assumptions, and especially on the futility and danger of his policy of relinquishment.
Joy’s position has been described as “neo-Luddite,” but that is unfair to the Luddites. (Despite this reputation, he is a venture capitalist who invests in GNR technology companies.) The original Luddites were understandably upset at the growing loss of jobs in the early 19th century as machines replaced workers in wool and cotton mills. This was a transitional effect, but a difficult one for many workers and artisans at the time. Their very real job losses were exacerbated by the Napoleonic Wars, and protesters were punished violently by the British government. By contrast, as of 2023, despite much talk of labor displacement, AI has developed alongside record-low unemployment (in the USA).
From my point of view in 2023, it is interesting to observe how technophobes have shifted their concerns from nanotechnology and genetic engineering to superintelligent AI. The first two topics now receive little fear-based attention, while AI has numerous institutes devoted entirely to it, and the internet discussion is like a thunderstorm that keeps hammering down, soaking places you would think were beyond the weather.
As you read “Embrace, Don’t Relinquish, the Future” and the two that follow, you will see how Joy’s piece probably played a role in spurring me to create the Proactionary Principle.
When a scientist publishes a paper, her peers expect to see evidence that she has read prior work relevant to her topic. They expect the scientist to have studied the field thoroughly before contributing a paper, especially in a controversial field. Bill Joy, as Chief Scientist at Sun Microsystems, should understand this. In reading his essay “Why the Future Doesn’t Need Us,” I was struck less by his message than by what his words revealed: weak research into existing thinking about the implications of future technologies. Compounding this error of omission were his unrealistic thoughts about “relinquishment” and his slighting of those who have deeply considered these issues as lacking in common sense. At the same time, I appreciated his courage in publicly laying out his fears and stimulating wider discussion.
Joy’s pessimistic assessment of the dangers of advanced technologies differs greatly from my own. Some threats are real, and the balance of benefits over harms clearly depends greatly on the choices we make, but I see the most likely outcomes as being more benign. That disagreement, though significant in itself, stands independently of my present concern. Even if I agreed with Joy’s apocalyptic vision of technology run amok, I would still feel compelled to challenge his call for the relinquishment of the “GNR” technologies of genetic engineering, molecular nanotechnology, and robotics (and all associated fields).
Having pondered these issues for many years, from technical, economic, political, and philosophical perspectives, I reject Joy’s relinquishment policy on three grounds: First, it is unworkable. Second, it is ignoble. Third, it would result in authoritarian control while still failing to achieve its purpose. I will leave the last objection to others and focus on the first two.
Shoot Off First, Ask Questions Later
Joy says that a conversation between inventor-entrepreneur Ray Kurzweil and philosopher John Searle ignited his apocalyptic thinking. Apart from attending a Foresight Institute conference back in 1989, Joy shows no sign of having read any of the writings, or listened to any of the talks, of those who have devoted themselves to the issues he raises. Despite the clarity of Kurzweil’s writing, Joy still isn’t clear whether we are supposed to “become robots or fuse with robots or something like that”.
Someone in Joy’s influential position has a responsibility to delve into prior thinking on these issues before scaring a public already unreasonably (but selectively) afraid of advanced technologies, including one of his targets: genetic engineering. However, he fails to match the obvious gravity of his concern with adequate seriousness of research. He gives no credit to the years of work by the Foresight Institute, not only in promoting the idea of nanotechnology, but in developing technical solutions and policy measures to address its potential dangers. Certainly, Extropy Institute—a multi-disciplinary think tank and educational organization devoted to “Incubating Better Futures”—would have welcomed a chance to provide input to Joy before he released his missive to the masses.
Joy doesn’t stop at racing to judgment before doing adequate research. He seems to go out of his way to paint a distorted picture of those who disagree with his views as lacking both common sense and humility. I was disappointed to see him cite Carl Sagan, one of my intellectual inspirations, in the course of criticizing “leading advocates of the 21st-century technologies” as lacking in simple common sense, along with humility.
Balanced discussion of this difficult topic is not helped when one side makes accusations about common sense while advocating policies, such as global relinquishment, that practically all expert commentators recognize as hopelessly unrealistic. I can’t help being darkly amused by an interview with Joy in which he draws a parallel between his essay and Einstein’s 1939 letter to President Roosevelt.
What disturbs me most about Joy’s mischaracterizations is not the offense they cause, nor the hypocrisy that lies beneath them. It is that Joy’s approach increases the polarization of views. Rather than seriously engaging those of us who have thought carefully about these matters, Joy grandstands in a way that threatens to set us at odds. This kind of unproductive conflict would be expected from a consistently technophobic activist. From an accomplished technologist like Joy, we should expect better.
While acknowledging the tremendously beneficial possibilities of emerging technologies, Bill Joy judges them as being too dangerous for us to handle. The only acceptable course in his view is relinquishment. He wants everyone in the world “to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge”.
Joy joins the centuries-old procession of theocrats, autocrats, and technocrats in attacking our pursuit of unlimited knowledge. He mentions the myth of Pandora’s box. He might have thrown in the anti-humanistic and anti-transhumanistic myths of the Garden of Eden, the Tower of Babel, and the demise of Icarus. Moving from myth to reality, he should have been explicit in describing the necessary means deployed throughout history: burning books, proscribing the reading of dangerous ideas, state control of science.
Relinquishment Cannot Work
The first of my objections to relinquishment has already been well made by Ray Kurzweil. Joy’s fantasies about relinquishment ride on the assumption that “we could agree, as a species” to hold back from developing the GNR technologies and presumably any enabling or related technologies. Perhaps Joy’s experience in having a staff of engineers to do his bidding has blinded him to a fact too obvious to state without embarrassment: the six billion humans on this planet do not and will not agree to relinquish technologies that offer massive benefits as well as defensive and offensive military capabilities.
We have failed to prevent the spread of nuclear weapons technology, despite its terrifying nature and relative ease of detection. How are we to prevent all companies, all governments, all hidden groups in the world from working on these technologies? Mr. Joy, please note: all six billion of these people—many desperately in need of the material and medical benefits offered by these technologies—will not read the Dalai Lama and go along with your master plan. Relinquishment is a utopian fantasy worthy of the most blinkered hippies of the ’60s. Adding coercive enforcement to the mix moves the idea from utopian fantasy to frightening dystopia.
Ray Kurzweil points to a fine-grained relinquishment that can at least reduce the dangers of runaway technologies among those willing to play this game. Nanotechnology pioneer Eric Drexler has long recommended designing nanomachines that will quickly cease functioning if not fed some essential and naturally uncommon ingredient. Ralph Merkle’s ‘broadcast architecture’ offers another way to keep nanomachines under control. These and other proposals can reduce the hazards of accidental nanotechnological disasters.
However, we can pursue intelligent design, ethical guidelines, and oversight only piecemeal, not universally. Less cautious or less benevolent developers will refuse even this fine-grained relinquishment. That fact makes it imperative to accelerate the development of advanced technologies in open societies. Only by possessing the most advanced technological knowledge can we hope to defend ourselves against attacks and accidents from outside our sphere of influence. We should be pushing for a better understanding of nanotech defenses, accelerating the decoding and deactivation of genetically engineered pathogens, and putting more thought into means of limiting runaway independent superintelligent AI.
Stewart Brand, co-founder of the Whole Earth Catalog, recently showed that he understands this far better than Joy when he wrote this in Technology Review: “The best way for doubters to control a questionable new technology is to embrace it, lest it remain wholly in the hands of enthusiasts who don’t see what’s questionable about it.”
I will not address genetic engineering, since I regard it as an insignificant danger compared to nanotechnology and runaway artificial intelligence (AI). The dangers of runaway artificial superintelligence have received less attention than those of nanotechnology. Perhaps this is because the prospect of AI seems to move further away every time we take a step forward. Bill Joy cites only Hans Moravec on this issue, perhaps because Moravec’s view is the most frightening available (with the possible exception of Hugo de Garis). In Moravec’s view of the future, superintelligent machines, initially harnessed for human benefit, soon leave us behind. In the most pessimistic Terminator-like scenario, they might remove us from the scene as an annoyance.
Oddly, despite having read Kurzweil’s book, Joy never discusses Ray’s thoroughly different (and more plausible) scenario. In Ray’s future projections, we gradually augment ourselves with computer and robotic technology, becoming superhumanly intelligent. Moravec’s apartheid of human and machine is replaced with the integration of biology and technology.
While a little research would have shown Joy that futurists, especially transhumanist thinkers, have indeed addressed the danger of explosively evolving, unfriendly AI, I grant that we must continue to address this issue. Again, global relinquishment is not an option. Rather than a futile effort to prevent AI development, we should concentrate on warding off dangers within our circle of influence and developing preventative measures against rogue AIs.
Human beings are the dominant species on this planet. Joy wants to protect our dominance by blocking the development of smarter and more powerful beings. I find it odd that Joy, working at a company like Sun Microsystems, can think only of the old corporate strategy in which dominant companies attempted to suppress disruptive innovations. Perhaps he should take a look at Cisco Systems or Microsoft, both of which have adopted a different strategy: embrace and extend. Humanity would do well to borrow from the new business strategists’ approach.
Realistically, we cannot prevent the rise of non-biological intelligence. We can embrace it and extend ourselves to incorporate it. The more quickly and continuously we absorb computational advances, the easier the transition will be and the lower the risk of a runaway technology. Absorption and integration will include economic interweaving of these emerging technologies with our organizations, as well as directly interfacing our biology with sensors, displays, computers, and other devices. This way we avoid an us-vs.-them situation. They become part of us.
Relinquishment Is Ignoble
Some people reach moral conclusions by consulting an ultimate authority. Their authority gives them answers that are received and applied without questioning. For those of us who prefer a more rational approach to ethical thought, reaching a conclusion involves consulting our basic values, then carefully deciding which of the available paths ahead will best reflect those values. Our factual beliefs about how the world works will therefore profoundly affect our moral reasoning.
Two individuals may share values but reach differing conclusions due to divergent factual beliefs. Referring to some person or practice as “unethical” obscures the interplay of factual and normative differences. That is why I say that “relinquishment is ignoble” rather than “relinquishment is unethical.” I suspect that my moral and philosophical disagreement with Joy over relinquishment results both from differing beliefs about the facts and differing basic values.
Joy assigns a high probability to the extinction of humanity if we do not relinquish certain emerging technologies. Joy’s implicit calculus reminds me of Pascal’s Wager. Finding no rational basis for accepting or rejecting belief in a God, Pascal claimed that belief was the best bet. Choosing not to believe had minimal benefits and the possibility of an infinitely high cost (eternal damnation). Choosing to believe carried small costs and offered potentially infinite rewards (eternity in Heaven). Now, the extinction of the human race is not as bad as eternity in Hell, but most of us would agree that it’s an utterly rotten result. If relinquishment can drastically reduce the odds of such a large loss, while costing us little, then relinquishment is the rational and moral choice. A clear, simple, easy answer. Alas, Joy, like Pascal, loads the dice to produce his desired result.
I view the chances of success for global relinquishment as practically zero. Worse, I believe that partial relinquishment would frighteningly increase the chances of disaster by disarming the responsible while leaving powerful abilities in the hands of those full of authoritarian ambition, resentment, and hatred. We may find a place for the fine-grained voluntary relinquishment of inherently dangerous means where safer technological paths are available. But unilateral relinquishment means unilateral disarmament. I can only hope that Bill Joy never becomes a successful Neville Chamberlain of 21st-century technologies. In place of relinquishment, we would do better to accelerate our development of these technologies, while focusing on developing protections against and responses to their destructive uses.
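To make that dice-loading concrete, here is a toy expected-value sketch of the wager’s structure. This is my own illustration, not anything from Joy’s essay; every probability and payoff below is an invented placeholder. The point is simply that the wager’s “clear, simple, easy answer” is driven entirely by the inputs one assumes.

```python
# Toy expected-value comparison of "develop" vs. "relinquish".
# All numbers are hypothetical placeholders, not estimates from Joy or anyone else.

def expected_payoff(p_disaster: float, disaster_cost: float, benefits: float) -> float:
    """Benefits retained, minus the probability-weighted cost of disaster."""
    return benefits - p_disaster * disaster_cost

# Joy's implicit inputs: relinquishment works, slashes the risk, and costs little.
print("Joy's framing:")
print("  develop:   ", expected_payoff(0.50, 1000, 100))  # -400
print("  relinquish:", expected_payoff(0.01, 1000, 90))   #   80

# My inputs: global relinquishment almost certainly fails, partial relinquishment
# disarms the responsible, and the forgone benefits are enormous.
print("My framing:")
print("  develop:   ", expected_payoff(0.10, 1000, 100))  #    0
print("  relinquish:", expected_payoff(0.15, 1000, 10))   # -140
```

With Joy’s inputs, relinquishment wins; with mine, it loses badly. The wager settles nothing until the loaded dice are examined.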
My assessment of the costs of relinquishment differs from Joy’s for another reason. Billions of people continue to suffer illness, damage, starvation, and the whole plethora of woes humanity has had to endure through the ages. The emerging technologies of genetic engineering, molecular nanotechnology, and biological-technological interfaces offer solutions to these problems. Joy would stop progress in robotics, artificial intelligence, genetics, and related fields. Too bad for those now regaining hearing and sight thanks to implants. Too bad for the billions who will continue to die of numerous diseases that could be dispatched through genetic and nanotechnological solutions. I cannot reconcile the deliberate indulgence of continued suffering with any plausible moral perspective.
Like Joy, I too worry about the extinction of human beings. I see it happening every day, one by one. We call this serial extinction of humanity “aging and death”. Because aging and death have always been with us and have seemed inevitable, we often rationalize this serial extinction as natural and even desirable. We cry out against the sudden death of large numbers of humans. But, unless it touches someone close, we rarely concern ourselves with the constant drip, drip, drip of individual lives decaying and disintegrating into nothingness. Someday, not too far in the future, people will look back on our complacency and rationalizations with horror and disgust. They will wonder why people gathered in crowds to protest genetic modification of crops, yet never demonstrated in favor of accelerating anti-aging research. Holding back from developing the technologies targeted by Joy will not only shift power into the hands of the destroyers, it will mean an unforgivable lassitude and complicity in the face of entropy and death.
Joy’s concerns about technological dangers may seem responsible. But his unbalanced obsession with his fears, and his lack of emphasis on the enormous benefits, can only put a drag on progress. We are already seeing fear, ignorance, and various hidden agendas spurring resistance to genetic research and biotechnology. Of course we must take care in how we develop these technologies. But we must also recognize how they can tackle cancer, heart disease, birth defects, crippling accidents, Parkinson’s disease, schizophrenia, depression, chronic pain, and aging and death, not to mention various environmental challenges including pollution and species extinction.
On the basis of Joy’s recent writing and speaking, I have to assume that we disagree not only about the facts but also about our basic values. Joy seems to value safety, stability, and caution above all. I value relief of humanity’s historical ills, challenge, and the drive to transcend our existing limitations, whether biological, intellectual, emotional, or spiritual. Joy appears to be a philosophical cousin of those who wield the “precautionary principle” to block technological progress. I have proposed an alternative, the “Proactionary Principle,” to pursue advances while responsibly searching for and mitigating unwanted side-effects.
Joy quotes the fragmented yet brilliant figure of Friedrich Nietzsche to support his call for an abandonment of the unfettered pursuit of knowledge. Nietzsche is telling the reader that our trust in science “cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the ‘will to truth’, or ‘truth at any price’ is proved to it constantly.” Joy has understood Nietzsche so poorly that he thinks Nietzsche is here supporting his call for relinquishing the unchained quest for knowledge in favor of safety and comfort. Nietzsche was no friend to “utility”. He despised the English Utilitarian philosophers because they elevated pleasure (or happiness) to the position of ultimate value. Even a cursory reading of Nietzsche should make it obvious that what he valued was not comfort, ease, or certainty. Nietzsche liked the dangerousness of the will to truth. He liked that the search for knowledge endangered dogma and its comforts and delusions.
Nietzsche’s Zarathustra says: “The most cautious people ask today: ‘How may man still be preserved?’” He might have been talking of Bill Joy when he continues: “Zarathustra, however, asks as the sole and first one to do so: ‘How shall man be overcome?’” … “Overcome for me these masters of the present, O my brothers - these petty people: they are the overman’s greatest danger!” If we interpret Nietzsche’s inchoate notion of the overman as the transhumans who will emerge from the integration of biology and the technologies feared by Joy, we can see with whom Nietzsche would likely side. I will limit myself to one more quotation from Nietzsche:
And life itself confided this secret to me: “Behold,” it said, “I am that which must always overcome itself. Indeed, you call it a will to procreate or a drive to an end, to something higher, farther, more manifold: but all this is one… Rather would I perish than forswear this; and verily, where there is perishing… there life sacrifices itself — for [more] power… Whatever I create and however much I live it — soon I must oppose it and my life; … ‘will to existence’: that will does not exist… not will to life but… will to power. There is much that life esteems more highly than life itself.
Zarathustra II 12 (K: 248)
Like Nietzsche, I find mere survival normatively and spiritually inadequate. Even if, contrary to my view, relinquishment improved our odds of survival, that would not make it the most noble or inspiring choice if we value the unfettered search for knowledge and intellectual, emotional, and spiritual progress. Does that mean doing nothing while technology surges ahead? No. We can minimize the dangers, ease the cultural transition, and accelerate the arrival of benefits in three ways:
We can develop a sophisticated philosophical perspective on the issues.
We can seek to use new technologies to enhance emotional and psychological health, freeing ourselves from the irrationalities and destructiveness built into the genes of our species.
And we can integrate those approaches using a sophisticated, balanced decision-making procedure such as the one I have set out in the form of the Proactionary Principle.
We should be spurring research to understand emotions and the neural basis of feeling and motivation. I've seen some good work in this area (such as Joseph LeDoux's The Emotional Brain), but until very recently cognitive science ignored emotions. If we are to flourish in the presence of incredible new technological abilities, we would do well to focus on using them to debug human nature. Power can corrupt, but knowledge that brings the power to self-modify so as to refine our psychology can ward off corruption and destruction. It is vital that we advance our ability to refine our own emotions.
Improving philosophical understanding will speed the absorption and integration of new technologies. If we continue to approach rapid and profound technological change with philosophical worldviews rooted in old myths and pre-scientific story-making, we will needlessly fear change, miss out on potential advances, and be caught unprepared.
When the announcement came from Scotland proclaiming the first successful mammalian cloning, the Pope issued a statement opposing cloning on grounds that made no sense. (His vague objection would apply equally to identical twins.) President Clinton and other leaders also automatically moved to ban human cloning, with no indication of clear thinking based in science and philosophy.
Transhumanists at Extropy Institute and elsewhere have been developing philosophical thinking suited to these powerful emerging technologies. In our books, essays, talks, and email forums, we have explored a vast range of emerging philosophical issues in depth. In August 1999, I chaired Extropy Institute’s fourth conference, Biotech Futures: Challenges and Choices of Life Extension and Genetic Engineering. The conference laid out the likely path of emerging technologies and dissected the issues raised. In my own talk, I analyzed the implicit philosophical mistakes that engender fear of, and resistance to, the changes we anticipate. I summarized our own goals in a Letter to Mother Nature and have laid out some guiding values in The Extropian Principles. More recently (since the first version of this response to Joy), I have developed a comprehensive, balanced decision procedure, set out in the Proactionary Principle.
Bill Joy’s essay and subsequent talks may feed the public’s fear and misunderstanding of our potential future. On the other hand, perhaps his thoughts will raise interest in the philosophical, normative, and policy issues in a productive way. As a strategic philosopher committed to incubating better futures, I, along with my colleagues at Extropy Institute, welcome constructive input from Joy in this continuing learning process. Humanity is on the edge of a grand evolutionary leap. Let’s not pull back from the edge, but by all means let’s check our flight equipment as we prepare for takeoff.