Introduction to Rationality and Risk
About the title: This piece is not an introduction to rationality and risk; it is an introduction to the section of my book with that title. As I mentioned in my first post to this blog, I’m working on a collection of my essays written over the last 30+ years. For each of the (currently) 48 essays, I’m writing an introduction to put the essay into the context of when it was written and how it relates to today’s world. Each of the seven sections also has an introduction. This is the introduction to section three.
The introduction allowed me to connect pieces as different as “The Proactionary Principle,” “Dynamic Optimism,” and “In Praise of the Devil.” It also gave me an opportunity to respond to some off-target recent attacks on rationalism, including those that assume rationalism requires foundations of certainty and those that identify rationalism with the rigid and often biased application of rules of reasoning.
At a time when so many people act as though their feelings are reliable guides to reality, rationality needs to be reasserted. This is an age where people think you can pass a law or a regulation and it will reliably accomplish what they want despite all the history and economics to the contrary. This is an age where people believe that biological reality can be altered or overcome by adopting a label. This is an age where people believe that X must be true because people they dislike believe not-X. This is an age where people bow to authorities while ignoring their lack of clothes.
Defending and championing rationalism does not automatically make you rational. But explicit respect for reason is a good start.
-----------------------------------
Rationality is the guardian of reality and the enabler of progress.
Reason is a process, not a destination.
Reasoning usually concerns itself with finding arguments for continuing to believe what we already believe. Rationality concerns itself with challenging our beliefs and relentlessly rooting out those that fail to withstand the challenge. Rationality is not comfortable but it is freeing.
Rationalism is the belief that your life should be based on reason, evidence, and logic, rather than emotions or religious beliefs or authorities.
Our choice to be rational or not in any situation is almost always implicit and unconscious. We don’t think to ourselves, “Hmm, shall I choose A because it feels good or shall I apply reason to choose between A, B, and C?” If we have adopted rationality as a core part of our philosophy and have developed rationality as a virtue – a habitual and “natural” reflex – then we will be far more likely to be rational in any situation. But each situation offers a new choice. In some areas of life, it will be easy to be rational. In others, we may struggle and may have to remind ourselves of our commitment to rationality. Perhaps you find it easy to be rational about investing but hard to be rational about intense relationships or political arguments.
Rationalism is a commitment to rationality. In essence, rationality means apportioning the strength of one’s belief to the evidence and being open to revising one’s beliefs in light of new evidence. This gets complicated because we cannot go around constantly investigating every one of our beliefs to see if they need revising. At any one time we must take most of our beliefs as given and may have to rely on experts. But when something comes to your attention and matters to you, a rational person investigates using the methods of critical thinking they have learned.
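One way to make “apportioning the strength of one’s belief to the evidence” concrete is Bayes’ rule. Here is a minimal sketch in Python; the claim, prior, and likelihood numbers are invented purely for illustration and are not from any essay in this collection.

```python
# A toy illustration of apportioning belief to evidence via Bayes' rule.
# The claim, prior, and likelihoods below are made up for the example.

def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a claim after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start out 20% confident in a claim. The new evidence is three times as likely
# if the claim is true (0.6) as if it is false (0.2).
posterior = update_belief(prior=0.2, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"Belief after the evidence: {posterior:.0%}")  # ~43%: stronger, but far from certain
```

The point is not the arithmetic but the posture: the strength of belief moves with the evidence, and a later piece of counter-evidence would move it back down.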
Whatever you think of Ayn Rand’s philosophy, she made a crucial point when she declared that rationality “means a commitment to reason, not in sporadic fits or on selected issues or in special emergencies, but as a permanent way of life.” I think she was also right to tie rationality to the virtues of independence, integrity, and honesty.
The importance of explicitly committing to reason and of continually striving to be rational makes it as much a core part of transhumanism as it is of humanism. Like humanists, transhumanists reject faith as a source of knowledge. We reject the claim by religious believers that reason is just a form of faith and we reject the claim that faith is as valid as reason in determining what to believe. Perhaps more so than humanists, transhumanists also reject emotions as a source of knowledge and are sensitive to the tendency of emotions to bias thinking.
Transhumanists have also been critical of postmodernists and neo-Marxists who reject science and reason as tools of oppression and class warfare. Another form of fideism – faith as primary – can be seen in existentialists who take a leap of faith to hold a commitment. In today’s world, a much bigger problem than existentialists are those activists who push their emotions as if they were facts and outrage as if it were an argument, and those who place consensus above truth or who see it as the very definition of truth.
No philosophy of life is satisfying or adequate unless it addresses epistemology. Non-rationalists need not be as honestly, openly, and proudly irrational as Tertullian, who wrote: “It is certain because it is impossible.” So important is rationality to extropian philosophy that I once tried to introduce a new dating system using as its zero year the publication of Francis Bacon’s Novum Organum, a work instrumental in the development of the experimental scientific method. Francis Bacon is not to be confused with Roger Bacon, also known by the marvelous moniker of Doctor Mirabilis, a medieval English philosopher who championed the study of nature through empiricism.[1]
The desire to enhance rationality in decision making explains why Extropians have been so interested in structured methods of thinking and in forecasting more accurately. Some of the approaches considered early on – such as logical languages like Lojban – turned out not to be practical. One early area of interest has attracted more enthusiasm: idea futures markets, or decision markets. Hewlett-Packard used forecasting markets to improve on experts’ forecasts of printing supply needs, the Hollywood Stock Exchange allowed people to bet on the size of a film’s opening take, and Metaculus allows people to bet on all kinds of claims about the future.
“Rationalism” contested: “Rationalism” has become an abused term, and sometimes a term of abuse. In its most general sense it is simply a commitment to being rational – to opening one’s beliefs to evidence and logic and being willing to change those beliefs. Many self-described rationalists have something more specific in mind. They conceive of reason as the building of beliefs on a foundation of certainty. They may further believe that the building process itself can yield certainty. This is the tradition associated with René Descartes, Gottfried Wilhelm Leibniz, and Baruch Spinoza. A crucial component of this form of rationalism was the view that knowledge can be based on pure reason, with the role of experience devalued.
Another crucial aspect of this kind of rationalism was the desire to base beliefs on indisputable foundations. True knowledge was certainty. The opposing philosophers, known as “empiricists,” instead emphasized the crucial role of experience. But they had the same desire to found knowledge on foundations of certainty. Instead of “clear and distinct ideas,” the empiricists of the 17th and 18th centuries (and sometimes later) were more likely to appeal to supposedly incorrigible sense data. Sensory experience was to provide the fundamentals of knowledge, the undeniable building blocks of belief. In both cases, we can see a drive for knowledge-as-certainty.
Rationalists searching for certainty failed in their quest. So strong is the need for certainty that many thinkers refused to acknowledge this failure. You can see the dire effects of this need among some rationalists today. A classic example is the rationalist philosophy of Ayn Rand. As I explain further in the pancritical rationalism essay, Rand’s epistemology was founded on a few supposedly self-evident and irrefutable axioms. Any attempt to deny these axioms was self-refuting because all arguments assume the axioms. The axiomatic nature of Rand’s epistemology, combined with her view that the truth is manifest, led to a dogmatic philosophy whose detractors were obviously consciously irrational and evil!
Certainty-seeking forms of rationalism can also be seen in dogmatic atheism and dogmatic forms of skepticism. These can go beyond a reasonable statement that “extraordinary beliefs require extraordinary evidence” into a dismissal of any evidence that conflicts with the skeptics’ views. By “dogmatic atheism” I do not mean someone who simply has no belief in a god. I mean someone who insists that there cannot be a god of any kind. It may be perfectly rational to believe that we have no good reasons to believe in gods, but it’s far more dubious to insist that we can be 100% sure there are none. It might be a little easier for self-described rationalists to see this if they replace “god” with “simulator of our universe”. Can you really prove with certainty that we are not living in a simulation?
Perhaps it’s unfair – and it will certainly be controversial – but I see utilitarianism as embodying this form of rationalism in that it promises right answers using a formula to determine the right action. It is true that there is nothing inherent in utilitarianism that promises that you will get the uniquely morally right answer. However, observation of the Effective Altruism community suggests that many believe we can show that we must take certain actions. That might mean giving all the money you can possibly spare to AI safety research. That takes us into the realm of “longtermism”, which is too far afield for me to comment on here.
Consistently being rational is hard. Being a rationalist explicitly helps to remind you of the need to keep working at it. Advocating rationality doesn’t make you rational. Rationalists are perhaps more likely to be rational than if they were not rationalists but they are still human. In-group behaviors can make even rationalist communities not fully rational. Critics in recent years have accused parts of the LessWrong rationalist community of this kind of in-group, self-reinforcing thinking, boosted further by persuasive personalities.
Rationality is not simply following rules: “Rationalism” is misleadingly used by some to mean “using rules or principles to guide reasoning”. Rationalism is more fundamental than that. And rules and principles can be abused if they are not well-grounded, open to refutation, or applied with sensitivity to context. Used intelligently and with context sensitivity, rules and principles are crucial to good reasoning. One example is the rules of logic. Problems arise when people start with a fixed view of the issue and apply existing rules inappropriately. The rules themselves may be good when used on the appropriate problems and with awareness of the context.
Most self-described rationalists learn about cognitive biases and logical fallacies. Knowledge of these endemic glitches in human reasoning can help us to avoid them. It seems simple: Understand cognitive biases and fallacies; watch out for them; avoid them. Unfortunately, the path from understanding to exemplifying rationality is a winding and tortuous one. Rationalization is often more appealing and comfortable than rationality. The more intelligent a person is and the more they understand biases and fallacies, the more they may use these to fool themselves. They may be better at fooling themselves than someone less smart and less informed about biases. This isn’t merely plausible speculation. Some research has looked into this and come to discouraging conclusions.[2]
Research suggests that teaching people about misinformation often just causes them to dismiss facts they don’t like as misinformation, while teaching them logic often results in them applying that logic selectively to justify whatever they want to believe. A commitment to reason is not enough. We need a healthy dose of curiosity to open us to the truth. We also need plenty of humility. Biases are driven by our desires and our ego. Especially for highly intelligent people, our self-worth can be built on being right. Admitting error hurts our self-image. We protect our self-image by staying in error. If you can take what I see as a more extropic view and base your self-worth and self-image on willingness to learn and ability to self-correct, you’re far more likely to use the rules of reasoning to enlighten rather than obscure the truth.
This is why I don’t automatically give more credence to arguments, and especially conclusions, given by self-described rationalists. I read material by such people daily and one thing I have observed too frequently is a high degree of confidence, certainty, and sometimes clear arrogance. This attitude suggests that curiosity is taking a back seat to being “right” and that humility is in short supply.
My path to pancritical rationalism: Before I started to think about thinking, I explored a range of religious, mystical, and supernatural beliefs, trying them on like clothes. My two older (half-) brothers had left behind books on the occult which fascinated me around the age of 10-12. I devoured books by the marvelously fake Tibetan monk, Lobsang Rampa. I tried astral projection and dowsing. At 12 years of age, after a talk by my Latin teacher, I was inducted into Transcendental Meditation. I liked the idea of the “sidhis” – special powers achieved by advanced meditators – but thought that the levitating meditators looked like they were bouncing off cushions and being photographed just before falling back. Anyway, I was too impatient to be a good meditator. I soon moved on.
I joined the San Jose, California-based Rosicrucians (AMORC). I studied a few of their lessons, full of Cartesian dualism, and then moved on again. I joined the International Order of Kabbalists, wrote up my thoughts on the Kabbalah, and sent them to London to be reviewed. I read Aleister Crowley and ritual magic. It was fun, but each system, each practice, yielded nothing.
I also delved into more Western-traditional religious belief systems. I had a brief Christian phase which I dropped because the idea of Hell and eternal punishment seemed clearly incompatible with a loving God. How about reincarnation? Better, because you get multiple chances to achieve salvation or samadhi or to exit the wheel of suffering. Any sort of religious or supernatural belief was breaking down in me around the age of 14 and I had definitely become a non-believer at 15. One of my most vivid teen memories is of an argument with my brother, Martin, who told me I was going to Hell because I didn’t accept Jesus Christ as my savior.[3]
As I explored this territory, I became increasingly pulled into the conflict between reason and faith, rationality and irrationality. An important part of my development in my early to mid-teens was a headlong dive into books skeptically analyzing supposed psychic phenomena. This pushed me into thinking about thinking. From 1981 to 1982, my attention shifted from reading critical books about religion and the supernatural to economics, philosophy, economic history, and the future. Economics and philosophy added new tools for thinking critically about the world and how it worked. Nozick, Rothbard, Friedman (both Milton and David), and Rand blew up my view of the role of government.
Philosophy informed my skepticism and my desire to question all assumptions. Books like Gödel, Escher, Bach and The Mind’s I kindled my critical and creative thinking about the mind and how it works. My journey to critical rationalism began in 1982, when I read Karl Popper and other critical rationalists: Conjectures and Refutations in the summer of 1982, followed by The Open Society and Its Enemies in 1983 and The Logic of Scientific Discovery in September 1985. In the late 1980s and throughout the 1990s, I studied cognitive psychology – including the pioneering work of Nisbett and Ross, Kahneman, and others.
In the late 1990s and through the 2000s I read a tremendous number of books and articles on decision making and strategy in organizations as well as scenario planning and other approaches to thinking systematically about the future[4]. I also taught logic and critical thinking and philosophy of religion in the 1990s.
A mix of essays: This section of the collection contains essays of very different styles and purposes. Taking them in reverse order, the oldest is the militant 1989 piece “In Praise of the Devil”, which waves the banner of reason against the forces of irrationalism. At the time I wrote it, that primarily meant religion. Today, it would very much include political irrationalism, much of the environmentalist movement, and more.
There’s another early piece, “Dynamic Optimism”, that considers the connection between rationality and a healthy form of optimism. It is also the most practically oriented extropian principle.[5] In the middle is an exploration of pancritical rationalism. The first two pieces are more recent and were motivated by a desire to create an alternative to the precautionary principle and similar fear-based thinking and policy making. It’s in these pieces that the ideal of rationality is distilled and applied to the difficult area of risk and uncertainty.
This introduction is quite long enough as it is, so I’ll shut up and leave further thoughts to the introductions for each essay.
[Some of those introductions will appear on this blog.]
In this section:
The Perils of Precaution
From ProP book (chapter 2), 2010
The Proactionary Principle
2009 version from unpublished book (chapter 4)
Proactionary Principle Simplified
Unpublished, 2009
Pancritical Rationalism: An Extropic Metacontext for Memetic Progress
From the Extro-1 Conference, Extropy Institute, 1994
Dynamic Optimism
Dynamic Optimism: Epistemological Psychology for Extropians. In Extropy #8 (Vol. 3, No. 2), Winter 1991/92.
In Praise of the Devil
with Postscript
In Extropy #4, July 1989. Reprinted in Atheist Notes of the Libertarian Alliance, UK, January 1991.
[1] Francis Bacon was a contemporary of René Descartes. Both questioned the philosophical authority of the ancient Greeks and both found it important to critique previous natural philosophy. It’s a crime that Descartes is always taught in introductory philosophy classes while Bacon is ignored. (A more empiricist version of rationalism is usually introduced with John Locke.) Descartes’ form of rationalism was deeply problematic, as I argue in the essay on pancritical rationalism.
[2] Čavojová et al. (2018); Vedejová & Čavojová (2022); Bueno, Schiff, & Schiff (2020).
[3] Many years later, he was horrified when I reminded him of this and repudiated his overly-fervent condemnation. Despite remaining something like a fundamentalist Christian, he became quite tolerant of and interested in my transhumanist views.
[4] This was a core part of my job during those years, working for the consulting firm ManyWorlds, founded by strategic planning and organizational process design expert Steve Flinn. Those included Kahneman, Slovic, and Tversky 1982, Bazerman 2008, Bazerman and Watkins 2005, Bazerman and Neale 1994, Taleb 2007 & 2008, Gigerenzer 2004, Armstrong 2001, Mintzberg et al. 2000, Brandenburger 2005, Barry J. Nalebuff 2011, Courtney 2001, Wells 2000, Fahey and Randall 1997, Brown and Duguid 2000, Dixit and Nalebuff 1991, Cialdini 1993, Klein 1999, Goklany 2001, Jones 1998, Hoch, Kunreuther, and Gunther 2004, Glassner 1999, and Linstone and Turoff 1975.
[5] There’s also a very different version, unpublished so far, written specifically to be more able to be put into practice. I may include that in a follow-up volume or on my blog.