I am publishing a full version of the Proactionary Principle here since I will be referring to it in a major post on AI risk very soon. It will be easier to link to from that essay.
This version comes from chapter 4 of an uncompleted book on the Proactionary Principle, written in 2009-2010.
This version differs from the version in The Transhumanist Reader, mostly in that the book version includes background material on problems with the rival precautionary principle. I will be posting separate essays on that topic, so I exclude that material here. The version in my forthcoming book collection is the same except for part of the introduction.
For brevity, I sometimes abbreviate “The Proactionary Principle” to “The ProP”. This abbreviation is apt since the principle is a kind of in-depth prop or support for decision making.
In the original book, the chapter was preceded by one critiquing the precautionary principle and another (“The Wisdom of Structure”) explaining how to shape decision making procedures. Although I never published the book, the Proactionary Principle has become moderately well known, if not yet much implemented.
I have not updated the discussion of topics such as vaccines, despite the Covid pandemic. The older examples make the necessary points. I would have changed the emphasis to note the problems with authorities pressuring people to give unapproved vaccines to groups at very low risk. The official response to Covid in the USA (and most other countries) violated the Proactionary Principle’s prescriptions to make response proportionate, to be objective and comprehensive, and to embrace diverse input.
Sometime soon, I will also publish a simplified version of the principle. The Proactionary Principle can also be simplified in other ways to make it easier to grasp and to apply. One such translation was done by Kevin Kelly.[1] Kevin provides a succinct list of “Pro-Actions”, a term which I have adopted. These are especially helpful for simpler, quicker, personal decisions.
Not discussed here is the major challenge of getting organizations to actually use sound methods of decision making and forecasting. They have their own incentives and those may lead them to ignore or resist sound approaches. This essay provides principles for intelligent decision making but does not look into how to incentivize organizations to adopt them. That would be an entirely distinct essay.
What the Principle is not: The principle is not an algorithm into which you can feed a few variables and expect it to kick out the correct and optimal answer. A common kind of pseudo-rationalist mindset yearns for such algorithmic, unerring procedures and wants to believe they exist even when they do not. Rather, the ProP is an orientation toward progress by default and a set of guidelines, each of which has a tremendous amount of content that must be applied thoughtfully.
The Proactionary Principle
Introduction
For centuries, most of humanity regarded the shattering power of lightning as a supernatural force—the manifestation of an angry God or a hostile demonic force. At the approach of a storm, church bells were rung to ward off the bolts. According to the great theologian St. Thomas Aquinas, “The tones of the consecrated metal repel the demon and avert storm and lightning.”
These precautionary measures rooted in fear and faith were even less effective than you might think. During the middle third of the eighteenth century in Germany alone, Lucifer threw his deadly bolts of light and sound at 386 churches, killing over a hundred bell ringers.[2] Another three thousand people perished when lightning struck a church in Venice where tons of gunpowder were stored. The great American inventor, entrepreneur, and statesman Benjamin Franklin observed that “The lightning seems to strike steeples of choice and at the very time the bells are ringing; yet still they continue to bless the new bells and jangle the old ones whenever it thunders. One would think it was now time to try some other trick.”[3]
Franklin’s own solution, the lightning rod, worked far better. That wasn’t enough to win the support of religious authorities. One French high priest declared that the lightning rod was an offense to God. Taken aback, Franklin said, “He speaks as if he thought it presumption in man to propose guarding himself against the thunders of Heaven!”[4]
If I could, I’d like to go back and reassure Franklin. “Look, old chap,” I’d say. “Certainly these Churchmen are keeping lightning’s toll higher than it need be. But just be glad that they’re not calling for a massive, tax-financed program of building churches and training bell ringers.”
In the 21st century, we have the misfortune to be surrounded by today’s equivalent of the church bell expansionists. Consider the case of global warming—or, rather, the way in which this issue is typically discussed. Starting off from a base of observations, an enormous leap is made to a set of future scenarios. Then another giant leap is made to policy recommendations.
I don’t doubt that human beings are increasing atmospheric concentrations of carbon dioxide. Nor do I much doubt that this will have an effect on temperature. We have detected a temperature increase of 0.7° C over the past century, some uncertain part of which we might reasonably attribute to an anthropogenic greenhouse effect. Beyond these modest observations lies a world of uncertainties and unknowns. Some parties, convinced of their superior vision, propose measures such as mandating reductions in CO2 emissions with a certainty that is disturbingly authoritative—even dismissive.
Some of these uncertainties lurk within the models, such as difficulties in adequately representing aerosols, clouds, and water vapor feedback. The remaining unknowns mean we can’t tell whether even a doubling of CO2 concentrations will lead to a minor or a major increase in global temperatures. (A second doubling of CO2 may have much less effect than the first.) We also run into difficulties when attempting to forecast the level of future CO2 emissions. These appear to depend most heavily on how rapidly energy efficiency will increase, on the degree to which renewables will get cheaper than fossil fuels, and on the path taken by rapidly developing countries.
If we look at the cost and impact of global warming, we find additional uncertainty because of its dependence on numerous variables. Contrary to popular reports, we probably need not worry about warming’s effect on storms, the frequency of hurricanes, or malaria. Still, its costs will be substantial, probably coming to several trillion dollars[5] over the years.[6] Despite the many uncertainties then, shouldn’t we all stand behind the Kyoto agreement and get busy cutting CO2 emissions? The answer to that question is much less uncertain: No!
For one thing, even a full-scale implementation of Kyoto will have an almost insignificant effect on the climate: leading to a difference of something like 0.15° C a century from now. As Bjørn Lomborg has noted, this is equivalent to delaying the temperature increase for six years.[7] That wouldn’t be so bad if it weren’t for the cost of those measures. Lomborg has calculated that, for the United States alone, the cost of the Kyoto pact “will be higher than the cost of providing the entire world with clean drinking water and sanitation. The latter would avoid 2 million deaths every year and prevent half a billion people becoming seriously ill each year.”
If implemented without a trading mechanism for emissions, the cost of Kyoto could rise to $1 trillion—that’s almost five times the cost of world-wide water and sanitation coverage. Returning emissions to the global 1990 level would raise the cost to $4 trillion. A limit on temperature increase could cost anywhere between $3 trillion and $33 trillion.
We know that global warming may be expensive but also that cutting CO2 will be expensive. I will not attempt to determine which expense will be greater. I want only to point out that the discussions surrounding Kyoto and its like ignore alternatives. One alternative to drastic reductions in CO2 emissions is to pay the costs of adaptation to the higher temperatures. Economic analysis convincingly shows that adaptation would be far cheaper.
The global warming debate illustrates another distressing feature of so many of these discussions: We hear about the bad effects that would result from CO2 emissions, but little or nothing about the bad effects of excessive or clumsy regulation of those emissions.[8]
The need for the Proactionary Principle becomes starkly clear when we survey the shortcomings in the global warming discussion. We see poor decision making driven by politics, false certainty, and failure to consider alternatives. The process that leads to policy is almost untouched by objective methods. It fails to consider important alternatives and some have even attempted to suppress dissent, turning the “official position” into something akin to a religious orthodoxy. Proposed measures are ineffective or disproportionate and excessively costly in relation to other options.
True, it is a complex issue. Perhaps some guidelines would help.
The Proactionary Principle
The Proactionary Principle emerged out of a critical discussion of the precautionary principle during Extropy Institute’s Vital Progress Summit in 2004. We saw that the precautionary principle is riddled with fatal weaknesses. Not least among these is its strong bias toward the status quo and against the technological progress so vital to the continued survival and well-being of humanity.
Participants in the VP Summit understood that we need to develop and deploy new technologies to feed billions more people over the coming decades, to counter natural threats—from pathogens to environmental changes, and to alleviate human suffering from disease, damage, and the ravages of aging. We recognized the need to formulate an alternative, more sophisticated principle incorporating more extensive and accurate assessment of options while protecting our fundamental responsibility and liberty to experiment and innovate.
With input from some of those at the Summit, I developed the Proactionary Principle to embody the wisdom of structured decision making. The Principle urges all parties to actively account for all the consequences of an activity—good as well as bad—while apportioning precautionary measures to the real threats we face. And to do all this while appreciating the crucial role played by technological innovation and humanity’s evolving ability to adapt to and remedy undesirable side-effects.
The exact wording of the Principle matters less than the ideas it embodies. The Principle is an inclusive, structured process for maximizing technological progress for human benefit while heightening awareness of potential side-effects and risks. In its briefest form, it says:
Progress should not bow to fear but should proceed with eyes wide open.
More flatly stated:
Protect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.
Expanded to make room for some specifics:
Encourage innovation that is bold and proactive; manage innovation for maximum human benefit; think about innovation comprehensively, objectively, and with balance.
We can call this “the” Proactionary Principle so long as we realize that the underlying Principle is less like a sound bite than a set of nested Chinese boxes or Russian babushka dolls. If we pry open the lid of this introductory-level version of the Principle, we will discover five component principles lying within:
Preamble
The freedom to innovate technologically and to engage in new forms of productive activity is valuable to humanity and essential to our future. The burden of proof therefore belongs to those who propose measures to restrict new technologies. At the same time, technology can be managed more or less wisely. Five principles (or “Pro-Actions”[9]) can help promote a rational, balanced approach:
Be Objective and Comprehensive
Prioritize Natural and Human Risks
Embrace Diverse Input
Make Response and Restitution Proportionate
Revisit and Revise
Be Objective and Comprehensive
Big, complex decisions deserve to be tackled using a process that is objective, structured, comprehensive, and explicit. This means evaluating risks and generating alternatives and forecasts according to available science, not emotionally shaped perceptions, using the best-validated and most effective methods available. Rather than reflexively huddling in committees, decision makers should use the rich and growing body of knowledge about evidence-based methods for generating options, forecasting, and deciding.[10] Objectivity can be improved by consistently using, for example, the devil’s advocate procedure and by using auditing procedures such as review panels. Different kinds of decisions and forecasts require different tools of anticipation and decision. Choosing from among a wide range of techniques will produce better results.
Wise decisions will not emerge if options are limited to the obvious or politically popular. Consider all reasonable alternative actions, including no action. Estimate the opportunities lost by abandoning a technology and account for the costs and risks of substituting other credible options. When making these estimates, use systems thinking to carefully consider not only concentrated and immediate effects, but also widely distributed and follow-on effects, as well as the interaction of the factor under consideration with other factors. The greater the uncertainty and the less stable the situation, the less justification there is for major policy changes.
Prioritize Natural and Human Risks
Avoiding all risks is not possible. They must be assessed and compared. The fact that a risk or threat is “natural” should not give it any special status. Treat technological risks in the same way as natural risks. Avoid underweighting natural risks and overweighting human-technological risks. Inaction can bring harm as well as action. Actions to reduce risks always incur costs and come at the expense of tackling other risks. Therefore, give priority to:
reducing immediate threats over remote threats
addressing known and proven threats to human health and environmental quality over hypothetical risks
more certain over less certain threats
irreversible or persistent impacts over transient impacts
proposals that are more likely to be accomplished with the available resources, and measures with the greatest payoff for resources invested
Embrace Diverse Input
Account for the interests of all potentially affected parties and keep the process open to input from those parties or their legitimate representatives. Recognize and respect the diversity of values among people, as well as the different weights they place on shared values. Whenever feasible, enable people to make reasonable, informed tradeoffs according to their own values. Rather than banning a technology or technological product for everyone, provide information and appropriate warnings. Besides, prohibition rarely works. When it does, it abolishes the benefits of technologies. Limited experiments may be better than universal prohibition. Technologies that cause harm can often be put to different uses or applied in new, safer ways. (A drug that causes birth defects may be tremendously beneficial for people who are not pregnant. A pesticide may be just as beneficial and far less harmful when applied more precisely.)
Make Response and Restitution Proportionate
Consider restrictive protective measures only if the potential negative impact of an activity has both significant probability and severity. In such cases, if the activity also generates benefits, discount the impacts according to the feasibility of adapting to the adverse effects. If measures to limit technologies do appear justified, ensure that the extent of those measures is proportionate to the extent of the probable effects, and that the measures are applied as narrowly as possible while being effective. When harm has already occurred, the costs of those harms should be internalized as much as reasonably possible, such as by holding liable the producer of the technology (or the user, if they are responsible). Those responsible for harm should make restitution swiftly.
Revisit and Revise
We only learn from our decisions if we return to them later and check them against actual outcomes. When checking on our original decisions and the reasoning behind them, it’s not good enough to rely on memory. We too readily revise our memories to fit later events. To ensure that decisions are revisited and revised as necessary, decision makers should create a trigger to remind them. It should be set far enough in the future that conditions may have changed significantly, but soon enough to take effective and affordable corrective action.
Getting in the habit of tracking assumptions, forecasts, and decisions and comparing them to actual outcomes enables an organization to learn from its mistakes. In some cases, this kind of assessment can be done continuously, improving the gains made in “learning by doing”. In the case of new technologies and technological products—even when they have been thoroughly tested initially—organizations should continue to track them, especially when undesirable direct side-effects are likely (as in the case of drugs and complex software systems). Increasingly, tracking technologies can help us treat the daily use of technologies as a continuing large-scale experiment.
Freedom to Innovate
At the center of the Proactionary Principle we find a commitment to scientific inquiry and discovery, technological innovation, and the application of science and technology to the improvement of the human condition. In a time when so many indulge in postmodern pouting, the Principle champions the vigorous use of our uniquely human capabilities to improve ourselves and the world—to progress rather than regress, to advance extropy rather than to bow to entropy.
In embodying the wisdom of structure, the Principle guides decision makers to look at many options and to consider the range of people likely to be affected. It helps decision makers to take a balanced view of opportunities and risks. But let’s not confuse this with being middle of the road.
The Proactionary Principle notes that “The freedom to innovate technologically and to engage in new forms of productive activity is valuable to humanity and essential to our future. The burden of proof belongs to those who propose measures to restrict new technologies.” All proposed measures should be closely scrutinized. Rather than moving forward hesitantly, this means boldly stepping ahead while being mindful of where we put our feet.
I cannot help feeling bewildered that this imperative—Advance! Progress! Improve!—is not widely accepted as uncontroversial. Even philosophical and social thinkers as radically opposed as Karl Marx and Robert Nozick could agree on it. Yet today we have the curious spectacle of “progressive” people adamantly opposing new technologies—unless those technologies can prove themselves sufficiently “natural”.
What is the primary driving force behind most environmentalist opposition to technologies from nuclear power to genetically modified crops? It’s not a sober and balanced assessment of pros and cons. It’s ideology. Consider that water fluoridation originated with government; GM crops originated with corporations. As Stewart Brand observed[11], that’s why it was the political “right” that opposed water fluoridation and the “left” that opposed “frankenfood”. Brand notes, by contrast, that GM crops have been “enthusiastically adopted” by the Amish, “the most technologically suspicious group in America (and the best farmers)”.
Given the dark scenarios of global warming pictured by environmentalists, you might expect them to shift their support massively in favor of nuclear power, or at the very least to give it serious reconsideration. With rare and marvelous exceptions (such as James Lovelock) this hasn’t happened. Brand mentions the romanticist strand in environmentalism that hates “to admit mistakes or change direction.” Compare the support from scientists for nuclear power in 1982, three years after the Three Mile Island accident. As Paul Lorenzini tells us in an article in Issues in Science and Technology[12], “Nearly 90 percent of the scientists surveyed believed nuclear power should proceed, with 53 percent saying it should proceed rapidly.”
Lorenzini goes on to examine the relentless, ideological opposition to nuclear power, and to technology in general. He suggests that the strident opposition to nuclear power—despite the resulting use of more coal with its far more serious effects on human health and the environment—is due to nuclear power’s symbolizing “the new world of technological advancement.”
Environmentalism comes in many flavors. Some self-described environmentalists look at each issue on its merits and harbor no antipathy toward humanity or progress. That stance was more common within the preservationist form of the movement seen until the 1960s. In that decade, environmentalism soaked up the toxic messages of writers such as Jacques Ellul, Herbert Marcuse, Paul Ehrlich, and Barry Commoner. As these pessimistic, anti-humanist notions spread through its branches, contemporary environmentalism absorbed the poisonous assumptions that technological societies are dehumanizing and that we are rushing toward an impending environmental apocalypse.
In turning its back on Enlightenment ideals of progress, implemented in large part through technology, the core of the green movement has belittled the contributions of science and technology to our lifespan, public health, prosperity, and well-being. Green ideologies have become major barriers to creating and developing the technologies that can take us farther from the darkness, ignorance, and suffering of the Early Middle Ages.
Like Bill Joy, the greens would have us relinquish new and emerging technologies such as genetic engineering, molecular nanotechnology, artificial intelligence, and biological-technological interfaces. Yet billions of people continue to suffer illness, damage, starvation, and all the plethora of woes humanity has had to endure through the ages. These emerging technologies offer solutions to these problems. The forces of prohibition and relinquishment are saying, in effect: Too bad for those now regaining hearing and sight thanks to implants. Too bad for the billions who will continue to die of numerous diseases that could be dispatched through genetic and nanotechnological solutions.
Someday, not too far in the future, people will look back in horror, wondering why people gathered in crowds to protest genetic modification of crops, yet never demonstrated in favor of accelerating anti-aging research. Holding back from developing the technologies opposed by these ideologues will not only shift power into the hands of the irresponsible and the malicious, it will mean an unforgivable lassitude and complicity in the face of entropy and death.
Of course we must take care in how we develop these technologies. But we must also recognize how they can tackle cancer, heart disease, birth defects, crippling accidents, Parkinson’s disease, schizophrenia, depression, chronic pain, aging and death, not to mention various environmental challenges including pollution and species extinction.
Some people are opposed to innovation and progress on principle, directed by values ranging from a longing for the pastoral to the anti-humanist imperatives found in the rotten core of the deep greens. Others oppose progress because progress means change and they fear change—or certain types of change. Clearly the Web evokes less fear and opposition than genetic engineering. Fear has less to do with real threats than with the ready availability of scary images and scenarios and the absence or weakness of positive, healthy, encouraging images and scenarios. (See how many dystopic, threatening science fiction movies you can list. Then try the same for predominantly optimistic visions.)
I glimpsed the power of mythic and popular narrative when I was a graduate student in philosophy. In a conversation with a member of the department, the subject of cryonic suspension came up: the practice of deep-freezing people immediately following the declaration of legal death. This professor was a razor-sharp, technically trained philosopher with a good grasp of neuroscience and no religious convictions. Just as I would have expected, she had no philosophical objections to cryonics. Nor, she acknowledged, had she any scientifically grounded objections. She disliked cryonics because she found the practice…ghastly. (It’s to her credit that she didn’t attempt to rationalize this response.)
Humanism and its heir, transhumanism, embody a commitment to progress, to the amelioration of human ills and woes, and to our universal betterment. One great virtue of these philosophies of life is that they push in a direction opposite to that of our natural resistance to change. Some humanists gave in to the ever-present temptation to turn this philosophy into something akin to a religion. The nineteenth century philosopher Auguste Comte and his doctrine of positivism stand as a glaring example. Trouble comes with the shift from a focus on questions to answers, uncertainty to certainty, science to scientism.
Scientism is the imitation in the social sciences of the methods of the physical sciences without regard for the innate differences between them. It also takes the form of equating scientific progress with social progress. This becomes especially troubling when social thinkers convince themselves that they have discovered an iron law that guarantees progress. In his 1952 book, The Counter-Revolution of Science, Friedrich Hayek referred to this practice as “the abuse of reason.”
In vigorously defending and advocating the ideal of progress then, I do not want to be mistaken as claiming that scientific approaches guarantee progress. At the very least such a view is badly incomplete. In practice it is often dangerous. The early business guru Frederick Winslow Taylor saw his “scientific management” as a tremendous breakthrough in its application of time-and-motion studies to human activity. Valuable when restricted to the appropriate objects, “Taylorism” was widely abused, nowhere worse than by Stalin and the Stakhanovite movement in the Soviet Union. Even in the United States, workers have sometimes suffered from the over-application of Six Sigma and similar methods for optimizing productive activity.[13]
Progress is far from automatic. It comes only when we expend resources—when we feed the process with effort, foresight, and determination. To borrow a phrase from the late economist Julian Simon, the ultimate resource is human ingenuity. But ingenuity cannot sustain itself in isolation. The flame of ingenuity will gutter out if not applied to the real world of activity. Ingenuity can be applied only when we protect the freedom to innovate. Nor can the ingenious mind play out and develop an idea unless it can experiment with it. Freedom to innovate is crucial for fully developing and realizing ingenuity—for allowing the ultimate resource to power progress.
Objectivity
“Dispassionate objectivity is itself a passion, for the real and for the truth.” —Abraham Maslow
“Cloquet hated reality but realized it was still the only place to get a good steak.” — Woody Allen, “The Condemned”, The New Yorker, November 21, 1977
Building on the foundation of the freedom to innovate, the Proactionary Principle next urges decision makers to Be Objective and Comprehensive. We should use an objective, structured, and explicit decision process because of how easily we all fall into emotionally colored and situationally biased judgments.
Consider all the fuss about the offshore outsourcing of jobs. The availability of news stories, the vividness of first-hand accounts of job losses, and a readiness to blame big corporations make it easy for many people to attribute a large part—even a majority—of job losses to offshoring. However, more than a million people in the US change their jobs every month while only a few hundred thousand jobs went offshore over the last several years. According to a 2004 report from the US Bureau of Labor Statistics, offshoring of service work accounted for only 1% of US job losses in the first quarter of 2004. Even including manufacturing jobs moved overseas, offshoring accounted for just 4% of all layoffs.
Subjective and biased perceptions are especially seductive in the case of risk assessment. Surveys repeatedly show that women fear cancer, especially breast cancer, far more than heart disease. That’s partly due to the heavy publicizing of statistics such as “1 in 8 women will get breast cancer.” The figure is true, but the lack of context leads people to form a distorted impression of relative risks. The 1 in 8 figure is a lifetime risk (in the USA), not the risk for every woman at every age. Further, the lifetime risk of death from breast cancer is considerably lower, at 3.29%. Compare this to a lifetime risk of death from lung and bronchus cancer at 4.74%, and from coronary heart disease at a much higher 31.9%.
On average, we should worry about dying in a motor vehicle accident four or five times as much as we worry about homicide. Media reports only worsen our intuitive sense of relative risk. Some hard numbers on this phenomenon have been provided by researchers who compared how often newspapers mentioned various causes of death to actual mortality figures.[14] Although ordinary diseases kill a thousand times more people than murder does, you’ll read about murders three times as often. Given their relative death rates, plane crashes are mentioned nearly 12,000 times too often (or smoking is mentioned 12,000 times too infrequently).
As I write [2009], vaccines are being prepared for the H1N1 virus (swine flu). You can practically guarantee that, after being vaccinated, some people will become ill and blame the illness on the vaccine. The vaccine will probably get blamed for causing heart attacks, miscarriages, and severe allergic reactions, even when the vaccine actually has nothing to do with them. Some people who are vaccinated are statistically guaranteed to suffer from one of these issues shortly after being vaccinated. That’s because, in the USA, every week there are 25,000 heart attacks[15], 14,000 to 19,000 miscarriages, and 300 severe allergic reactions (anaphylaxis). Numerous people who should know better will confuse “illness after having the shot” with “illness because of the shot”.
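The arithmetic behind these coincidences is simple base-rate scaling. Here is a rough sketch: the weekly event counts are those cited above, but the population figure and the number vaccinated per week are illustrative assumptions of mine, and the calculation ignores demographic differences between the vaccinated group and the general population.

```python
# Base-rate sketch: how many adverse events would we expect among newly
# vaccinated people in the week after their shot, purely by coincidence?
# Weekly event counts are from the essay; population size and weekly
# vaccination count are illustrative assumptions.

US_POPULATION = 300_000_000        # assumed, circa 2009
VACCINATED_PER_WEEK = 10_000_000   # assumed vaccination rate

WEEKLY_EVENTS = {
    "heart attacks": 25_000,
    "miscarriages": 16_500,        # midpoint of 14,000 to 19,000
    "anaphylaxis": 300,
}

for event, weekly_count in WEEKLY_EVENTS.items():
    # Scale the whole-population weekly count down to the vaccinated group.
    expected = weekly_count * VACCINATED_PER_WEEK / US_POPULATION
    print(f"Expected {event} within a week of vaccination: {expected:,.0f}")
```

Under these assumptions, chance alone would produce several hundred heart attacks among the newly vaccinated every week, with the vaccine playing no causal role at all.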
Even without media influence, humans simply aren’t good at intuitively estimating probabilities. Students of statistics are soon introduced to “the birthday paradox”. This states that given a group of 23 (or more) randomly chosen people, the probability is more than 50% that at least two of them will have the same birthday. If you pack 60 or more people into the room, the probability exceeds 99%. Most people estimate the odds much lower. This isn’t a genuine paradox but was given that title because the mathematical reality contradicts our intuitive estimates.
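The counterintuitive numbers are easy to verify directly. A minimal sketch of the exact calculation, assuming 365 equally likely birthdays and ignoring leap years:

```python
from math import prod

def birthday_collision_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday
    (365 equally likely birthdays, leap years ignored)."""
    if n > 365:
        return 1.0
    # P(all n birthdays distinct) = 365/365 * 364/365 * ... * (365-n+1)/365
    p_all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1.0 - p_all_distinct

print(birthday_collision_probability(23))  # just over 0.5
print(birthday_collision_probability(60))  # above 0.99
```

The probability crosses 50% at 23 people and exceeds 99% at 60, exactly as the “paradox” claims.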
Plenty of tools exist for strengthening objectivity. For example, you can take the “outside view” (or “reference-class forecasting”[16]) to counter excessive optimism. This involves reevaluating your conclusion in the objective context of a class of similar projects, initiatives, or forecasts. That’s especially valuable when it comes to projects or initiatives that an organization has never attempted before, such as implementing an unfamiliar process technology. The devil’s advocate procedure can be another powerful friend of objectivity.
Other methods include prospective hindsight; decision trees and other tools of the decision sciences; checklists of framing effects and persuasion techniques; linear models embodying expert judgment (judgmental bootstrapping); structured argumentation templates; checklists of forecasting methods; selection of disinterested experts; dialectical inquiry; role playing; auditing procedures such as review panels; and, for inherently complex phenomena, agent-based modeling techniques.
Comprehensiveness
“A man, to be greatly good, must imagine intensely and comprehensively; he must put himself in the place of another and of many others; the pains and pleasures of his species must become his own.” — Percy Bysshe Shelley[17]
Why is it important to be Comprehensive? “We must do something!” That common reaction to any major perceived problem is natural but mistaken. Given the level of knowledge and the extent of resources available at any given time, we have a finite number of options for action. Every action incurs costs, and we have no guarantee that acting on any of those options will result in a better outcome than doing nothing. And, in a complex situation, doing something is quite likely to be a bad idea. That’s especially likely when that something involves the knee-jerk response of outlawing or controlling a technology.
Consider the case of genetically modified (GM) crops. A few notable exceptions aside, most environmentalist organizations have been against them ever since plants were first genetically engineered in 1983. (The first GM food to be sold was the “Flavr Savr” tomato in 1994.) On its Web site, the Sierra Club declares:
In accordance with this Precautionary Principle, we call for a moratorium on the planting of all genetically engineered crops and the release of all GEOs [genetically engineered organisms] into the environment, including those now approved.
The campaign by the Sierra Club, Greenpeace, and others has been effective in Europe. No one was commercially growing GM crops in the UK by 2006 and the European Union continues to ban the import of genetically modified food. In the rest of the world, led by the USA, Canada, Argentina, and China, GM crops have been widely adopted. Most of the global soybean crop is genetically modified. By 2005, 222 million acres of genetically modified soybean, corn, cotton, and canola (rapeseed) were being grown.
In the minds of protestors, the spread of GM crops is to be feared and opposed. On its Web site, Greenpeace declares that this constitutes a “dangerous global experiment with nature and evolution.” They claim that genetically engineered organisms “pose unacceptable risks to ecosystems, and have the potential to threaten biodiversity, wildlife and sustainable forms of agriculture.” To these activists, being comprehensive means listing all the possible dangers they can conjure up: GM foods may be toxic; they may be allergenic; they might contaminate nearby conventional and organic crops; they might lead to the creation of superweeds.
An evidence-based view finds these concerns to be mistaken, vastly inflated, or based on inaccurate reporting. A comprehensive assessment of GM crops would give due weight to more reasonable (and far more modest) concerns. These include the sense in preventing the spread of pesticide resistance to weeds, in not allowing antibiotic-resistant genes into pathogens in the human gut, and in taking additional precautions when using genes from non-food organisms (allergic reactions are more likely).
Any reasonably comprehensive assessment of GM crops must fully consider their benefits. “Oh right, the benefits,” one can hear the activist sneer. “Benefits to greedy, giant corporations!” The many millions of people who have survived thanks to the remarkable tripling of food output since 1960 might see it differently. We face a similar challenge over the next 50 years, although starting from a higher baseline with a much higher percentage of land now under cultivation. Biotechnology is the only way to increase agricultural output to keep pace with consumption until—most likely—population stabilizes around 2064 to 2080.
After that point, we can look forward to applying further improvements in productivity to reducing our ecological footprint rather than boosting output. Plausible projections find that, over the next two decades, GM will keep food prices 15% to 20% lower than without them. Not only can GM crops deliver more output per acre, they can grow in areas inhospitable to regular crops.
We can see an example of this in the 2001 announcement that a group of scientists had engineered a transgenic tomato plant. Regular tomatoes will not grow if the water they absorb is much more than one percent as salty as seawater. These transgenic tomato plants flourished on water about half as salty as seawater. Crops that are more tolerant to salt, arid environments, heat, and cold could return to productivity millions of acres of impaired land.
If we increased the proportion of the world’s industrial wood coming from tree plantations above the current one-third, we would have less need to cut natural forests. If the claims of an Israeli biotech company are valid, eucalyptus trees can be grown in just a quarter of the time needed for unenhanced trees, allowing more of them to be grown on less land. According to Roger Sedjo, a senior fellow at Resources for the Future, “all of the world’s timber production could potentially be produced on an area roughly five to ten percent of the total forest today.”[18]
Genetically engineered crops can bring numerous benefits beyond increased quantity. They could play a major role in reducing malnutrition by containing higher levels of crucial nutrients in common foods. We can also expect to see lower-calorie sugar beets, oil seeds with healthier nutritional profiles, and potatoes that soak up less fat when fried.
We use crops for purposes besides food, so we can also expect benefits ranging from stronger, more resilient cotton fibers to brighter, longer-lived flowers. Thanks to work by scientists at the University of Georgia and the State University of New York, who are figuring out how to insert blight-resistant genes, we may restore the American chestnut tree to forests where it hasn’t been seen for two generations.
Human genome pioneer Craig Venter is engaged in genomic research to use “cellulosic” material—the stalks, roots, and leaves of corn and other plants—as an affordable source to make ethanol—a key strategy for replacing a large part of our oil consumption with biofuels. Cellulosic ethanol could be made from agricultural waste from non-food-producing plants grown on land otherwise not suited for cultivation. In the long term, Venter wants to move beyond ethanol and enable everyone to make hydrogen at home. To that end he’s working on “modifying photosynthesis to go directly from sunlight into hydrogen production.”[19]
People in the more developed nations may value even more highly genetically modified crops’ reduced need for heavy use of fertilizers, pesticides, herbicides, and fungicides. A transgenic corn approved by the EPA in 2006 with greatly enhanced resistance to rootworm beetle larva could reduce or do away with pesticide use on 23 million acres of land in the United States. In just the four years from 1996 to 2000, the adoption of GM corn reduced the use of pesticide in the US by over two million pounds and has better than halved pesticide spraying in China.
Any comprehensive assessment of a new technology and production method must consider not only direct effects but also how the new method affects existing and alternative methods. GM crops might displace both mainstream agriculture and traditional “organic” farming. Even if you’ve never been inside a Whole Foods Market, you’ll have seen evidence of the strong demand for higher-priced organic foods in the space devoted to them at your local grocery shop. When consumers buy organically farmed produce they are buying a feeling of environmental virtue. That feeling is based partly on propaganda and wishful thinking—especially if the consumer is pleased to see labels boasting of the lack of any GM ingredients. Organic farming improves on the mainstream in some ways but brings problems of its own.
It’s true, for instance, that organic farming avoids using artificial fertilizer and may use less herbicide. But this approach uses a lot of manure and requires more ploughing. Organic farmers have to till the soil repeatedly, pick weeds by hand, or use propane blow torches to scorch weeds. The increase in these activities brings environmental consequences such as water pollution and food contamination. Similarly, low-input agriculture substitutes more land for fewer chemicals. Genuinely promising advances such as no-till farming—which can radically reduce chemical and soil runoff—are much easier and cheaper when combined with transgenic crops. As science writer Matt Ridley said, “The truth is that the organic movement made the wrong call on GM.”[20]
Symmetry
“Symmetry is a complexity-reducing concept (co-routines include subroutines); seek it everywhere.” — Alan Perlis
“The most general law in nature is equity—the principle of balance and symmetry which guides the growth of forms along the lines of the greatest structural efficiency.” — Herbert Read
Symmetry between natural and human-caused risks is part of the Pro-Action: Prioritize Natural and Human Risks. The relevant text says:
The fact that a risk or threat is “natural” should not give it any special status. Treat technological risks the same way as natural risks. Avoid underweighting natural risks and overweighting human-technological risks.
The British House of Lords respected this principle when it said, “We need to look at the product, not the process.” So did the US National Research Council when stating in its overview that “the potential hazards and risks associated with the organisms produced by conventional and transgenic methods fall into the same general categories.”
In contrast, the continuing failure of the organic movement to embrace genetically modified crops serves as an example of contravening this guideline. Organic farmers clearly illustrate the asymmetrical treatment of substances altered through human agency and those not so altered—the latter usually labeled with the contentious and often misleading term “natural.” Even distinguishing between transgenic and conventional crops is problematic because it obscures the truth: that all of today’s crops are “genetically modified.”
As Matt Ridley put it, “They are monstrous mutants capable of yielding large, free-threshing seeds or heavy, sweet fruit and dependent on human intervention to survive.”[21] Wheat may seem like a perfectly natural and organic crop but it cannot survive in the wild. It is in fact a thoroughly engineered product shaped by early genetic mutations resulting in today’s species whose cells contain three whole diploid genomes, each originating in a distinct wild grass.
Along the way to this thoroughly modern wheat, plant breeders of the 1950s introduced new mutant genes for dwarfing and, in the 1960s, they deliberately mutated wheat genes by means of potent carcinogenic chemicals or exposure of seeds to gamma rays. Ridley notes that even some of the “organic” crops of today were produced this way. “Golden Promise, a variety of barley especially popular with organic brewers, was first created in the Harlow atomic reactor. Crops produced this way do not have to be tested for health or environmental risks.”
Close attention to the common use of “organic” and “all natural” labels shows up the rather arbitrary ways in which favored products are separated from “frankenfoods.” Organic farmers refuse to use synthetic pyrethroids (even though these are well targeted and don’t persist) and abhor the bacterial insecticide Bt when genetic engineering has made it part of the plant, where it will harm only pests. But they will spray Bt in a way that reaches unintended species; they help damage the land and ocean of the Andes by importing mined Chilean nitrate and fish products as fertilizer; and some continue to use unforgiving, broad-spectrum sulfate insecticides.
What about treating technological risks more conservatively than natural risks when we’re talking about a technology that transfers genes between species? Matt Ridley concisely responded to that issue in his paper, “Genetically Modified Crops and the Perils of Rejecting Innovation.” He points out that many crops arose by hybridization; that humans share far more genes with other species than we used to think—and that the commonest gene in our genome is a reverse transcriptase gene of retroviral origin; and that the crossing of species lines by genes is business as usual in the world of bacteria. Ridley concludes that “it is arbitrary and irrational to say that only the gene transfers that Mother Nature happens to do are safe and others are not.”
Prioritize
“There must be more to life than having everything!” — Maurice Sendak
Given that we have limited resources at any point in time, the other aspect of the tenet of Prioritize Natural and Human Risks should be easily accepted. I say “should” because a surprising number of otherwise intelligent people sneer, snarl, or sniff at prioritization in the kinds of cases we’re considering. We shouldn’t have to prioritize, they complain. After all, prioritizing means not only saying what we do first but also what we should refrain from doing (until later, if at all).
Many politicians cannot resist pandering to this wishful thinking. “Elect me and I will ensure universal high-quality healthcare, universal access to the best universities, and much stronger national defenses. I will reduce taxes and increase spending on Medicare and Social Security.” Indulging our wishful desire to reject prioritization only shrinks our ability to improve the world.
The prioritization tenet works closely in conjunction with Be Objective and Comprehensive. Why? Because you don’t want policy makers’ priorities to be merely those perceived as important; priorities should be evidence-based. Consider the way activist organizations have become more aggressive in focusing public pressure on corporations. Some of these activists target companies not because those companies have a major effect on the problem but because they are especially visible or successful. This makes them convenient and effective tools for attracting attention. As Michael Porter and Mark Kramer noted:
Nestlé, for example, the world’s largest purveyor of bottled water, has become a major target in the global debate about access to fresh water, despite the fact that Nestlé’s bottled water sales consume just 0.0008% of the world’s fresh water supply. The inefficiency of agricultural irrigation, which uses 70% of the world’s supply annually, is a far more pressing issue, but it offers no equally convenient multinational corporation to target.[22]
The Copenhagen Consensus provides an excellent example of a vastly preferable approach—one that exemplifies this tenet of the Proactionary Principle. When statistician Bjørn Lomborg brought together a group of eminent economists, the goal was to draw on the best available information to create a ranked list of the highest payoff solutions to current global crises. The experts compiled a list of global challenges in the areas of economy, environment, governance, and health and population. Using cost-benefit analysis, they then identified opportunities and made cost-benefit estimates. Finally, the economists’ rankings were integrated to form the Copenhagen Consensus’s overall ranking of global measures.[23]
They determined that the best measures for tackling communicable diseases were control of HIV/AIDS and control of malaria. Malnutrition and hunger could best be addressed first by providing micronutrients and by the development of new agricultural technologies. Reducing low birth weight promised a lesser benefit. Three proposals for sanitation and water were ranked as Good (but not Very Good) opportunities, including community-managed water supply and sanitation. Measures to address climate change came out as Bad opportunities—solutions such as the Kyoto Protocol and carbon taxes whose costs outweighed their benefits.
Prioritization makes for sanity in our personal life choices too. Suppose you wanted to know how to reduce your risk of death by a certain percentage for the least effort. If you turned to the big study by the Harvard University Center for Risk Analysis[24] you would discover that your risk of dying would increase by one-millionth if you:
drank a pint of wine
traveled 10 miles by bicycle or 300 miles by car or 1,000 miles by jet
drank 30 cans of diet soda
had one chest x-ray in a good hospital
lived for 150 years within 20 miles of a nuclear power station
lived for two days in New York or Boston
smoked 1.4 cigarettes
Some of these are probably safer than this claims because, for example, the nuclear risk is based on a flawed linear no-threshold model.
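Since each item on the list carries the same one-in-a-million risk of death (a “micromort”), the quantities themselves give the relative risk per unit, and any two activities can be compared directly. A small sketch using the Harvard figures quoted above:

```python
ONE_IN_A_MILLION = 1e-6

# Quantity of each activity that adds a one-in-a-million risk of death,
# per the Harvard Center for Risk Analysis figures listed above.
activities = {
    "pints of wine": 1,
    "miles by bicycle": 10,
    "miles by car": 300,
    "miles by jet": 1000,
    "cans of diet soda": 30,
    "cigarettes": 1.4,
}

risk_per_unit = {name: ONE_IN_A_MILLION / qty for name, qty in activities.items()}

# Example comparison: how many miles of driving match one cigarette?
miles_per_cigarette = risk_per_unit["cigarettes"] / risk_per_unit["miles by car"]
print(f"One cigarette is about as risky as {miles_per_cigarette:.0f} miles by car")
```

On these numbers, a single cigarette carries roughly the same mortality risk as a 200-mile drive.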
Another of the prioritization tenets says: “Give priority to addressing known and proven threats to human health and environmental quality over hypothetical risks.” Hypothetical risks differ from other unlikely threats. Importantly, they differ from evidence-based threats bearing low-probability but drastic consequences—catastrophic events such as an asteroid impact.
Make Response and Restitution Proportionate
The tenet of Make Response and Restitution Proportionate is the flipside of the previous one. Prioritizing emphasizes the wisdom of applying any restrictive measures first to the most serious dangers; proportionality emphasizes the wisdom of not overpaying for what you get: Large, more probable dangers merit more effort, more resources, and potentially more restrictive measures than smaller, less likely dangers. When we apply the Proactionary Principle for making the most of opportunities rather than avoiding risks, parallel reasoning applies: Select opportunities that offer the largest, surest payoff for the smallest investment and the lowest risk.
In considering how to prioritize, we saw a comparison of behaviors, each of which raises your risk of dying by one-millionth. If we’re making decisions on a social scale, what is the cost of saving one life per year? It’s minuscule (less than one dollar) for requiring smoke detectors in homes and for immunizing children against measles, mumps, and rubella. The same saving of human life would cost $810 for mammograms for women age 50, $3,100 to chlorinate drinking water, $14,000 to screen blood donors for HIV, $180,000 for first aid training for drivers, $2,800,000 for passenger seat belts in school buses, $180,000,000 for radiation emission standards at nuclear power plants, and a stunning $20,000,000,000 for benzene emission control at rubber tire manufacturing plants.
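These figures invite ranking. The sketch below uses the dollar amounts just listed; the budget is a hypothetical figure of my own, added to make the proportionality point concrete:

```python
# Cost to save one life per year, as quoted in the text (US dollars).
cost_per_life_saved = {
    "home smoke detectors": 1,
    "childhood MMR immunization": 1,
    "mammograms for women age 50": 810,
    "chlorinating drinking water": 3_100,
    "screening blood donors for HIV": 14_000,
    "first aid training for drivers": 180_000,
    "school bus passenger seat belts": 2_800_000,
    "nuclear plant radiation standards": 180_000_000,
    "benzene control at tire plants": 20_000_000_000,
}

# Proportionality: a fixed budget saves vastly more lives at the top
# of the sorted list than at the bottom.
BUDGET = 10_000_000  # hypothetical $10M to allocate
for measure, cost in sorted(cost_per_life_saved.items(), key=lambda kv: kv[1]):
    print(f"{measure}: ~{BUDGET / cost:,.0f} lives per year")
```

The same $10 million saves millions of lives at the top of the list and a fraction of one life at the bottom, a spread of ten orders of magnitude.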
People in the future will look on our time with sadness and horror when they reflect on a monstrous harm—one that we have done practically nothing to tackle. They will wonder how we could avoid confronting this merciless, relentless, colossal enemy. If you combine the tenets of prioritizing and proportionality, then the identity of this ancient and terrible foe should be evident: aging and death. When it comes to dying, today is a very bad day. Of course, every day is very bad, because between one rising of the sun and the next, 100,000 people die of causes that rarely kill the young. In about the time it takes to read this sentence aloud, a dozen more people will have died around the world.
The sheer magnitude of “natural” death should spur us to push it to the top of our priorities and to take strenuous measures to extend lives—to delay or abolish involuntary death. Reasonably reliable numbers now exist for the death toll in 2001. Counting across all 227 nations on our planet, the number of victims that year amounted to almost 55 million people. Take out those directly killed by accidents, suicides, or war and we arrive at 52 million “natural” deaths. As Robert Freitas has put it:
Even the most widely recognized greatest disasters in human history pale in comparison to natural death. The Plague took 15 million per year, World War II, 9 million per year, for half a decade each. The worldwide influenza pandemic of 1918 exterminated less than 22 million people—not even half the annual casualties from natural death. We can only conclude that natural death is measurably the greatest catastrophe humankind has ever faced.[25]
Even restricted to a purely economic perspective, the scourge of death assumes stunning proportions. Natural death brings with it an unparalleled destruction of wealth. Using the average of a dozen studies on the economic value of a human life, Freitas calculated that each human life lost represents a loss of around $2 million. Assume, very conservatively, that the global population age structure and the age-specific mortality are the same as those of the United States. Even so, 52 million natural deaths amount to an economic loss of around $100 trillion every year. That’s three times larger than the entire world’s annual economic activity. As Freitas states, “Natural death is a disaster of unprecedented proportions in human history.” So what are we doing about it?
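Freitas’s arithmetic is easy to reproduce. The death toll and per-life value come from the text; the world-output figure is my own rough assumption for 2001, used only to check the “three times larger” comparison:

```python
NATURAL_DEATHS_PER_YEAR = 52_000_000  # from the text (2001 figure)
VALUE_PER_LIFE = 2_000_000            # avg of a dozen studies, per Freitas
WORLD_OUTPUT_2001 = 33e12             # assumed rough gross world product

loss = NATURAL_DEATHS_PER_YEAR * VALUE_PER_LIFE
print(f"${loss / 1e12:.0f} trillion lost per year")            # $104 trillion
print(f"{loss / WORLD_OUTPUT_2001:.1f}x world annual output")  # ~3.2x
```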
Anti-aging research should be a top priority—probably the top priority. A proportionate response, considering the unmatched magnitude of the destruction and the potential payoff, would be vastly larger and more vigorous than we have today. In the case of anti-aging research, we see a clear situation where a relatively tiny investment is required for a massive return. Disagreement exists over the probability of a payoff. Growing evidence-based optimism among gerontologists combined with the magnitude of the payoff nevertheless makes it obvious that we are devoting disproportionately few resources and effort to defeating aging.
If the goal of controlling and defeating aging seems too ambitious, consider the enormous value of even a scaled back objective. We have already slowed aging in a range of species, the favorite beneficiaries being mice and rats. Suppose a vigorous anti-aging research initiative did no more than slow human aging to the degree already achieved with rodents. The resulting extension of healthy life expectancy[26] would exceed that of abolishing cancer, cardiovascular disease, and adult-onset diabetes.
Consider an intermediate goal between what has already been done on the one hand and the ultimate goal of total control over the aging process on the other—what Aubrey de Grey calls “robust mouse rejuvenation.” This means extending the lives of already long-lived mice, starting at two-thirds of their life expectancy and tripling their remaining lifespan. De Grey, who has mapped out the challenge in detail, figures we would have a 90% chance of achieving this goal within ten years, given $100 million per year of funding.[27] That amount is probably too small to even show up on any statement of government programs and would be quite manageable by many individual private parties. Our prioritization of this goal would scarcely be affected even if de Grey’s estimate were off by a factor of a hundred.
Given the size of today’s global population, if our increased effort brought the arrival of an anti-aging treatment forward by only a few years, eventually we would have saved more lives than have been destroyed by all the wars we’ve fought since our species began. Even the most coldly calculating economists might be moved to tears of joy as they consider what would accompany victory over aging: increased economic activity and improved public finances as the need for pensions and Medicare goes away. The enormous flow of resources into medical care would abate since the old, frail people who consume most of those resources would no longer be old or frail.
Despite all this, funding is hard to come by for research on the biological control of aging and longevity. Most scientists shy away from it and national research priorities exclude it. Even the otherwise excellent Copenhagen Consensus failed to include longevity research among its candidates for benefiting humanity. The case of anti-aging research not only illustrates the tenets of prioritization and proportionality. It also highlights the way in which the Proactionary Principle guides the assessment of opportunities every bit as effectively and systematically as it assesses risk.
Embrace Diverse Input
“Where there is an open mind there will always be a frontier.” — Charles F. Kettering
The Proactionary Principle’s precept Embrace Diverse Input (or “be open to independent thought”) could be thought of as contained within the injunction to Be Comprehensive and Objective. It asks decision makers to: “Take into account the interests of all potentially affected parties and keep the process open to input from those parties or their legitimate representatives.” Remaining open to input from all those potentially affected by a decision enriches cognitive diversity and expands the range of options considered. The principle of openness to input deserves stating independently because of the potency of intellectual openness and because of how easily and frequently it is ignored or flouted.
Two high-stakes episodes during the presidency of John F. Kennedy demonstrate two things: The dangers of making decisions in a closed, isolated environment, and the advantages of making decisions in an intellectually open environment. Political science and social psychology textbooks often cite the first of these episodes, the 1961 Bay of Pigs invasion, as a classic example of groupthink.
The idea was for the United States to fund and train a group of Cuban exiles to invade Cuba and set off a revolution against Castro’s regime. Kennedy went ahead because of the CIA’s confidence in the plan. According to a detailed memo later written by the CIA, the plan depended on several beliefs that turned out to be mistaken. Crucial among them was that Cubans would be thankful to be liberated from Fidel Castro and would quickly add their active support. Mass arrests and some executions by Castro prevented this from happening.
Why did none of President Kennedy’s top advisors speak out against the plan? That team of advisors exemplified Irving Janis’s definition of groupthink: “A mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.” These advisors shared the same educational background and each felt motivated to hold his tongue to avoid upsetting the president. Groupthink was reinforced by Robert Kennedy’s role as what Janis called a “mindguard.” The younger Kennedy discouraged dissent by saying the president had already made up his mind. Further, the group didn’t seek out the views of military leaders or other outside experts on the soundness of their strategy. The suppression of dissent ensured that the fatal flaws in the Bay of Pigs invasion would not be exposed.
As Irving Janis related in Victims of Groupthink[28], the Bay of Pigs debacle spurred Kennedy to order a review of the foreign policy decision-making process. To the president’s credit, he absorbed and implemented the findings and succeeded in avoiding groupthink during the Cuban Missile Crisis. Among the beneficial measures, participants in discussions were urged to see themselves as “skeptical generalists” rather than as partisan representatives of particular departments; Robert Kennedy and Theodore Sorensen were appointed as intellectual watchdogs and charged with unearthing all possible disagreements, weaknesses, and untested assumptions; task forces were instructed to throw out rules of protocol and deference to rank; the group was divided into sub-groups to develop a variety of options, thereby reducing group cohesion; and Kennedy stayed away from some early meetings of the task force so that he wouldn’t influence the opinions of others.
The Bay of Pigs fiasco, the Challenger space shuttle disaster, and a distressing number of other major decisions exemplify what has been characterized as a process of advocacy rather than one of inquiry.[29] When we critically examine such decision processes we can usually detect most of Janis’s eight symptoms of groupthink. These include an illusion of invulnerability that leads to acceptance of extreme risks; collective rationalization; a belief in inherent morality that allows members of the group to ignore the moral consequences of their decisions; stereotyped views of out-groups; direct pressure on dissenters; self-censorship; an illusion of unanimity; and self-appointed “mindguards” who shield the group and the leader from information that might undermine the group’s cohesiveness, views, or decisions.
Even after a decision has been made and implemented, additional benefits to being open to input are possible. The quality of a decision or a forecast is likely to be higher if the intention from the start is to comprehensively report the methodology used. In his study of pre-election polls in the Israeli press, political scientist Gabriel Weimann found a strong correlation between the accuracy of polls and how thoroughly they reported deficiencies in their methodology.[30] That’s not too surprising. Those who point out the limitations of their reasoning process are likely to be those who are the most careful in that process.
Respect Diversity in Values
The tenet Embrace Diverse Input also urges decision makers to “Recognize and respect the diversity of values among people, as well as the different weights they place on shared values. Whenever feasible, enable people to make reasonable, informed tradeoffs according to their own values.” Diversity of input and openness are mutually reinforcing: openness in the decision process allows diverse voices to be heard and considered. This tenet also coheres well with Be Objective and Comprehensive: a comprehensive survey of options helps put that chorus of voices to work in finding solutions.
By respecting diversity in values we affirm the wisdom of centuries of Western political philosophy. That tradition champions individual rights and upholds the sovereignty of individuals. In rejecting absolutism it opens ways to increase the well-being of society as a whole and to reduce conflict among its members. Respecting value diversity represents an ideal for us to approach but we may not always be able to realize it one hundred percent. That’s especially true for decisions made by public authorities that tax all of us to deliver services and control behavior—decisions with which we may not agree.
How can we most reliably and effectively put this tenet into action? We can resist those who—knowingly or not—push on us their preferences dressed up as absolute values. As we have seen, the precautionary principle is a prime example of this subterfuge. In the words of Hanekamp and Verstegen:
When precautionary policies are devised, all for the benefit of European citizens, then a ‘true value’ of human and environmental wellbeing is assumed. This ‘true value’ carries utopian overtones. Resisting precautionary regulation is branded as irresponsible. This means certain parts of society define and impose on others their conception of human health and environmental quality and the maintenance thereof. Precaution thereby tends to empower supranational bureaucratic organizations. It resembles enlightened absolutism.[31]
We can also acknowledge this tenet by recognizing opportunity costs. The opportunity cost of a choice is its cost in terms of the next most highly valued opportunity forgone and the benefits that would come with it. When politicians pledge to “do everything possible” to enact a policy, they treat that policy as an absolute and fail to consider its opportunity cost. Of course, such politicians are demagogues who pander to voters who say they want to see some social objective achieved regardless of cost. It’s easy for us to do that in the abstract—or when we don’t see the cost of the choice.
An old Spanish saying suggests an excellent curative for this: “Take what you want,” said God. “And pay for it.” Markets embody that divine instruction by requiring us to pay for our choices. Market prices are determined not by stated preferences (such as you might express when being surveyed) but by revealed preferences—economist-speak for preferences that you act on. By requiring us to pay for our choices markets educate us about their costs. At the same time, markets provide a platform that enables tradeoffs: we can part with a good in exchange for another that we value more highly.
In the case of producer goods this means resources tend to go to those who can produce the most value with them. We can see a relatively new example of this in the formation of emissions trading markets. These set a cap on the amount of a certain pollutant or gas (such as carbon dioxide) but do not specify how much of that total is allowed for any particular country or company. Instead, rights to fractions of the total are traded on the market. The least efficient producers—those that pollute a lot for relatively little output—will have a financial incentive to sell their pollution rights to more efficient producers—those that pollute less for the same or more output, and to those who can figure out how to reduce their current levels of pollution.
Markets don’t always do a good job in representing our preferences and in guiding economic activity accordingly. When they don’t, it’s because of the inability or failure of government and the legal system to clearly define property rights. When critics wag a finger at “market failure,” their targets typically are not smart markets but undeveloped and poorly defined markets or markets that have been crippled by ill-advised regulation. We can make markets “tell the ecological truth”, for instance, by crafting property rights to incorporate all the ecological costs of an activity.
Simplify
“The ability to simplify means to eliminate the unnecessary so that the necessary may speak.” — Hans Hofmann, quoted in An Introduction to the Bootstrap, 1993[32]
Making complex decisions using a sophisticated approach such as the Proactionary Principle will call for some intricate thinking. That doesn’t mean making things any more difficult than necessary. As part of being objective, as called for by the first tenet, we should seek to simplify where reasonable.
When we’re generating alternatives, making forecasts, and choosing among options, we should use simple methods—unless more complex methods improve accuracy enough to be worthwhile. Anyone who has used Occam’s Razor to shave away layers of unnecessary assumptions will tell you that simpler hypotheses and methods have fewer ways of going wrong. This implies adapting the Proactionary Principle to the situation, working through its guidelines more or less elaborately depending on the gravity of the decision and the time available to make it. Simple decisions call for a quick and dirty application of the Principle.
The virtue of simplicity in method is intuitively plausible enough. In the case of methods for forecasting we can support intuition with empirical evidence for the benefits. In Principles of Forecasting[33]—a superb source of wisdom on the topic—Scott Armstrong sums up the evidence as showing that “simple methods are generally as accurate as complex methods.” This conclusion comes from studies of judgmental, extrapolation, and econometric methods. He also notes that simplicity “aids decision makers’ understanding and implementation, reduces the likelihood of mistakes, and is less expensive.” Among the many evidence-based principles in the book, two are especially relevant: 15.1: Present forecasts and supporting data in a simple and understandable format. 15.2: Provide complete, simple, and clear explanations of methods.
Revisit and Revise
The last of the five tenets of the Proactionary Principle is Revisit and Revise. This recommends doing something that may seem obvious: Create a trigger to prompt decision makers to revisit the decision, far enough in the future that conditions may have changed significantly, but soon enough to take effective and affordable corrective action. Getting in the habit of tracking assumptions, forecasts, and decisions and comparing them to actual outcomes enables an organization to learn from its mistakes.
You might think this would be common practice. Nobel Prize-winning economist Daniel Kahneman agrees that both individuals and groups need mechanisms to review how they make decisions. He has found executives to be impressively interested in the issue but also highly resistant to learning from their mistakes by keeping track of decisions. When it comes to setting up a system to evaluate a record of biases, errors, and off-base forecasts to create a more rational process, “they won’t want to do it”.[34]
All decisions imply forecasts, implicitly or explicitly. Yet decision makers give little if any attention to reviewing the accuracy of forecasts or forecast methods. Their informal assessments of past forecasts tend to be biased. In part, this is because forecasts are made with foresight but evaluated with hindsight. When you know what has happened you will exaggerate how predictable events were. On top of hindsight bias, forecasting often suffers from ambiguity. Ambiguity makes it hard to know precisely what was predicted or how accurate the predictions have been.
In light of these problems, Baruch Fischhoff[35] has formulated guidelines for providing forecasters with feedback that is prompt, unambiguous, and designed to reward accuracy. His guidelines embody the wisdom of structure because they add up to a formal review process to use before and while making forecasts and when evaluating them.
We should be disturbed and annoyed at our tendency to make decisions with grave consequences about new and emerging technologies in what can fairly be described as a sloppy manner. With the Proactionary Principle in place, it’s time to delve deeper into the extent of and limits to our knowledge of the future.
[1] Kelly 2008.
[2] Isaacson 2003.
[3] Franklin 1768.
[4] Franklin 1753.
[5] Nordhaus 2001.
[6] There are also benefits, typically ignored. Far more people around the world die from exposure to cold than from exposure to heat. Estimates range from three times to nine times.
[7] Lomborg 2001b.
[8] 2023 note: Even more unforgivable, against all historical and economic evidence, people believe that regulations will achieve precisely what they are intended to achieve.
[9] Kelly 2008.
[10] See Armstrong 2001 for a comprehensive survey of evidence-based methods.
[11] Brand 2005.
[12] Lorenzini 2005.
[13] Witzel 2005.
[14] Combs and Slovic 1979.
[15] From http://www.msnbc.msn.com/id/33045346/ns/health-swine_flu/
[16] Kahneman & Tversky 1979.
[17] Shelley 1821.
[18] Sedjo 1995.
[19] Douthat 2007.
[20] Ridley 2006.
[21] Ridley 2006.
[22] Porter & Kramer 2006.
[23] Lomborg 2013; Lomborg 2015.
[24] Wilson 2001.
[25] Freitas 2002.
[26] Miller 2002.
[27] De Grey 2004.
[28] Janis 1972.
[29] Garvin and Roberto 2001.
[30] Weiman 1990.
[31] Hanekamp and Verstegen, 2006.
[32] In Efron & Tibshirani 1993.
[33] Armstrong 2001.
[34] Schrage 2003.
[35] Fischhoff 2001.