Gary Marcus writes: “The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs. What if that conventional wisdom were wrong?”
First of all, that hardly seems to be the conventional wisdom. The conventional wisdom has long been appallingly favorable to regulation. This seems to be driven by a lack of understanding of how regulation works. Advocates see a problem (real, or just something they want to change). They write up a regulation to deal with it. Bang! Problem solved. Except the problem is not solved.
Second, like so many other commentators on the topic of AI risk (and other topics), Marcus fails to distinguish between regulation and (universal) law. Absence of regulation does not mean that anything goes. If you are concerned about AI leading to new instances of misrepresentation, defamation, or fraud, all you need to do is apply existing law. You don’t need a new regulatory agency and complicated new regulations. In a sensible legal system, judges can affirm the application of the same, universal legal principles to AI. This should be possible in the USA. Even if our legal system has been badly damaged by regulation and excessive litigation, it might at most take passing a law that simply makes that same affirmation.
A little more refinement might be called for, since an AI agent differs somewhat from a human agent. When a human has another human act on their behalf and fraud or another harm is inflicted, both are liable. If you think a particular AI agent is less independent than a human equivalent, you will hold the order-giver more responsible and culpable. If you think an AI agent is more independent (and capable) than a human in a comparable position, you will hold the AI more responsible. Either way, universal law applies and regulation is unnecessary. We don’t need an Office of AI Defamation or whatever.
Friedrich Hayek adapted two Greek terms to distinguish between the two kinds of order: “Classical Greek was more fortunate in possessing distinct single words for the two kinds of order, namely taxis for a made order, such as, for example, an order of battle, and cosmos for a grown order.” Approaching AI with general rules means respecting the emergent order of cosmos. This allows for change and the unexpected. Emergent or spontaneous orders are more resilient, embody distributed knowledge in a way that a centralized body cannot, and are the only way to manage complex systems. Imposed order or taxis can work on a smaller scale but does worse as complexity increases because it cannot make effective use of widely distributed information. The more complex the AI ecosystem becomes, the less it can be sensibly managed by taxis; instead the order must be maintained “only indirectly by enforcing and improving the rules conducive to the formation of a spontaneous order”, as Hayek put it.
Marcus says that “regulation doesn’t always stifle innovation”. This seems to admit that it usually does, which is a start. If we see AI as an especially important issue and we acknowledge that regulation often stifles innovation, shouldn’t we have a very strong argument for why AI regulation is different before rushing to impose it? In other words: Do you feel lucky? Do ya, punk?
It is true that regulation can promote innovation in one direction. Throw enough resources at something and narrow the uses to which they can be put, and you might get faster or better results. Maybe. Even if you do, you’ll get those results at the expense of probably better or more cost-effective alternatives. The Soviets promoted the development of heavy industry and may well have built capacity faster than otherwise (although they never considered free markets as an alternative), but this came at enormous expense in the form of forced migration, forced labor, lower living standards for regular people, and slower advances in other areas. Marcus and those who make the same argument are falling into the broken window fallacy. They are focusing on what is seen and ignoring what cannot be seen.
A regulation may succeed in promoting one goal, but it always does so at the cost of other goals and values. It forces this distortion on people by taking the tradeoff out of the realm of choice. “Regulation around the environmental impact of cars has spurred advances in electric cars.” So have subsidies. But at what cost? The extra incentives are not free. The fact that few people would buy electric cars even today without subsidies shows that these incentives cost us and force us away from other ends we value.
Even when regulation “works” (furthers one objective), it not only imposes other costs, it tends to stray from even that one goal over time. The subjects of regulation become the beneficiaries. In fact, if you’re really worried about AI being smarter than us, regulation hands AI a powerful lever for controlling others. Humans capture regulatory agencies, so a superintelligent AI will be able to do so all the more easily.
The AI regulation that Marcus wants amounts to industrial policy. Fans of industrial policy – what I prefer to call “coercive economic manipulation” – used to point to Japan’s MITI and its Fifth Generation Computing initiative as a supposedly successful example… until it became an obvious failure and was quietly forgotten.
Marcus also says, “This regulation may force the Chinese tech community to solve a version of the alignment problem…” China may work hard on a form of alignment – compulsory alignment with the goals and beliefs of the Communist Party – but is that really what you want to hold up as a model and to emulate? ChatCCP?
The distinction between cosmos (spontaneous order, universal principles of law) and taxis (organization, imposed order) relates to constructivist rationalism, on which I’ve recently added several blog posts.