The Superintelligent AI Discussion
Some discussion points
On January 9, I will be discussing major issues surrounding artificial general intelligence (AGI) and artificial superintelligence. Before the main discussion, we will hear statements from Peter Voss, Ray Kurzweil, and others. The debate/discussion that follows includes Eliezer Yudkowsky and Anders Sandberg. What follows is not my usual essay but some bullet points of issues I want us to cover.
Please sign up for the online event
A point of agreement (I think): Major panics are almost always groundless or misguided. Centralized regulation has serious unintended consequences. Where Eliezer and I disagree is on whether AI is an exception.
In his book (with Nate Soares), If Anyone Builds It, Everyone Dies [“IABIED”], Eliezer makes his argument specifically about approaches to AI that are similar to the currently dominant approach. His arguments do not apply to entirely different approaches such as “cognitive AI.” This is a crucial point. If his argument is incorrectly taken to apply to ALL AI, we risk shutting down or severely constraining vital AI advances.
It is not at all obvious to me that LLM AGIs (rather than a different type) will achieve meaningful superintelligence. AI needs common sense, physical reasoning, and world models. Current architectures handle these very poorly. A different approach may be needed.
When might a non-LLM approach lead to superintelligence? I have friends telling me “in the next couple of years” but I don’t know. Before the currently dominant paradigm, classical AI approaches were tried for decades. It might be quite a while before superintelligence.
Especially when it comes to the current approach to AI, I question the usual assumption that an AI achieving human intelligence will rapidly become increasingly superintelligent. Is intelligence a property that can be rapidly improved without dependence on its environment and context? Practical intelligence is not a property of isolated individual minds. It requires a lot of cooperation to get things done. That introduces much friction and likely prevents extremely rapid takeoff.
If the concern of people such as Eliezer is a bad outcome from the current approach, and since it is highly unlikely that anyone can stop development of LLMs/transformer-based systems, should we not be pushing for more development of different kinds of AI?
To what extent is it feasible to develop superintelligent special purpose AIs but not AGIs? Limiting the size and power of computing centers does not distinguish between them.
In IABIED, the worrier’s “solution” involves global authoritarian government. Somehow, governments will work together to a common end, with no disruptions from political changes. The downside of such a regime – if it were possible to make it work – would be awful.
IABIED’s account of how SAI would take over proceeds as if (a) humans are not monitoring and do nothing to respond, and (b) no human+AI teams are responding. In reality, achieving not only SAI but its ability to control the physical world would not happen anything like as rapidly as the doomers suggest.
The path from superhuman cognitive capabilities to superhuman control over the world will be lengthy and difficult, not fast and simple. As one writer put it, you are much more intelligent than a cat but you may still find it difficult to force a cat into a carrier if he doesn’t want to go.
Heavy regulation is unlikely to stop AI (because people will not want to stop it) but will centralize its capabilities, which we should try to avoid, especially if the control centers are governments. Reasonable protective measures can be taken at various levels, including the use of liability law that already exists.
If LLM-AGIs remain internally mysterious and problem-prone, do not give them unfettered access to such things as missiles, financial markets, power plants, etc. These facilities can be automated, but as systems entirely distinct from the SAIs.
The paperclip maximizer idea seems particularly silly. (IABIED does not depend on this.) Contrary to the “Orthogonality Thesis”, complex minds are likely to have complex motivations.
Finally, the human-destroying AGI sounds a lot like the stories of genies. Both are all-powerful and give you what you request but interpret your request in a way that you regret. The evil AI also sounds like the Devil (or other destructive supernatural entities). It appears to fit the religious “hole” or receptor in our brains. This doesn’t make the claims false but should make us cautious.
I’m looking forward to a vigorous discussion. Join us!



Some good points about AI and SAI development and regulation from Max. Ambivalence exists at every level of intellectual engagement about how to optimally deal with AGI. Eliezer Yudkowsky has been obsessively "handwaving" over this for over 25 years, but that doesn't mean he is correct in calling for a virtually impossible to police international ban. We have a president flouting pretty much every social and legal norm, and we as a country can't seem to control him. So stopping SAI seems unlikely, but the authoritarian steps taken to try could be truly awful. It is helpful to recall the doomsaying about Y2K and other threats. Maybe we'll get through this "great filter" and matriculate to a Universe-spreading civilization.