5 Comments

Hey Max,

In a rare case of algorithms getting something right, Notes threw this piece up at me and I've decided - with some trepidation - to comment. The cause of this nervousness is simply that the other Max typically encounters my comments as a fly in his primordial soup, and I rather suspect you will too. Nonetheless, here I am! And let me say, I am commenting in part out of respect for your anarchist roots, since I usually don't want to talk to people working in your general space, for reasons that will become clear. But Michael Moorcock has been central to my philosophical development, and as such I have great respect for anyone grappling with the problems of anarchy. It calls me and I resist; it is practically my life's story.

Prologue aside, I find this entire essay akin to a stone that skims across the water and never goes in - there's a joy to that, but it misses the satisfying 'plop'. Let me preface any further remarks by saying that my background is in AI - my Masters degree was in this field, although by the time of my PhD I had veered into philosophy, where I remain, and remain (as I like to say) an 'outsider philosopher'. I say this not to puff up my feathers, but to explain where I am coming from, as I feel it relevant to understanding what I have to say: I am a heretic who fled the clerisy.

On your downplaying of existential risks, I concur - sort of. I mean, the existential risks are more absurd than you seem to think, but you are wise to deflate in this regard. Current AI is still built on the same bag of tricks I studied in the 90s; the sole differences are the improvements in computing power, the availability of vast quantities of human data to train the neural networks, and the plumbing to get more variety out the other side. I remain, as I did then, woefully unimpressed with this fuss over AI. We certainly may destroy ourselves with our technology (in a sense, we already have, but let's leave that tangent aside), but there is no risk here greater than the one we already shouldered with nuclear weapons, and the illusion that there is depends upon overhyping the difference between (actual) AI systems and (imaginary) 'Artificial General Intelligence', or 'super-robots'.

The true risk with current AI is not existential at all, at least in the sense of extinction. It is political and social. We already exist on the edge of political catastrophe, or perhaps have fallen off that edge, and the transplanting of discourse into social media represents one of the greatest political threats of our time. The danger of AI is in turbo-charging censorship even further than it has already been driven by US federal agencies such as CISA (the absurdly titled Cybersecurity and Infrastructure Security Agency - because one 'security' is never enough when you are playing in the totalitarian toolbox!). AI already successfully silences what is unwelcome to the ruling powers, which has pushed the sciences deep into a state of pseudoscience in which the engine of validation fails and all that is left is commercial opportunism. This is the most plausible risk of AI: the entire control of social narrative in a world where all discourse is online, along with the termination of scientific practice.

Hence, indeed, calls for regulation. I don't know how much of this is naivety (the great houses and guilds of today have no shortage of it) and how much of this is gentle pushing by enthusiastic spooks, but inevitably (as you seem to correctly intuit) we're hearing calls for regulation that amount to monopolising AI for State and commercial benefits - primarily censorship. The good news in this regard is that this cannot entirely succeed for the same reason that 'the black library' (torrent file-sharing) could be pressured but not eliminated (good news for cypherpunks, too!). The trouble with technological regulation right now, anyway, is that there is no framework by which it can be pursued, because none of the contemporary empires have any interest in co-operation. So talk of regulation can only be calls for building new monopolies on old monopolies. Hard pass.

The strange turn in your thinking - which is not strange to you, because of your faith in technology, I suspect - is that you suggest that what AI can bring us is the toolkit to extend life, which you view as eucatastrophic in the non-Tolkien sense. But life extension is merely catastrophic; there's nothing good here except the fantasy. I doubt I will persuade you, since you are committed to this path, but what could be worse at this point than extending the lives of the wealthy (the proximate consequence of such tech) except perhaps extending the lives of everyone...? If you wouldn't call for raising the birth rate by an order of magnitude, you cannot reasonably support collapsing the death rate by an order of magnitude either. Maybe you've already written on this - if so, a link is a welcome rebuttal!

Everyone who inhabits this fictionality surrounding life extension has to wobble around the trade-off, i.e. that life extension must come at the sacrifice of births - and what do you think the regulation of this would look like...? I am unconvinced by the 'I'd gladly trade longer life for no kids' crowd, because I see a nest of vipers along this path if it works on the honour system, and something far worse if governments get their claws into it. Honestly, I don't quite understand why death has got such a bad rap in the technocratic paradigm, except that we traded the old mythologies for new ones and never gained the insight into them that the mid-twentieth century philosophers foolishly believed we had gained. We didn't. Perhaps we never will.

The good news (for me; for your side it is bad news, I'm afraid) is that the problems that must be solved to extend life are not ones that can be addressed solely by throwing AI into the gap. Just as the 'mapping of the human genome' was an excellent boondoggle for those selling gene sequencers, with next to zero benefits to anyone else, AI can do zip with any amount of genetic data, because we haven't even come close to building a 'genetic computer'. Genetic 'code' is not analogous to computer code at all (except in the metaphorical sense of 'hacking', which is accurate to what we can do with it). The most likely way of extending life expectancy within the next few centuries is creating cold-blooded humans (literally; we already have the metaphorical kind), but this would have to be inflicted at birth, would come with the associated lifestyle modifications, and quite possibly entail a massive step down in intellectual prowess (as if we weren't already dealing with this...).

So alas, here I am, most likely as a gadfly where I am not welcome. Evidently, I am a technological deflationist - no-one is more surprised at this than I, I assure you! I gorged on sci-fi in my youth, and one does not take a Masters in AI to pop balloons. But once you've been to the factory, you are no longer tempted to drink the Kool-Aid, because you've seen what it is made from (which is literally true, by the way, as well as being metaphorically true).

Because this feels like the right thing to do in conclusion, let me leave you with a link for one of June's Stranger Worlds. These are only 750 words (3-minute read), and thus shorter than this comment(!). It captures rather well how I feel about both AI and imaginary super-AI:

https://strangerworlds.substack.com/p/laws-of-robotics

It is also, I suppose, an invitation.

With unlimited love and respect,

Chris.


I do hope you are correct: I suspect strong AI is a prerequisite to my being successfully revived from cryonic suspension. I firmly agree with everything you say about the benefits that will accrue if we can just get it right, and the hugeness of the missed opportunity if we decide the risk is too great but are mistaken.

But S-curves notwithstanding, I see no reason to believe that human-level intelligence is as high as you can go. And you can ask the Neanderthals or Homo erectus how well it works to have somebody around who is smarter. Or you could, if they were still here. So I’m gonna stay worried.

author

The Neanderthals did not know how to work with a new species (or sub-species) and couldn't integrate with them physically or functionally. Nor did they have any way of shaping the emerging sub-species. Our situation is quite different.


Yeah. But if they had tried -- and who's to say they didn't? -- they would soon have found that we were shaping them instead.

Your discussion of Obstacles to AI Doom is not compelling, and smells to me like wishful thinking. I'm not impressed with either the Drake Equation or its AI analog, because they are invariably garbage-in-garbage-out; we have little reason to assume any particular value for the inputs, and the Drake Equation has been used both to argue that we are unique and that life is plentiful. You say, "I suspect that the 'alignment problem' will turn out to be something different than is being discussed and more tractable." Maybe. I hope so. It would be great if it were an engineering problem, but we're imagining something at least as smart as humans, and attempts over the years to engineer humans have not met with great success.

Understand that I share your contempt for the Precautionary Principle and in almost every other context I am all for your Proactionary Principle. I would love to be convinced, and have read widely in the hope of being convinced, that AI is just like every other challenge we face, but so far I have not been.

You say, "I’m willing to tolerate a significant possibility of AI doom in return for the larger possibility of massive gains in human life span and wellbeing," and when it comes down to it, I guess I am too. Odds are I'll never know whether that was stupid: either I'll wake up in Utopia or I won't.

author
Jul 3, 2023 (edited)

How could Neanderthals have integrated with the new sub-species? They lacked the technological tools and infrastructure that we have to integrate with AI, functionally and/or physically.

On the Obstacles to AI Doom, I would make the converse point: AI doomers seem to see AI magically circumventing every possible obstacle simply by saying "because they will be much smarter than us". I find that unconvincing. Some obstacles will be up to us to maintain (unless we have found other ways to protect ourselves), such as not giving AI real agency and not letting it control crucial and extensive physical facilities (weapons, energy). We can use non-AGI AI so long as it's not structured in a way that lets AGI take it over.

I do have considerable sympathy for your comments on the probability calculations. I've continued thinking about it since I wrote this. If I were to write this today, I would include more discussion of the limitations of this approach (I'm currently working on a piece discussing such calculations in a cryonics context that will appear over at the Biostasis Standard). You have to be really careful in what factors you choose and, yes, it's easy to game the approach. It probably isn't a workable approach for AI risk except to show that probabilities will look different when you break down an outcome into sub-factors and events. People tend to overestimate the probabilities of conjunctive events. The better approach is probably to focus on the factors that could act as barriers or facilitators to AI doom/takeover, rather than coming up with an unreliable number based on factors that are hard or impossible to quantify.
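
To make that point concrete, here is a minimal sketch of the decomposition idea, using purely illustrative numbers (these are nobody's actual estimates, least of all the ones in the essay): multiplying a chain of required sub-events drives the headline probability well below any single factor, and equally defensible-looking inputs swing the result by orders of magnitude, which is the "garbage in, garbage out" worry raised above.

```python
# Illustrative only: hypothetical sub-factor probabilities for an "AI doom"
# chain, in the spirit of a Drake-equation-style decomposition. None of these
# numbers come from the essay; they exist only to show how the arithmetic behaves.
from math import prod

# Each entry: probability that one required step in the doom scenario occurs.
doomer_inputs = [0.9, 0.8, 0.7, 0.9, 0.6]   # generous, doom-leaning inputs
skeptic_inputs = [0.5, 0.3, 0.2, 0.4, 0.1]  # skeptical inputs, same structure

print(prod(doomer_inputs))   # ~0.27  -- already below any single factor
print(prod(skeptic_inputs))  # ~0.0012 -- roughly 200x lower

# Same number of factors, individually plausible-looking values, answers that
# differ by orders of magnitude. The decomposition shows how conjunctions
# shrink, but it cannot by itself tell you which inputs to trust.
```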

I do understand and appreciate your support for the Proactionary Principle in other contexts. I've heard the same thing from others, including Eliezer and Zvi. This piece attempted to point out that advanced AI is likely to make possible massive, existential benefits. From your last sentence, it doesn't look like we are that far apart. See you on the other side!
