
Hey Max,

In a rare case of algorithms getting something right, Notes threw this piece up at me and I've decided - with some trepidation - to comment. The cause of this nervousness is simply that the other Max typically encounters my comments as a fly in his primordial soup, and I rather suspect you will too. Nonetheless, here I am! And let me say, I am commenting in part out of respect for your anarchist roots, since I usually don't want to talk to people working in your general space, for reasons that will become clear. But Michael Moorcock has been central to my philosophical development, and as such I have great respect for anyone grappling with the problems of anarchy. It calls me and I resist; it is practically my life's story.

Prologue aside, I find this entire essay akin to a stone that skims across the water and never goes in - there's a joy to that, but it misses the satisfying 'plop'. Let me preface any further remarks by saying that my background is in AI - my Master's degree was in this field, although by PhD I had veered into philosophy, where I remain, and remain (as I like to say) an 'outsider philosopher'. I say this not to puff up my feathers, but to explain where I am coming from: it is relevant to understanding what I have to say that I am a heretic who fled the clerisy.

On your downplaying of existential risks, I concur - sort of. I mean, the existential risks are more absurd than you seem to think, but you are wise to deflate in this regard. Current AI is still built on the same bag of tricks I studied in the 90s; the sole differences are the improvements in computing power, the availability of vast quantities of human data to train the neural networks, and the plumbing to get more variety out the other side. I remain, as I did then, woefully underimpressed by this fuss over AI. We certainly may destroy ourselves with our technology (in a sense, we already have, but let's leave that tangent aside), but there is no risk here greater than the one we already shouldered with nuclear weapons, and the illusion that there is depends upon overhyping the difference between (actual) AI systems and (imaginary) 'Artificial General Intelligence', or 'super-robots'.

The true risk with current AI is not existential at all, at least in the sense of extinction. It is political and social. We already exist on the edge of political catastrophe, or perhaps have fallen off that edge, and the transplanting of discourse into social media represents one of the greatest political threats of our time. The danger of AI is in turbo-charging censorship even further than it has already been driven by US federal agencies such as CISA (the absurdly titled Cybersecurity and Infrastructure Security Agency - because one 'security' is never enough when you are playing in the totalitarian toolbox!). AI already successfully silences what is unwelcome to the ruling powers, which has pushed the sciences deep into a state of pseudoscience where the engine of validation fails and all that is left is commercial opportunism. This is the most plausible risk of AI: the entire control of social narrative in a world where all discourse is online, along with the termination of scientific practice.

Hence, indeed, calls for regulation. I don't know how much of this is naivety (the great houses and guilds of today have no shortage of it) and how much of this is gentle pushing by enthusiastic spooks, but inevitably (as you seem to correctly intuit) we're hearing calls for regulation that amount to monopolising AI for State and commercial benefits - primarily censorship. The good news in this regard is that this cannot entirely succeed for the same reason that 'the black library' (torrent file-sharing) could be pressured but not eliminated (good news for cypherpunks, too!). The trouble with technological regulation right now, anyway, is that there is no framework by which it can be pursued, because none of the contemporary empires have any interest in co-operation. So talk of regulation can only be calls for building new monopolies on old monopolies. Hard pass.

The strange turn in your thinking - which is not strange to you, because of your faith in technology, I suspect - is that you suggest that what AI can bring us is the toolkit to extend life, which you view as eucatastrophic in the non-Tolkien sense. But life extension is merely catastrophic; there's nothing good here except the fantasy. I doubt I will persuade you, since you are committed to this path, but what could be worse at this point than extending the lives of the wealthy (the proximate consequence of such tech), except perhaps extending the lives of everyone...? If you wouldn't call for raising the birth rate by an order of magnitude, you cannot reasonably support collapsing the death rate by an order of magnitude either. Maybe you've already written on this - if so, a link is a welcome rebuttal!

Everyone who inhabits this fictionality surrounding life extension has to wobble around the trade-off, i.e. that life extension must come at the sacrifice of births - and what do you think the regulation of this would look like...? I am unconvinced by the 'I'd gladly trade longer life for no kids' crowd, because I see a nest of vipers along this path if it works on the honour system, and something far worse if governments get their claws into it. Honestly, I don't quite understand why death has got such a bad rap in the technocratic paradigm, except that we traded the old mythologies for new ones and never gained the insight into them that the mid-twentieth-century philosophers foolishly believed we had gained. We didn't. Perhaps we never will.

The good news (for me; for your side it is bad news, I'm afraid) is that the problems that must be solved to extend life are not ones that can be addressed solely by throwing AI into the gap. Just as the 'mapping of the human genome' was an excellent boondoggle for those selling gene sequencers, with next to zero benefits to anyone else, AI can do zip with any amount of genetic data, because we haven't even come close to building a 'genetic computer'. Genetic 'code' is not analogous to computer code at all (except in the metaphorical sense of 'hacking', which is accurate to what we can do with it). The most likely way of extending life expectancy within the next few centuries is creating cold-blooded humans (literally - we already have the metaphorical kind), but this would have to be inflicted at birth, would come with the associated lifestyle modifications, and quite possibly a massive step down in intellectual prowess (as if we weren't already dealing with this...).

So alas, here I am, most likely as a gadfly where I am not welcome. Evidently, I am a technological deflationist - no-one is more surprised at this than I, I assure you! I gorged on sci-fi in my youth, and one does not take a Master's in AI to pop balloons. But once you've been to the factory, you are no longer tempted to drink the Kool-Aid, because you've seen what it is made from (which is literally true, by the way, as well as being metaphorically true).

Because this feels like the right thing to do in conclusion, let me leave you with a link to one of June's Stranger Worlds pieces. These are only 750 words (a 3-minute read), and thus shorter than this comment(!). It captures rather well how I feel about both AI and imaginary super-AI:

https://strangerworlds.substack.com/p/laws-of-robotics

It is also, I suppose, an invitation.

With unlimited love and respect,

Chris.


I do hope you are correct: I suspect strong AI is a prerequisite to my being successfully revived from cryonic suspension. I firmly agree with everything you say about the benefits that will accrue if we can just get it right, and the hugeness of the missed opportunity if we decide the risk is too great but are mistaken.

But S-curves notwithstanding, I see no reason to believe that human-level intelligence is as high as you can go. And you can ask the Neanderthals or Homo erectus how well it works to have somebody around who is smarter. Or you could, if they were still here. So I'm gonna stay worried.
