Ricardo's concept of comparative advantage shows that it is likely to pay for us to trade with machines and for machines to trade with us. However, if a large enough power imbalance develops, comparative advantage would not be enough to save the weaker party. Eventually, the benefits to the strong of eating the weak and recycling their atoms would exceed the benefits from trade, and the relationship would come to an end.
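
A toy Ricardo-style illustration of the trade half of this point (the numbers and the `opportunity_cost_of_chip` helper are made up for the sketch, not taken from the article):

```python
# Toy numbers: hours each party needs per unit of output (purely illustrative).
hours = {
    "human":   {"food": 2.0, "chips": 8.0},
    "machine": {"food": 0.5, "chips": 0.5},
}

def opportunity_cost_of_chip(party: str) -> float:
    """Food forgone when `party` spends the time needed to make one chip."""
    return hours[party]["chips"] / hours[party]["food"]

print(opportunity_cost_of_chip("human"))    # 4.0 food per chip
print(opportunity_cost_of_chip("machine"))  # 1.0 food per chip

# Any price between 1 and 4 food per chip leaves both sides better off than
# self-sufficiency, even though the machine is absolutely better at everything.
price = 2.0  # food per chip, inside the mutually beneficial range
human_gain_per_chip_bought = opportunity_cost_of_chip("human") - price    # 2.0
machine_gain_per_chip_sold = price - opportunity_cost_of_chip("machine")  # 1.0
print(human_gain_per_chip_bought, machine_gain_per_chip_sold)

# The caveat above: once seizing the weaker party's resources outright costs the
# stronger party less than this trade surplus, the window for trade closes.
```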

Re: "AI is not shaped by natural selection in a quest to pass on genes to the next generation."

Instead, it is shaped by natural and artificial selection acting on memes rather than genes. Cultural evolution favors survival much as evolution acting on organic creatures does. Much the same point was also made in the 2019 "Don't Fear the Terminator" article, and it seems like a misunderstanding of cultural evolution to me. We could override survival tendencies and build suicidal machines, but nature can build suicidal bees too. The differences in this general area are much exaggerated, IMO.

Hello. For the record (I'm not upset, but this may come up again), my name is spelled Zvi Mowshowitz.

I feel as if I have addressed all the concerns and arguments here at one point or another, and your position seems overdetermined such that even if I changed one of your positions you would stick to your conclusion, so I don't see how I can usefully respond here. If there is a particular response you would find valuable, I'd be happy to try and provide it.

The phenomenon that is Eliezer has made this debate very polarized. I should stress that almost everyone concerned about AGI risk in the EA/Rationalist space has lower estimates of doom than Eliezer. I don't know whether the result of his recent activity will be to push open the Overton window and allow the median, more palatable AGI-risk-concerned person to emerge into the spotlight, or whether he's polarising the debate unnecessarily.

I should also note that many of us in this space have very low estimates of doom and are still very worried. While Eliezer may spend too much time on doom scenarios, other EA/Rationalists spend a lot of time thinking about all the brilliant things that aligned, safe AI may bring, and the limitless future ahead of us if we manage to achieve this. I seriously think that my children could live for millennia or more, whether digitally or in some kind of hybrid form. But aligning AI is hard, we don't know how to do it, and AGI will be the most powerful superweapon ever.

Here's my psychologising about various actors here:

Eliezer's very high doom estimates are probably linked to a psychological tendency towards despair, combined with the intense technical difficulty of his own favoured approaches. He seems to neglect a few reasonable causes for doubt; for example, I don't think he takes seriously enough the likelihood of a 'fire alarm' or 'warning shot' from very powerful misaligned systems.

Among many of the generalist critics whom I generally respect (Tyler Cowen, Robin Hanson, Noah Smith), I see optimism bias and otherwise reliable heuristics malfunctioning when applied to speculative, low-probability x-risk events. These people follow the tried-and-tested heuristics of 'progress is generally good', 'overregulation tends to be bad', and 'doom tends not to happen', without really grasping the extent to which AGI *has* to be completely different from all previous technologies. There does seem to be an interesting bias towards acceleration from people of a certain age group, but I won't speculate any more on that.

I should also be self-reflective and consider my own biases here. I definitely feel inclined to see doom as more likely than most people do. The logic of the singularity has always struck me as obvious and terrifying, way before I (superficially) understood the more sophisticated case for how difficult technical alignment is. I was totally on board with AGI risk being very scary the moment I saw Sam Harris' TED talk on it (I was 18 or so). I thought something like: "Creating an agent more powerful than us with the ability to self-improve exponentially is perilous", and haven't really changed my mind on this view. But when I have to give a numerical estimate, I'm way too anchored on common EA/Rationalist estimates to feel comfortable making a nuanced claim.

I don't know who the most clear-thinking people on this topic are at the moment, but I'm 100% sure it's not the tech accelerationists (even if their approach ends up working and we do somehow brute-force our way into aligned AGI, I would still need a lot of persuading that it was the wise choice). My money is still on the EA/Rationalist community to have the best epistemics on this matter.

The only problem with this text is that you state that the USA is the good guy. Look at what the USA does to third-world countries. In fact, ask Iraqis and Afghans if their countries are better or worse off.

I'm a cryonicist, so I'm familiar with you, have respect for you, and share the same goals as you. I also agree with you about feeling an urgent need for AI. However, I'm very unimpressed by this post. I don't feel that you understood, or addressed, the specifics of Eliezer's concerns as laid out in his "AGI Ruin: A List of Lethalities" and in Lex Fridman Podcast #368.

Re the "costs of delay" section, you're only taking into account the short-term effects of slowing AI progress. When taking into account the long-term effects as well, it's clear that accelerating AI development at the cost of a higher probability of misalignment is virtually always net negative. Paul Christiano has a short 2014 post making this point called "On Progress and Prosperity".

Thanks, Max. IMO, paperclips are a fairly harmless thought experiment. This article seems to be taking it a bit too literally/seriously. Perhaps consider substituting "maximizing shareholder value", or some other simplistic goal, if it makes the underlying message more palatable.

Valuable article. Thank you, Max.

"chair of the Harvard Chair of the Department of Biomedical Informatics at Harvard Medical School said: “How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.” -- and yet you don't believe that foreseeable AI (beyond LLMs) will be better than many AI designers? Which in turn would support the idea of Seed AI - i.e. self-improving AI/ AGI.

> For some reason, AI’s are thought to be obsessed with paperclips. It’s not clear why they have no interest in binder clips, Scotch tape, or rubber bands.

I'm sure that you understand that 'paperclip maximizer' is a general term, in the same way that a 'prisoner's dilemma' need not involve prisoners.

Why do you think deliberately dishonest ridicule is a good tactic to use here?

> I know of quite a few people who oppose the petition who are libertarian.

Don't you mean "aren't"?

I'm not a doomer, but humans HAVE indeed caused a great many species to go extinct, even if conservation has gotten more popular in recent history (a tiny slice of total human history). It usually wasn't a deliberate attempt to wipe them out; that's just what the actions most convenient to us resulted in. And horses used to have a "comparative advantage" in certain tasks before being displaced by automobiles; now we just don't have as many horses as we used to.

It's just lazy people who don't want to learn. Their problem.

"So long as AI does not have control over factories, chip plants, server rooms, and robot factories, it will have a powerful incentive to cooperate with us." - This implies that they won't cooperate with us if they're connected? 1) They *will* be connected 2) No reason to believe that it would change their attitude towards us.
