It’s hard to look at any corner of the blogosphere I frequent without being bombarded with posts urging us to completely lose our shit over AI.
It’s a complex issue on which I don’t want to fire off a casual post, so I’m not doing that right now. When I get around to putting my thoughts down in detail, I’ll explain why I don’t think we should be panicking about AI, why some research into AI alignment is probably worthwhile, and why we definitely should not attempt to stop or slow down AI research.
For now, in case you haven’t had enough of the topic, here are some essays I recommend:
https://scottaaronson.blog/?p=7064
https://reason.com/2007/09/11/will-super-smart-artificial-in/
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/?commentId=9ZhXbv8p2fr8mkXaa
https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/
https://sohl-dickstein.github.io/2023/03/09/coherence.html
I recently rebranded myself as the Geezer Boomer Doomer :-) in conversations about AI, so I’d welcome a friendly debate with you on this topic if that’s of interest.
Briefly, my attitude is that we already face plenty of large risks and don't need any more right now. If we could demonstrate that we can fix the big problems we've already made, I'd be more open-minded about AI.
Over to you, should you so choose...