Discussion about this post

Rudi Hoffman

Some good points about AI and SAI development and regulation from Max. Ambivalence exists at every level of intellectual growth about how to optimally deal with AGI. Eliezer Yudkowsky has been obsessively "handwaving" over this for over 25 years, but that doesn't mean he is correct in calling for a virtually impossible-to-police international ban. We have a president flouting pretty much every social norm and legal contract, and we as a country can't seem to control him. So stopping SAI seems unlikely, but the authoritarian steps taken to try could be truly awful. It is helpful to recall the doomsaying about Y2K and other threats. Maybe we'll get through this "great filter" and matriculate to a Universe-spreading civilization.

