AI risk
Prolific blogger Zvi Mowshowitz says:
Max More comes out against both the FLI Letter and the EY proposal, employing “relatively sensible and lengthy versions of many independent anti-doom arguments and we-can’t-do-anything-about-it-anyway arguments - nothing I haven’t seen elsewhere, but well said.” His post contains (another) long commentary on AI risk.
Measuring Trends in Artificial Intelligence
Top takeaways:
Industry races ahead of academia.
Performance saturation on traditional benchmarks.
AI is both helping and harming the environment.
The world’s best new scientist… AI?
The number of incidents concerning the misuse of AI is rapidly rising.
The demand for AI-related professional skills is increasing across virtually every American industrial sector.
For the first time in the last decade, year-over-year private investment in AI decreased.
While the proportion of companies adopting AI has plateaued, the companies that have adopted AI continue to pull ahead.
Policymaker interest in AI is on the rise.
Chinese citizens are among those who feel the most positively about AI products and services. Americans … not so much.
I’m especially interested in chapter 2 on technical performance, and the ninth section on AI for science. This looks at accelerating fusion science through learned plasma control; discovering novel algorithms for matrix multiplication with AlphaTensor; designing arithmetic circuits with deep reinforcement learning; and unlocking de novo antibody design with generative AI.
Artificial intelligence has advanced despite having few resources dedicated to its development – now investments have increased substantially
https://ourworldindata.org/ai-investments
See charts showing the growth in AI investment and the supply of AI research and researchers.
An Equilibrium Perspective on AI
People always respond. Take that seriously.
Normal responses vs. bombing data centers.
It’s not uncommon to come across the assertion that only AI experts should have an opinion about the future of AI and the safety issues surrounding it. That’s a foolish idea that ignores all the ways in which AI is and will be used and integrated into our economies. Brian Albrecht says that when considering the impact of AI on humanity's future, it's important to take equilibrium and feedback seriously.
AI doesn’t just happen. People and policy respond, creating dampening feedback mechanisms to counter the escalating feedback. We hear a lot about the escalating factors and much less about the dampening factors such as companies responding to concerns to protect their reputation and market capitalization. The environmental doomsayers of the 1960s and 1970s were wrong because they ignored feedback mechanisms. Much of this echoes what I argued with Ray Kurzweil at Extro-5 in 2001 and in our online debate about the Singularity.
Debate: Artificial Intelligence Should Be Regulated
Is an A.I. "foom" even possible?
Ronald Bailey and Robin Hanson
https://reason.com/2023/04/02/proposition-artificial-intelligence-should-be-regulated
Describing this as a debate is misleading. Ronald and Robin are talking about two different things. Ronald warns us not to trust governments with A.I. facial recognition technology, while Robin presents four arguments that suggest we don't have good reasons to regulate A.I.s more now than similar human beings.
AI getting cheaper and more accessible
Many arguments in favor of regulating AI explicitly assume that there will be only a few parties to regulate because of the tremendous expense and difficulty of building and training large language models like ChatGPT. This assumes that all future AI systems will be LLMs. But the scarcity assumption is already looking implausible, as shown in these two pieces:
Let’s BLOOM with BigScience’s New AI Model | by Heiko Hotz | Towards Data Science
With these Int8 weights we can run large models that previously wouldn’t fit into our GPUs.
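The memory arithmetic behind that quote is easy to sketch. The numbers below are illustrative back-of-envelope figures for weights only (real inference also needs memory for activations and overhead), using BLOOM’s 176-billion-parameter count from the linked article:

```python
# Rough memory needed just to hold a model's weights at different precisions.
# Weights only -- real usage adds activations, caches, and framework overhead.

def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Memory in GB to store n_params weights at the given precision."""
    return n_params * bytes_per_param / 1e9

n = 176e9  # BLOOM-sized model: 176B parameters

for label, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{label}: {weight_memory_gb(n, nbytes):.0f} GB")
# fp32: 704 GB, fp16: 352 GB, int8: 176 GB
```

Halving fp16 to int8 is what moves a model of this size from “impossible on a given set of GPUs” to “just barely fits.”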
ChatGPT on your PC? Meta unveils new AI model that can run on a single GPU
The only way out of the AI dilemma
I’m happy to see eminent SF writer David Brin disputing the AI doomers and putting matters in perspective. As he sensibly writes, we tackle the AI challenge the same general way we have successfully tackled other challenges of power: “...by breaking up power into reciprocally competing units and inciting that competition to be positive sum.”
The risk of slowing down AI progress
James Pethokoukis on the enormous opportunity costs of slowing down AI. I recommend his blog, Faster, Please! to anyone who wants to support the continuation of progress in the face of the forces of stagnation.
No to the AI pause
Another good piece by James Pethokoukis, who wonders “how the past three years might’ve gone differently if in the late 2010s there had been a “pause” on research into a radical new vaccine technology called mRNA? Or how a 1930s pause of atomic weapons research might have meant the War in the Pacific continuing into 1946?”
James feels the same way I do. The call for a research pause comes just as AI is showing real signs of being able to accelerate technological progress and boost economic growth, making us healthier, wealthier, and more resilient.
Even on its own terms, however, I have problems with the Pause — whether or not such a delay is workable across companies and countries, including China. I fear that embedded within the Pause is the better-safe-than-sorry Precautionary Principle that will one day push for a permanent pause with humanity well short of artificial general intelligence. That, whether for concerns economic or existential, would deprive humanity of a potentially powerful tool for human flourishing.
Another fellow accelerationist:
Let’s Speed Up AI
Calls to Slow Down AI are Deeply Misguided. We Can Only Solve Problems in the Real World and to Make AI Truly Safe We've Got to Expose It to the Infinite Creativity of Humans.
Daniel Jeffries agrees that to fix the real problems with AI we have to let it develop. Only by putting it out into the real world can we fix the real problems. (We can tackle the imaginary problems if they show signs of being more than fevered fantasies.) Absurd imaginary scenarios grab all the attention instead of real AI dangers like Lethal Autonomous Weapons.
Counterarguments to the basic AI x-risk case
Although Katja Grace seems to be inclining toward signing the pointless and dangerous pause petition, she nevertheless finds many flaws in the argument for existential risk from superhuman AI systems in this piece from August 2022.
My Objections to “We’re All Going to Die with Eliezer Yudkowsky”
Eliezer Yudkowsky has a long list of reasons and arguments for his view that the odds of AI destroying the human race are close to 100%. Some of us disagree with premises that underlie many of those arguments. Also, most of us simply don’t have the time to write detailed rebuttals of every argument given. However, Quintin Pope went to the trouble of rebutting quite a few of them.
Existential risk, AI, and the inevitable turn in human history
Economist Tyler Cowen calls for a bit of humility in our projections of the AI-transformed future. Too many people seem far too sure about outcomes in a situation where radical uncertainty exists. He also notes that, combined with new international uncertainties about the role of America, an AI revolution may bring back “moving history”, something most of us are not used to.
Extremely popular and prolific blogger Scott Alexander jumps on Cowen and accuses him of the “safe uncertainty fallacy”. He characterizes this as: The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go. Therefore, it’ll be fine. This strikes me as a straw-man misrepresentation of Cowen’s piece. But judge for yourself.
Is AI Fear this Century’s Overpopulation Scare?
Archie McKenzie asks: “Is Yudkowsky the False Prophet Ehrlich was?” He points out the destructive response to overpopulation doomerism and warns against making a similar mistake with AI.
The GPT-x Revolution in Medicine
Eric Topol reviews a book on how AI can transform healthcare. The authors of the book had 6 months to assess GPT-4 for medical applications. “How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.”—Isaac Kohane MD
Kohane is also the co-author of the forthcoming book, The AI Revolution in Medicine: GPT-4 and Beyond.
We explore opportunities in personalized medicine, digital clinical trials, remote monitoring and care, pandemic surveillance, digital twin technology and virtual health assistants. Further, we survey the data, modeling and privacy challenges that must be overcome to realize the full potential of multimodal artificial intelligence in health.
Leroy Hood and Nathan Price
As medical research produces ever more data on health and disease, doctors are turning to artificial intelligence to help them make the best decisions for patients.
An AI program called MedAware, for example, helps doctors avoid accidentally prescribing the wrong medication.
Clinical decision support systems also help make test results more personalized.
A study showed that urinary bladder tumor analysis could be performed with AI at an accuracy rate of 93%.
ChatGPT-4 gets an A
Economist Bryan Caplan gave GPT-4 his economics midterm exam for Economics 309: Economic Problems and Public Policies. The AI earned a grade A. This was a massive and surprising improvement from Caplan’s Fall 2022 test on which ChatGPT got a D.
An important day for AI is in your future
Everyone is discussing the economic effects of AI. Some of these are major and could take a lot of work to adapt to. Others are significant only to a few entrepreneurs. In today’s Wall Street Journal, we learn that Mr. Chan, the 53-year-old co-owner of the Golden Gate Fortune Cookie Factory in San Francisco, says computers writing cookie fortunes “is a sign that society is moving too fast.” Other fortune cookie creators are optimistic about working with AI to improve their output.
Paul Krugman, wrong again
Economist Paul Krugman just said that LLMs will have a negligible effect on the economy over the next decade, essentially repeating his famously wrong prediction about the internet:
“Large language models in their current form shouldn’t affect economic projections for next year and probably shouldn’t have a large effect on economic projections for the next decade.”
A comic take on what’s left to humans after AI.
Jason Crawford on how we get better at adapting
Climate and energy
Trends in the Proportion of Major Hurricanes
The data behind a major blunder by the IPCC
Roger Pielke, a leader in the climate field, finds serious problems in the IPCC’s latest report on major hurricanes. The IPCC writers surely know that you can show the trend you want by picking specific start years. Pielke says:
Well, here is a cherry picker’s guide to proportion of major hurricanes:
Want to show an increase? Start your analysis in 1980
Want to show no trends? Start your analysis in 1950
Want to show a decrease? Start your analysis in 2002
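Pielke’s point is easy to demonstrate with a toy example. The series below is synthetic (a hand-drawn hump shape, not real hurricane data), but it shows how the sign of an ordinary least-squares trend can flip depending on the chosen start year:

```python
# Synthetic illustration (NOT real hurricane data) of start-year cherry-picking:
# the sign of a fitted linear trend depends heavily on where the analysis begins.

def slope(years, values):
    """Ordinary least-squares slope of values regressed on years."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

def interp(y, pts):
    """Piecewise-linear interpolation through (year, value) anchor points."""
    for (y0, v0), (y1, v1) in zip(pts, pts[1:]):
        if y0 <= y <= y1:
            return v0 + (v1 - v0) * (y - y0) / (y1 - y0)
    raise ValueError(y)

# Invented hump-shaped series: higher mid-century, dip near 1980, peak mid-2000s.
anchors = [(1950, 0.35), (1980, 0.20), (2005, 0.40), (2022, 0.28)]
years = list(range(1950, 2023))
prop = [interp(y, anchors) for y in years]

def trend_from(start):
    pairs = [(y, v) for y, v in zip(years, prop) if y >= start]
    return slope([y for y, _ in pairs], [v for _, v in pairs])

print(f"from 1980: {trend_from(1980):+.4f} per year")  # positive: an increase
print(f"from 1950: {trend_from(1950):+.4f} per year")  # much weaker trend
print(f"from 2002: {trend_from(2002):+.4f} per year")  # negative: a decrease
```

Same data, three opposite headlines. That is why start-year choices in trend claims deserve scrutiny.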
The Case For Nukes, by Robert Zubrin
Robert Zubrin’s new book argues in favor of nuclear power. See also a good interview with him here.
Rapidly declining cost of cultivated meat
Will global warming make temperature less deadly?
“Both heat and cold can kill. But cold is far more deadly. For every death linked to heat, nine are tied to cold.” The simple story that global warming will cause more deaths from heat ignores the rather crucial point that it will also reduce the deaths from cold. And there are far more of these. Harry Stevens writes about a recently published study on the topic. Note that the study uses the RCP4.5 emissions scenario, which isn’t as crazy as the media-favorite RCP8.5 but is still likely too high. The full paper is here.
For 800 Years Wheat Has Been Growing More Abundant As Population Increased
Malthus had it backwards. More people make food much more abundant.
Italy needs life extension!
Unless Italy reduces its net outward migration, the population will shrink, leading to fewer working-age Italians supporting more retirees. Today’s report from Istat, Italy’s Bureau of Statistics, says that just 393,000 babies were born in 2022, down some 1.8 percent from the 400,249 born in 2021. This is the first time births have been below 400,000 since the unification of Italy in 1861. The average age of the population has again increased from 45.7 years. Other European countries face similar demographic challenges, although Italy’s is the most severe.
I wish more “liberals” were like Mike Hind: