Compilation Post: Artificial Intelligence
An annotated collection of recent, thoughtful pieces on AI
Photo by Maxime VALCARCE on Unsplash
Instead of a really long compilation post on several topics, I’m going to try more succinct compilations on specific topics. This one is on AI. If you’ve read my previous posts, you know that I’m concerned about bad regulation of AI. Let’s not kill AI as we’ve killed or seriously maimed nuclear power, supersonic flight, and genetic engineering.
Rise of the AI Doomsday Cult
Or How I Learned to Stop Worrying and Love the AI
Daniel reinforces the point I emphasize – and that forms part of the Proactionary Principle – that we cannot iron out all the kinks in new technologies or productive processes before using them. We have to fix flaws as they develop. If you’re short on time, you can skip to the section, “Fixing Problems Outside of Fantasy Land and Finding Uses Too”.
Daniel refers to the following piece by Dan Shipper, which is the result of many hours of studying the writing and speaking of Eliezer Yudkowsky:
AI Doomsday For People Who Don’t (Yet) Wear Fedoras
The Taxman Will Eventually Come for AI, Too
Tyler Cowen: Autonomous AI agents do not enjoy leisure as humans do, so should we tax their labor at a higher rate?
One interesting question about the development of AI is whether we should tax it, or tax the corporations that create it. In this piece, economist Tyler Cowen starts off with the assumption that we tax AI labor and explores some of the issues that arise. These include “tax arbitrage,” which Cowen says becomes more difficult the more closely AI is aligned with human objectives.
Rapid AI takeoff and secret sauce
Andy McKenzie shares my view that it is not at all obvious that as soon as we can build artificial general intelligence (AGI) it will rapidly become superhumanly intelligent.
In the absence of some sort of "secret sauce", which seems necessary for sharp left turns and other such scenarios, I view AI capabilities growth as likely to follow the same trends as other historical growth trends. In the case of a hypothetical AI at a human intelligence level, it would face constraints on its resources allowing it to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulatory barriers, social acceptance, cybersecurity concerns, competition with humans and other AIs, and of course value maintenance concerns (i.e. it would have its own alignment problem to solve).
FOOM liability
“Foom” is shorthand for AI that rapidly improves itself, acquires agency, and wreaks havoc on humans. Robin Hanson says: “We want policies that will give big benefits if foom risk is high, but impose low costs if foom risk is low. In that spirit, let me suggest as a compromise a particular apparently-robust policy for dealing with AI foom risk.”
Do we need a World Congress to govern AI?
Sounds like an idea for when you have no ideas
James Pethokoukis responds to a Wall Street Journal article in which Peggy Noonan “breezily promoted the following idea for AI-ML global governance.”
Some Glimpse AGI in ChatGPT. Others Call It a Mirage
Will Knight asks: “Are LLMs an early form of AGI? Not unless you seriously weaken the definition of the term.”
We Need To Talk About Deep Delving
A clever and amusing dialogue that definitely, absolutely has nothing to do with ChatGPT and its cousins.
Attention is all you need to understand
Jon Evans is “setting myself the task of explaining transformer ‘attention’ in plain English … with no math, no code … and no diagrams at all. Just words.”
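For readers who do want a bit of math or code after all, here is a minimal sketch of the scaled dot-product attention Evans is describing, in the standard textbook formulation (the names and shapes below are mine, not from his piece):

```python
# Scaled dot-product attention, the core operation of a transformer layer.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each query scores every key; the softmaxed scores weight the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # query-key similarity
    weights = softmax(scores)                       # each row sums to 1: "where to look"
    return weights @ V                              # weighted mix of the values

# Toy self-attention: 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8): one mixed vector per token
```

In a real transformer the queries, keys, and values are learned linear projections of the input, but the mixing step above is the whole trick Evans is putting into words.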
More problems for the pause/regulate AI people who dismiss the “what about China?” point (and this was in 2018):
Last July, China unveiled plans to become the world’s dominant power in all aspects of artificial intelligence, military and otherwise, by 2030. The U.S. now finds itself in an escalating AI arms race. Over the past two years, China has announced AI achievements that some U.S. officials fear could eclipse their own progress, at least in some military applications. “This is our Sputnik moment,” said Robert Work, the former deputy secretary of defense who oversaw the Pentagon’s move into the new field.
But the Chinese military has moved to copy the Pentagon’s model. Two years ago, the PLA elevated and reorganized its science and technology branch, aiming to turn it into a “Darpa with Chinese characteristics,” according to Tai Ming Cheung, an expert on the Chinese military at the University of California, San Diego. The Chinese government is also building national laboratories in the mold of America’s famed Los Alamos, and because of its deep involvement in industry at every level, Beijing can achieve more integration between military and civilian AI investments.
AND:
Why China has an Edge on Artificial Intelligence
The economic promise of ChatGPT and GenAI as a general purpose technology
From James Pethokoukis:
“We now have several early, early studies suggesting a significant productivity impact of large language models on worker productivity:
In an experiment with ChatGPT, it took grant writers, data analysts, and human-resource professionals 10 minutes less — a 40 percent time savings — to churn out news releases, short reports, and emails. And the quality was higher, according to MIT economists.
Another AI tool is Copilot, which helps developers solve coding problems in natural language. GitHub tested 95 developers with and without Copilot on a coding task. The ones who used Copilot completed the task faster (71 minutes vs. 161 minutes) and more accurately (78 percent vs. 70 percent). These results show how AI tools can improve worker productivity.
In the just-posted NBER working paper “Generative AI at Work,” researchers Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond looked at the impact of an LLM-based chat assistant, built on an OpenAI model, used by 5,000 customer service agents working for a Fortune 500 software company that provides business process software. What’s super interesting here is that this study looks at the productivity impact of generative AI deployed in a real-world workplace. Here’s the headline finding:
AI assistance increases worker productivity, resulting in a 13.8 percent increase in the number of chats that an agent can successfully resolve per hour. This increase reflects shifts in three components of productivity: a decline in the time it takes for an agent to handle an individual chat, an increase in the number of chats that an agent can handle per hour (agents may handle multiple calls at once), and a small increase in the share of chats that are successfully resolved.”
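The quoted numbers hang together arithmetically; here is a quick back-of-the-envelope check (my own arithmetic, not from the studies):

```python
# Sanity-check the productivity figures quoted above.

def pct_time_saved(before_min, after_min):
    """Percent reduction in task time."""
    return 100 * (before_min - after_min) / before_min

# MIT experiment: 10 minutes saved equaled a 40 percent savings, implying
# a ~25-minute baseline task (0.40 * 25 = 10) cut to ~15 minutes.
print(f"MIT implied baseline: {10 / 0.40:.0f} min")

# GitHub Copilot trial: 71 minutes with the tool vs. 161 without.
print(f"Copilot time saved: {pct_time_saved(161, 71):.0f}%")  # ~56%
print(f"Copilot speedup:    {161 / 71:.2f}x")                 # ~2.27x

# The call-center study reports its gain directly: 13.8% more chats
# resolved per agent-hour, so there is nothing further to derive.
```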
Just Running ChatGPT is Costing OpenAI a Staggering Sum Every Day
Expect more charges for using it.
CEO of OpenAI Says Making Models Bigger is Already Played Out
OpenAI has not told us GPT-4’s exact size, and we don’t know whether it has more or fewer parameters than the 175 billion of its predecessor. In a technical report, OpenAI says that increases in the number of parameters may be yielding diminishing returns. LLM progress may slow way down, possibly even stop. A different approach will become prominent, but who knows when.
Of the focus on the parameter count, OpenAI’s CEO says: “This reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number.” He added: “What we want to deliver to the world is the most capable and useful and safe models. We are not here to jerk ourselves off about parameter count.”
AI developers must ‘learn to dance with shackles on’ as China makes new rules in a post-ChatGPT world
AI aligned with Chinese socialist values? Yikes.