Here I will share some quick notes I've been making on Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). I have been collecting these for a few months. Better to put them out there now before they are proven wrong (or perhaps eventually proven right).
A good starting point for those who want a primer is Tim Urban's posts, The Artificial Intelligence Revolution parts 1 & 2, from way back in 2015. Obviously there has been plenty of additional digital ink spilled in this area by thinkers undoubtedly more intelligent in general, and far more knowledgeable in this particular realm, than I am.
In the points below you'll notice my skepticism about where this thing can go and how soon, as well as my overall optimism creeping in. My apologies for how disjointed and redundant these notes are.
I agree with Arnold Kling that the term AI is an unhelpful misnomer:
I wish that these software tools were not branded as artificial intelligence. That term leads people to anthropomorphize the software, creating a set of expectations to which it cannot measure up. You compare it to a human, and you say “Look what it can’t do. See how messed up it is.” Instead, treat ChatGPT like the World Wide Web in the early days, when it, too, was primitive and buggy, and a lot of people dismissed it. I can recall that when I set up my web site, another Internet entrepreneur said scornfully that I should have used Gopher, a text-based Internet tool that turned out to be obsolete within months of when he gave me that advice.
Wisdom versus knowledge versus processing data: nobody thinks the world's most powerful computers today, much less our otherwise very powerful calculators, are better mathematicians than any mathematician living or dead. We would be hard pressed to argue the machines are better mathematicians than any math-competent child. You don't have to be Scott Flansburg to claim you can "out math" a computer. And this is in the area where machines have all the advantages. The point is simply that there is more to intelligence than intelligence, if you will—I think that was the point of Good Will Hunting.
Can an AI reach general or super levels of intelligence without some deeper understanding beyond "maximize the protocol"?
Is the worry really just an AI that is super smart and ambitious, and thus capable and powerful, but without a corresponding understanding of trade-offs and values?
How does an AI develop such that, in order to reach a certain goal, it deceptively works to achieve it or strives toward it? How does it make this partial leap to another level of intelligence without questioning the goal itself or developing any countervailing thoughts? Why is it asymmetrical to the downside?
How does intelligence actually scale? Is the water in a pool analogy accurate (from Tim Urban's post)? Does the pool expand at an increasing, non-constant rate? Is there a misunderstanding in assuming linear development along one dimension when there is actually a jump between realms? Could an animal even be as smart as a human just by adding neurons? Might there be some other meaningful dimension required for that jump, one that lies orthogonal to simple computational power? One need not subscribe to a theory of the soul or a ghost in the machine to think there is more to it than "brain size implies IQ implies reasoning/wisdom capabilities."
Some of the true geniuses of the Manhattan Project actually feared that an atomic detonation might set the Earth's entire atmosphere on fire. We might scoff at this worry today, but that is because with hindsight we know this didn't (couldn't?) happen. Might AI risk be similar, in that the worry about AGI/ASI is simply a misunderstanding of the real risks and natural limitations? Obviously the analogy likely works on the adjacent level of how dangerous nuclear weapons and AI could be in the wrong hands—perhaps any hands.
Is the cost/benefit tradeoff for developing AI highly favorable to development because either we stop short of ASI, or even AGI, by design (Eliezer Yudkowsky would laugh at this), or because the development effort itself cannot overcome the hurdles?
I would like some rules to govern AI and AI development such as:
Work to the benefit of humanity.
When in doubt, slow down.
Work in test environments and don't "go live" without permissions granted, and even then only in reversible ways.
Train AI to aim for "doing the right thing even when no one is watching…" aka, be good for goodness' sake.
… yes, yes, I know all of these types of rules have been theoretically shot down hundreds of times. But so too have the Golden Rule, religious rules and commandments, law, legislation, norms, etc. They have all still proven helpful and resilient—things that work in practice but not in theory, it would seem.
Finally, what is the right metaphor for AI?
Nuclear weapons
The Industrial Revolution
Calculators for cavemen
Pet Rocks
—Added 2/20: Yes, I am quite aware of Asimov’s Three Laws of Robotics and the associated arguments for their futility.