Phil and Stephen discuss the chances that super-smart AI might be upon us and the risks we face even if it isn't. Plus: what do we do about the fact that, when it comes to AI, we often just don't get it?
From Kevin Kelly’s article:
The assumptions behind a superhuman intelligence arising soon are:

1. Artificial intelligence is already getting smarter than us, at an exponential rate.
2. We'll make AIs into a general-purpose intelligence, like our own.
3. We can make human intelligence in silicon.
4. Intelligence can be expanded without limit.
5. Once we have exploding superintelligence it can solve most of our problems.
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them:

1. Intelligence is not a single dimension, so "smarter than humans" is a meaningless concept.
2. Humans do not have general-purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.