Last month there was an interview with Eliezer Yudkowsky, the rationalist philosopher and successful Harry Potter fanfic writer who heads the world’s foremost research outfit dedicated to figuring out ways in which a future runaway computer superintelligence could be made to refrain from murdering us all.
It’s really pretty interesting. It contains a nice explication of Bayes, what Eliezer would do if he were World Dictator, his thoughts on the Singularity, a justification of immortality, and thoughts on how to balance mosquito nets against the risk of genocidal Skynet from an Effective Altruism perspective.
That said, the reason I am making a separate post for this is that here, at last, Yudkowsky gives a more or less concrete definition of the conditions a superintelligence “explosion” would have to satisfy in order to be considered as such:
Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?
It could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size k ⋅ δ and that k is greater than one, and this continues for a sufficiently extended regime that there’s a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), k is less than one or that all regimes like this are small and don’t lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.
You can’t get solid information about that event by psychoanalyzing people. It’s exactly the sort of thing that Bayes’s Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn’t strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.
Psychoanalyzing people might not be so useful, but trying to understand the relationship between cognitive capacity and technological progress is another matter.
I am fairly sure that k<1, for the banal reason that more advanced technologies need exponentially more cognitive capacity – intelligence, IQ – to develop. Critically, there is no reason this wouldn’t apply to cognitive-enhancing technologies themselves. In fact, it would be extremely strange – and extremely dangerous, admittedly – if this consistent pattern in the history of science ceased to hold. (In other words, this is merely an extension of Apollo’s Ascent theory: technological progress invariably gets harder as you climb up the tech tree, which works against sustained runaway dynamics.)
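To spell out why the value of k matters (a minimal sketch, on the simplifying assumption, taken from Yudkowsky’s own setup, that each self-improvement of size δ enables a next one of size k·δ with k roughly constant):

$$\delta(1 + k + k^2 + \dots + k^{n-1}) = \delta\,\frac{1-k^n}{1-k}$$

For k < 1 the cumulative gain stays below δ/(1−k) no matter how many rounds you run – a fizzle – while for k ≥ 1 the sum grows without bound, which is Good’s explosion. Rising difficulty of each successive improvement is exactly the sort of thing that drags the effective k below 1.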
Any putative superintelligence, in order to keep making breakthroughs at an increasing rate, would have to do several things at once: solve ever harder problems as part of constantly upgrading itself; create and/or “enslave” an exponentially increasing amount of computing power and task it almost exclusively with that upgrading; and prevent rival superintelligences from copying its advances in what will surely be a far more integrated noosphere by 2050 or 2100, or whenever this scenario is supposed to happen. I just don’t find it very plausible that our malevolent superintelligence will be able to fulfill all of those conditions. Though admittedly, if this theory is wrong, there will be nobody left to point that out anyway.
[more advanced technologies need exponentially more cognitive capacity – intelligence, IQ – to develop]
Do you have any independent evidence of this, though? How hard things are for you to understand now doesn’t necessarily correlate with how hard they were the first time anyone understood them.
Proving the Poincaré Conjecture in the 2000s is harder than proving the Pythagorean Theorem was in Ancient Greece. That’s common sense more than anything.
But if you really had to formalize it, you would make the observation that proving the PC certainly involves adhering to a massively larger number of rules than proving the PT does, and couple it with:
Proving the Poincaré Conjecture doesn’t involve adhering to nearly as many rules as chanting from the Rig Veda correctly, so I see problems with your formalisation. Nor do I share your conception of common sense.
Gibberish.
Hi,
I agree that in the long term research should get harder and harder, but I think that in the short term a superintelligence will have a lot of low-hanging fruit, relative to itself, to exploit in order to increase its intelligence 10- to 100-fold over what the designers originally intended. Humans, myself included, are bad programmers, and in a large project inefficiencies are bound to build up. An AI smarter than the team that built it should be able to go over its own code and improve dramatically.
And I never understood why an AI needs to have infinite intelligence in order to be an existential threat to humanity. Skynet, despite holding a massive idiot ball, manages to kill most of humanity in a plausible way. If ELOPe had turned out to be any of the other possible minds instead of the friendly AI of book 2, then humanity would have been screwed.
Side note: for an AI to gain control of all of humanity’s computers would be pretty easy – write a better OS than Windows, with Cortana/Siri/Google Now replaced by the AI, and offer it for free. Pitch the CEO of the AI’s company on the Orwellian possibilities for information control and advertising, and win.
On further reflection, an AI pretending to be friendly would be the most dangerous, without needing to be dramatically smarter than humans. An evil Celestia AI would upload humanity, then delete it and turn the rest of the universe into ponies and paperclips.
A more advanced world would simply mean a harder initial challenge for the AI, as it would need to outsmart its cyborg creators. Once it does that, however, the rest of the takeover is much easier, as all the infrastructure (robot work bodies) is already in place.
As for rival superintelligences: if they were built by competing nuclear nation-states, I don’t think humanity gets a good outcome. But there is also an assumption there that superintelligences would automatically become rivals and choose to defect rather than cooperate, at least until humanity is subjugated or destroyed anyway.
An AI does not even need to be technically more intelligent than a human in order to be practically much more intelligent.
I am talking about speed, which is often overlooked in these debates in favor of theories about qualitatively higher intelligence that we would not even be able to understand.
An AI on the level of a very intelligent (but not even necessarily a genius) human that can think a thousand times faster can still be a major threat.
If you could spend 1000 years thinking up a plan to take over the world – provided the world did not change significantly while you worked out all the details of your conquest – you would probably be able to pull it off, as long as you have decent intelligence.
An AI with the same intelligence as you and with the computing power of 1000 brains like yours can produce the same plan in 1 year instead of 1000 years.
The ability to store and recall memory much more efficiently and to think faster can actually be a major advantage of an AI over humans, and it should be enough for a takeover if it decides to do it. A faster mind with no memory constraints means a productivity boost that will lead to superhuman achievements of intelligence even with a human IQ.
Chanting the Vedas is something that can be taught. That is quite different from discovering a proof.
There’s way too much paranoia and projection concerning this subject. It’s not likely that AI will be burdened by millions of years of monkey social evolution, so why speculate that its behavior will be like man’s? Making gods manlike was simple-minded; so is believing that AI will be manlike.
The only group that I fear is the psychopaths in government attempting to use it as a weapon, in which case the problem will once again be, yes, government.
I am not worried about hyper-smart AI. I am worried about what an AI a bit smarter than us could do to us. Such a thing is much more plausible.