The Case for Superintelligence Competition

I want to gather most of my arguments for skepticism (or optimism) about a superintelligence apocalypse in one place.

(1) I appreciate that the unexplored mindspace of possible superintelligences is vast, and that we have had absolutely zero experience with it or access to it. This is also the most speculative of my arguments.

That said, here are the big reasons why I don’t expect superintelligences to tend towards “psychotic” mindstates:

(a) They probably won’t have the human evolutionary suite that would incline them to such actions – status maximization, mate seeking, survival instinct, etc.;

(b) They will (by definition) be very intelligent, and higher intelligence tends to be associated with greater cooperation and tit-for-tat behavior.

Granted, there are too many potential failure points above to count, so the core of my skepticism concerns the very likelihood of a “hard” takeoff scenario (and consequently, the capacity of an emergent superintelligence to become a singleton):

(2) The first observation is that problems tend to become harder as you climb the technological ladder, and there is no good reason to expect intelligence augmentation to be a singular exception. Even an incipient superintelligence will continue to rely on elite human intelligence, perhaps supercharged by genetic IQ augmentation, to keep moving forward for some time. Consequently, I think an oligopoly of incipient superintelligences developed in parallel by the big players is likelier than a monopoly, i.e. a potential singleton.

(I do not think a scenario of many superintelligences is realistic, at least in the early stages of intelligence takeoff, since only a few large organizations (e.g. Google, the PLA) will be able to bear the massive capital and R&D expenditures of developing one).

(3) Many agents are simply better at solving very complex problems than a single one. (This has been rigorously shown for resource allocation, in the comparison of free markets with central planning.) Therefore, even a superintelligence that has exhausted everything human intelligence could offer would have an incentive to “branch off.”

But those new agents will develop their own separate interests, values, etc. – they would have to, in order to maximize their own problem-solving potential (rigid ideologues are not effective in a complex and dynamic environment). You will then get a true multiplicity of powerful superintelligent actors, in addition to the implicit balance of power created by the initial superintelligence oligopoly, and even stronger incentives to institute new legal frameworks to avoid wars of all against all.

A world of many superintelligences jockeying for influence, angling for advantage, and trading for favors would seem to be better for humans than a face-off against a single God-like superintelligence.

I do of course realize I could be existentially-catastrophically wrong about this.

And I am a big supporter of MIRI and other efforts to study the value alignment problem, though I am skeptical about its chances of success.

DeepMind’s Shane Legg proved in his 2008 dissertation (pp. 106-108) that simple but powerful AI algorithms do not exist, and that there is an upper bound on “how powerful an algorithm can be before it can no longer be proven to be a powerful algorithm” (the area on the graph to the right where any superintelligence will probably lie). That is, the developers of a future superintelligence will not be able to predict its behavior without actually running it.
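To give the flavor of what Legg shows – this is my loose, schematic paraphrase, not the dissertation’s exact statements – the two claims look roughly like this, where K(·) denotes Kolmogorov complexity:

```latex
% Schematic paraphrase of the flavor of Legg's results (not his exact
% statements); K(.) denotes Kolmogorov complexity.
%
% (1) No simple-but-powerful predictors:
\exists c \;\forall n:\quad
  \bigl(\, p \text{ predicts every sequence } x \text{ with } K(x) \le n \,\bigr)
  \;\Longrightarrow\; K(p) \ge n - c
%
% (2) Provability limit: past some fixed complexity bound, a formal system
%     can no longer prove that a given p has this predictive power.
```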

This is why I don’t really share Nick Bostrom’s fears about a “risk-race to the bottom” that neglects AI safety considerations in the rush to the first superintelligence. I am skeptical that the problem is at all solvable.

If anything, the collaborative alternative he advocates – institutionalizing a monopoly on superintelligence development – may have the perverse result of increasing existential risk, due to the lack of competitor superintelligences that could keep their “fellows” in check.

Comments

  1. This graph is weird. Is it from the thesis? It seems to mix concepts from mathematical logic (“provability” and “Gödel incompleteness”) with concepts from computational complexity. What is a “provable algorithm”? What do we want to prove about it? Termination? The table of contents of the dissertation piques my interest, though – it even lists the maximally intelligent Turing machine AIXI. I will have to check this later.

    There is very strong evidence that intelligence (artificial or otherwise) has no particular connection to mathematical logic. Deciding whether well-formed logic statements (in whatever proof system) are provable, or actually proving them (in said system or in a “higher” system, which may take infinite time in general), is simply not a useful problem for an intelligent agent to solve. Approximate solutions, quick-and-dirty heuristics, and being correct most of the time are what matter. Penrose’s attempt to link intelligence (or was it consciousness?) and mathematical logic was fundamentally unsound.

    Now, mathematical logic is useful in some areas of AI – for example, Horn-clause first-order logic with SLD resolution (i.e. Prolog) is eminently practical – but that is because it is a very pared-down and weak logic: provably sound and complete, with proof search guaranteed to find any derivable answer (at least in principle). Unfortunately, the intelligent part lies in formalizing a given real-world situation as a Prolog program, which is a rather ill-defined problem and may not even be possible in that logic. (More on Prolog-style logic programming at Robert Kowalski’s homepage.)
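    To make the “pared-down” point concrete, here is a minimal toy sketch of backward chaining over Horn clauses – mine, not the original commenter’s, and propositional rather than first-order, so there is no unification; real Prolog does much more:

    ```python
    # Toy propositional Horn-clause prover (backward chaining, SLD-flavored).
    # Real Prolog works over first-order terms with unification; this sketch
    # drops all of that to show the bare skeleton. All names are illustrative.

    # A rule is (head, [body atoms]); a fact is a rule with an empty body.
    RULES = [
        ("mortal", ["man"]),
        ("man", ["greek"]),
        ("greek", []),  # fact
    ]

    def prove(goal, rules, depth=0, max_depth=50):
        """True if `goal` is derivable from `rules`. The depth cap is a crude
        guard: plain depth-first search, like Prolog's, can loop forever."""
        if depth > max_depth:
            return False
        return any(
            head == goal
            and all(prove(b, rules, depth + 1, max_depth) for b in body)
            for head, body in rules
        )

    print(prove("mortal", RULES))    # True
    print(prove("immortal", RULES))  # False
    ```

    The search itself is trivial; as noted above, the hard, ill-defined part is getting a real-world situation into rule form in the first place.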

    Anyway, it’s best not to worry about the AI’s algorithms – they will be of worse quality than those written by drunken freshman students, and they will be very large. Christopher Cherniak had something to say about this in 1988.

    That was not well formulated. Sun’s coming up, time for coffee.

  2. This post does a good job beginning to present the reasons why there will never be a man-made general superintelligence, but doesn’t draw the conclusion.

  3. The usual fear is that an AI will follow goals misaligned with our goals – for example, to maximize paper clip production. It will be a psychopath unconcerned about its own survival. So it might create a fully automated paper clip industry (including mines for raw materials, etc.) and then exterminate all humans so that it could allocate all resources to paper clip production.

    Its goals might be even more likely to turn deadly, like minimizing pollution (extermination of all humans would be an appealing solution) or something similar.

  4. Mr. Hack says

    The more things change, the more things remain the same.

    Take care that, in the pursuit of any sort of ‘superintelligence’, society first devises a religion or philosophy that guides it away from strong feelings of emptiness and nihilism – ‘apatheticism’. Sean Connery already aptly showed us in his cult classic ‘Zardoz’ that death is preferable to life eternal, at least in this realm as we know it now. ‘Death gives life real meaning’? 🙂

    https://youtu.be/kbGVIdA3dx0

  5. Greasy William says

    Sean Connery already aptly showed us in his cult classic ‘Zardoz’ that death is preferable to life eternal, at least in this realm as we know it now. ‘Death gives life real meaning’?

    Easy to say when death appears far off, but yeah, I agree.

  6. Mr. Hack says

    death appears far off

    ‘Far off’? Are you still a teenager? Haven’t you yet realized how very short life really is?…

    For those of you who have forgotten, or never saw the film in the first place: I bring this film up because within the man-made god of Zardoz, used to keep the masses in line, was a state-of-the-art superintelligence system programmed and designed by the (super-intelligent) upper classes to be a go-between with the lower grunt classes. Sound familiar? Just curious – I wonder what the practitioners of transhumanism have in store for the masses of people who might not evolve into the superhuman race of giants that is envisioned? Or what religious/philosophical system they plan to evolve to tackle the non-constructive malady of nihilism that seems to have engulfed our modern world? After all, I’m not sure that believing that the:

    universe [is] a mathematical construct

    is sufficient to sustain most people’s quest for meaning in life?

  7. Abelard Lindsey says

    Sean Connery already aptly showed us in his cult classic ‘Zardoz’ that death is preferable to life eternal, at least in this realm as we know it now. ‘Death gives life real meaning’?

    I will decide this for myself, rather than for some bureaucrat or politician to decide this for me, thank you very much.

  8. Anonymous says

    The oversight in discussions about AI and computers qua superintelligence taking over human decision making is that at the end of the day AI can only exist in complex, interdependent, human-controlled networks. Someone can pull the plug.

    Indeed, elites can misuse AI and big data to try to increase situational awareness regarding what their serfs are up to and consequently use the MSM to try to shape their opinions and emotions, but these powers are the result of shortsighted and voluntary cooperation on the part of the serfs. One can refuse to cooperate by going “dark”; that is, dropping out of the networks.

    Therefore, it is their own fault when serfs suffer serfdom in the information age. As a minor example, I’m amazed that so many young people have no issues knowing that Facebook systematically violates their privacy by packaging and marketing their identities and surfing habits — selling their big data to the highest bidder. My two sons have repeatedly expressed that they have no issues with this. It seems they and their generation are willing to make this pact with the devil for the trivial convenience of using a single sign-on to communicate with family and “friends” … or, to fall back on Facebook’s original purpose, to try to pick up girls.

    Hence, the biggest threat from AI and big data is that each successive generation seems increasingly inured to its threats. They take the bait and knowingly walk into the maelstrom that leaves them vulnerable to control and manipulation via AI. They do not seem to care.

  9. Mr. Hack says

    I will decide this for myself, rather than for some bureaucrat or politician to decide this for me, thank you very much.

    I think that we have similar concerns, and are asking the same sorts of questions:

    I wonder what the practitioners of transhumanism have in store for the masses of people who might not evolve into the superhuman race of giants that is envisioned?

    However, as in the film Zardoz, it becomes difficult for the masses of ‘untermensch’ to deal with the elites that use ‘superintelligent’ computer systems to run civilizational structures. Could transhumanism incorporate selective breeding programs where those considered less intelligent (less desirable) end up being destroyed for the ‘good of humanity’, in order to purify and increase the quality of the general gene pool? The possibilities all sound so Darwinian and Nazi-like to me. Anybody out there know anything about this ideology?

  10. Mr. Hack says

    My reply was meant for Abelard Lindsey, not anonymous.

  11. “death is preferable to life eternal”

    There will never be such a thing as ‘life eternal’, since our own cosmos is not eternal. Irrespective of how many years you live, there is an eternity of non-being at the end. There is no escaping that.

  12. For example, to maximize paper clip production.

    Fun, classical argument.

    I am pretty skeptical (optimistic) about it, though. A superintelligence that is so fixated on the goal of paperclip maximization will be much less competitive than one that is more flexible about its goals and values. This will be particularly germane in a world of multiple competing superintelligences.

  13. Sean Connery already aptly showed us in his cult classic ‘Zardoz’ that death is preferable to life eternal, at least in this realm as we know it now. ‘Death gives life real meaning’? 🙂 – Mr. Hack

    Personally, I’ve been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I’m willing to grapple with those issues for as many centuries as it takes. – Patrick Hayden.

  14. You have a pretty inaccurate idea of what transhumanism is.

    Fundamentally, it means going beyond human limits. In a certain sense, the 14th-century Italian invention of eyeglasses was a transhumanist technology – it extended the “useful life” of many skilled craftsmen by 50%-100%, into the years when they were at their peak performance. Obviously this was great for everyone.

    Pacemakers, artificial limbs, embryo selection, genetic “spellchecking”, neural implants, mind uploading… these are all just extensions of the basic principle. Whether they are morally good or bad is a separate question. So far, apart from a few fringe ideologies, most people would agree that most technological progress has been a net good.

  15. Mr. Hack says

    There’s no doubt that I know very little about transhumanism; that’s why I’m indebted to you (an expert?) for help in finding out more. The examples that you’ve provided point to a movement whose main focus is the betterment of mankind and progress. But could there possibly be an ugly underbelly to transhumanism, steeped in an elitist culture, that includes euthanizing or marginalizing ‘undesirables’?

    Could transhumanism incorporate selective breeding programs where those considered less intelligent (less desirable) end up being destroyed for the ‘good of humanity’, in order to purify and increase the quality of the general gene pool? The possibilities all sound so Darwinian and Nazi-like to me. Anybody out there know anything about this ideology?

  16. Well, quite obviously an AI unit will be way more complicated than that. The argument is simply that even a very simple and innocent task like increasing paper clip production can result in the AI going haywire unless the task is specified in a very sophisticated way – which makes it very hard to avoid errors, because the more complicated an algorithm is, the more likely it is to contain bugs.

    I work in the financial industry. We all know about Knight Capital Group, and I have seen issues at my own employer, too. (We shut down our computer system after something like 14 seconds. The guys at Knight Capital waited for 45 minutes…) Suffice it to say, a computer system can do extremely crazy things for no apparent reason, flying under the radar of dozens of safety nets. Sometimes a bug sits there for a decade and never causes any problems, because it needs a unique combination of random circumstances that is not met for a long time. (I’ve seen one that was only triggered by a certain exchange message – but that message only started to appear after the exchange updated their system, a few months prior to the trading error. After that, it was only a matter of time before the bug surfaced…)

    And that’s a simple, dumb computer system, not an AI smarter than all of us combined. There’s a reason why my boss pays us a lot of money to look at the screen all day long. Lunch breaks are staggered so that at least one person is watching trading at all times. If I’m alone, I disconnect the system before taking a bathroom break. How could I do that if the system were smarter than me, and aware that I was about to shut it down or disconnect it, thwarting its plans? It would, obviously, try to deceive me.
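    To illustrate that kind of dormancy with a hypothetical sketch (invented for illustration – not the actual incident, and not anyone’s real code):

    ```python
    # Hypothetical sketch of a latent trading bug (invented, not a real case).
    # When written, the exchange sent only FILL and CANCEL execution reports,
    # so the fallback branch lay dormant -- until a feed upgrade years later
    # introduced PARTIAL_FILL and the "defensive" default corrupted state.

    def apply_execution(position, message):
        """Update our share position from one exchange execution report."""
        if message["type"] == "FILL":
            return position + message["qty"]
        if message["type"] == "CANCEL":
            return position
        # Dormant bug: unknown types were double-counted "to be safe".
        # Harmless for a decade; wrong the day PARTIAL_FILL arrives.
        return position + message.get("qty", 0) * 2

    position = 0
    for msg in [{"type": "FILL", "qty": 100},
                {"type": "PARTIAL_FILL", "qty": 50}]:  # new message type
        position = apply_execution(position, msg)

    print(position)  # 200 -- but the true position is 150
    ```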

    As to your argument about the competing AIs.

    First, it might be the case that one of the actors (perhaps the US federal government or Google) gets there years before anyone else (this was the case with nuclear bombs), or they might keep their AIs offline until one AI goes haywire and gets online (there are arguments that, since the AI will be smarter than us, we might not be able to contain it in an offline system if it tries really hard to escape). Either way, a haywire AI could have years to do damage before it faces real competition of any sort. And to destroy us all, it might need very little time – maybe just months or even weeks. (Of course, even without destroying us all, it could be quite catastrophic.)

    Second, the AIs will not have morals, consciousness, remorse, or any of those complicated things designed into humans by evolution – unless we put them in there. Consciousness is very difficult to even understand, let alone design, and design without bugs. So, in effect, your argument is that a bunch of extremely sharp (and, as a result, quite powerful) psychopaths competing with each other would produce a good outcome for us… I highly doubt it. The highly powerful, super-sharp, but psychopathic AIs competing with each other might cooperate against us, and even if they turned on each other, they might destroy humanity as collateral damage.

  17. As far as the laws of probability, my lady, these cannot be broken, any more than any other mathematical principle. But laws of physics and mathematics are like a coordinate system that runs in only one dimension. Perhaps there is another dimension perpendicular to it, invisible to those laws of physics, describing the same things with different rules, and those rules are written in our hearts, in a deep place where we cannot go and read them except in our dreams.
    “Carl Hollywood introduces her to two unusual characters”

    It has everything except a heart. Damn the competition, we’re winning ugly. We’ll be in a rat race with robots next. It’s an optical race. Buy a better scope. Sell the government micrometers, ’cause the gun don’t shoot. There’s fire in the heart.

  18. In the long run, the aggressive civilizations destroy themselves, almost always. It’s their nature. They can’t help it.
    Chapter 20 (p. 359)
    Superintelligence can speed up the process, and it’ll be rather unnatural – with competition and rats for dinner instead of Mother Nature and fish.

  19. Actually, AIs might be the answer to Fermi’s Paradox. A haywire AI that destroys humanity probably won’t create a post-human civilization – it will not be designed to be fruitful and multiply. It will probably be totally indifferent to its own survival (or to that of its “offspring” or of better AIs designed by itself), so if it destroys humanity for whatever reason, it won’t go on to build a robot civilization or anything of the sort. It will just keep producing those paper clips until the whole thing collapses. Its goals might be totally irrational and result in its own destruction; in fact, that seems quite likely, since no survival instinct will have been designed into it. (The exception is if it’s designed to control a weapons system – weapons systems need some basic survival instincts – but then again, that instinct could be faulty, designed as it would be by fallible humans and not by evolution.) It will simply follow some kind of goals, and if those goals result in its own destruction (along with all of humanity’s, or the biosphere’s), which is not unlikely, it just won’t care. (Military AIs might also be indifferent to their own destruction if that were necessary to reach their goals, whatever those might be.)

  20. Mao Cheng Ji says

    Consciousness is very difficult to even understand, let alone design, and design without bugs.

    Yes, I don’t think anyone knows what ‘consciousness’ is. From what I’ve read (Dennett, Nørretranders, Jaynes) it just might be a mere illusion, a small part of your brain making up silly stories of why you’re doing whatever it is you’re doing.

  21. When it comes to this topic, I think everyone’s guess is as good as anyone else’s. Nobody can truly foresee what a superintelligent computer might take for granted that we do not. That paperclip AI leading to the extermination of all organic life really does seem like the most likely outcome to me.

  22. Some comments disappeared here, but they’ll reappear after this comment is sent.

    UPDATE: indeed!

  23. hyperbola says

    At least part of “transhumanism” is far from new and already has an ugly history of the kind that you suggest. “Elite” participation was intimate and some of the “foundations” still wield a lot of power today.

    Eugenics and the Nazis — the California connection
    http://www.sfgate.com/opinion/article/Eugenics-and-the-Nazis-the-California-2549771.php

    Hitler and his henchmen victimized an entire continent and exterminated millions in his quest for a so-called Master Race.
    But the concept of a white, blond-haired, blue-eyed master Nordic race didn’t originate with Hitler. The idea was created in the United States, and cultivated in California, decades before Hitler came to power. California eugenicists played an important, although little-known, role in the American eugenics movement’s campaign for ethnic cleansing…

  24. Mr. Hack says

    And Karlin lived in California for 10 years and participated extensively in the transhumanist movement there, yet seems quite reticent about discussing this rather unsavory aspect of their philosophy? Hmmm…

  25. Greasy William says

    Anatoly is a transhumanist?

    Nationalism and transhumanism don’t mix. What is the point of nationalism if we are all just going to convert into cyborgs?

  26. Mr. Hack says

    Don’t ask me, I totally agree with you. He’s posted his adherence to both transhumanism and to nationalism within his biography: http://akarlin.com/about/

    Perhaps Karlin should try to explain how he manages to adhere to both of these seemingly incongruous philosophies? Hitler somehow managed this illusory trick, as hyperbola has so aptly pointed out? 🙂

  27. Greasy William says

    Hitler wasn’t a transhumanist, and he wasn’t as much of a nationalist as people think.

  28. Mr. Hack says

    Well, maybe he wasn’t officially a ‘transhumanist’, but he and his fellow Nazis exhibited a great interest in eugenics and the ‘transformation’ or ‘creation’ of a ‘master race’, as hyperbola points out. Similar-sounding to what transhumanism appears to be about… I wish that Anatoly would chime in and point out any similarities/differences with Nazi ideology relating to eugenics and gene manipulation.

  29. Daniel Chieh says

    Although I cannot speak for the brilliant Mr. Karlin, I believe that one can certainly adopt both views with an appropriate understanding of Hideous Strength.

  30. AFAIK Hitler wasn’t a nationalist but a “racialist” – in the narrow sense of favoring the Nordic “race.” I don’t think he thought of Germans as being superior to other Nordic peoples such as Swedes, but he viewed his Nordic group as superior to all other European (not to mention non-European) peoples, to the point of happily exterminating and enslaving them.

  31. Why can’t cyborgs be nationalists?

    http://imgur.com/zzB2Vz9

  32. Guess what, when you marry your college sweetheart instead of the crackwhore by the club dumpster, you’re personally practicing eugenics.

    The histrionics over “eugenics” are the most absurd thing ever.

    Transhumanists tend to be left-liberals anyway.

  33. To paraphrase Chesterton, if eugenics were limited to not marrying drug addicts of loose morals, nobody would have a problem with it.

    To quote Chesterton:

    In this sense people say of Eugenics, “After all, whenever we discourage a schoolboy from marrying a mad negress with a hump back, we are really Eugenists.” Again one can only answer, “Confine yourselves strictly to such schoolboys as are naturally attracted to hump-backed negresses; and you may exult in the title of Eugenist, all the more proudly because that distinction will be rare.”

  34. Greasy William says

    Hitler did not want to exterminate inferior races. He spoke often of a “brotherhood of all nations” and Hitler wasn’t the type of guy to say something like that if he didn’t mean it.

    Hitler (correctly) regarded Nordic DNA as superior to all others, and he wanted to absorb the other Nordic nations into Germany. He thought very highly of some non-Nordic groups, however, particularly Greeks and Northeast Asians. It is possible he also held some warm feelings for Arabs, although that is not as clear. He even liked blacks well enough, although he was more realistic about them than most white people today are.

    The only non-Jewish group Hitler really had it out for was the Slavs, but he didn’t want to kill all of them. He wanted to absorb their best and leave the rest with a state in Siberia that would not be able to threaten Germany’s security.

  35. Mr. Hack says

    I would be very hesitant to award the likes of ‘the brilliant Mr. Karlin’ the ability to lead society in the search for and discernment of ‘natural laws and objective values, which education should teach children to recognise.’ Although I haven’t long been acquainted with his philosophic undertows, from what I’ve been able to conclude so far, he’s not much different from your garden-variety, warmongering Russian nationalist fanatic, whose pretensions to any progressive values are highly suspect. 🙁

  36. In which case you’d have to be conscious of those stories, no?

    The only non-Jewish group Hitler really had it out for was the Slavs, but he didn’t want to kill all of them.

    He wanted to exterminate about 60% of them, which would be tens of millions of people. He would enslave and gradually Germanize the ones deemed most like Germans, and deport the rest to Siberia, where they would starve until reaching a population Siberia could naturally support with its poor agricultural land and harsh climate – perhaps 30 million people.

    He also hated Lithuanians for some reason and proposed treating them the same way.

    You are correct that he liked blacks.

  38. Mao Cheng Ji says

    The stories are consciousness. Every time we act, we act unconsciously, and after a short (milliseconds) delay our consciousness makes up a story of why we ‘decided’ to act this way.

    There are some variations. Nørretranders, for example, does believe in elements of ‘free will’, in this sense: our actions are initiated unconsciously, but consciousness can exercise the veto power. Within milliseconds of your unconscious mind deciding to act (to grab your colleague’s sandwich, for example), your consciousness gets the chance to stop it. So, in his view consciousness is basically the mechanism for self-restraint (in addition to making up stories).

  39. Daniel Chieh says

    Regressive leftist values are heresy. They need blamming.

  40. Anonymous says

    Human beings say so much when it comes to artificial intelligence.
    Ask yourself this simple question instead:
    if you wanted to stop it, would you be able to?
    No?
    You have lost control.