AI Performance Seems Linked to Moore’s Law

This blog post by Sarah Constantin has an impressively comprehensive tally of performance trends in AI across multiple domains.

[Figure: chess Elo ratings, humans vs. computers]

Three main things to take away:

  • In game performance, e.g. chess (see right, based on Swedish Chess Computer Association data), “exponential growth in data and computation power yields exponential improvements in raw performance.” That is, the relationship between compute and performance is linear.
  • This relationship may be sublinear in non-game domains, such as natural language processing (NLP).
  • “Deep learning” created discontinuous (but one-time) improvements in image and speech recognition, but not in strategy games or NLP. Its record on machine translation and arcade games (see below right) is ambiguous.
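The first bullet is easier to see once you recall how the Elo scale works: ratings are logarithmic in win odds, so a linear climb in Elo (as in the SSDF chess data) corresponds to exponentially growing raw strength. A minimal sketch of the standard Elo model (my illustration, not from Constantin’s post):

```python
# Standard Elo model: a rating gap D gives the stronger player an
# expected score of 1 / (1 + 10^(-D/400)), i.e. win odds of 10^(D/400).
# Odds are thus exponential in the rating difference, which is why a
# linear Elo trend implies exponential growth in raw playing strength.

def expected_score(d: float) -> float:
    """Expected score of the stronger player at rating gap d."""
    return 1.0 / (1.0 + 10 ** (-d / 400.0))

def win_odds(d: float) -> float:
    """Odds in favour of the stronger player at rating gap d."""
    return 10 ** (d / 400.0)

for gap in (0, 200, 400, 800):
    print(gap, round(expected_score(gap), 3), round(win_odds(gap), 1))
```

Each extra 400 Elo points multiplies the win odds by another factor of ten, so constant Elo gains per compute doubling are exactly the “exponential compute in, exponential raw performance out” pattern described above.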

[Figure: arcade games, human vs. computer performance]

So “deep learning” might not have been as transformational as the tech press would have had you believe, and as Miles Brundage observed, has largely been about “general approaches for building narrow systems rather than general approaches for building general systems.”

And we also know that Moore’s Law has been slowing down of late.

If this is basically accurate, then the spate of highly visible AI successes we have been seeing in quick succession of late – surpassing peak human performance in Go in 2016, and in multi-player No Limit poker a couple of months ago – could end up being a one-off coincidence that will be followed by another AI winter.

And we will have to do something cleverer than naively projecting Kurzweil’s graphs forwards to get to the singularity.

Anatoly Karlin is a transhumanist interested in psychometrics, life extension, UBI, crypto/network states, X risks, and ushering in the Biosingularity.


Inventor of Idiot’s Limbo, the Katechon Hypothesis, and Elite Human Capital.


Apart from writing books, reviews, travel writing, and sundry blogging, I Tweet at @powerfultakes and run a Substack newsletter.


  1. Well, if quantum computing lives up to its recent promises of multiple orders-of-magnitude increases in calculations, perhaps that will be just clever enough. Not that we are going to survive the singularity, or even know it occurred, if Skynet decides to eliminate us in a split second. Surely utilizing mechanical killer robots with humanoid-shaped bodies or deploying flying gatling-gun cannon platforms seems like a tedious, messy, and time-consuming way to cleanse the planet of life.

    A dwindling human race battling tragically to the end seems like just so much human dramatic delusion.

  2. Quantum computing only helps with a very limited range of problems.
    The element of skill in poker should be far easier for machines than board games, since they don’t have to face the chief hindrance for human players: they have no emotions to cope with.

  3. Anatoly,

    What do you think the conditional probability is that superintelligence emerges on a biological platform of some kind, given that it emerges at all? My gut says greater than 50% at this point, but I know too little…

  4. This would correspond to the “direct biosingularity” here, where I gave it as 5%.

    I don’t think it’s very high, because silicon substrates have huge advantages; and even if we can’t launch a technological singularity on the backs of ten billion people with an average IQ of 90 (today), we might if we add a billion or two with an average IQ of 175 (GWAS for IQ + CRISPR in 50-100 years).

    That said, it certainly isn’t impossible.

    (3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

  5. Thx!

  6. Abelard Lindsey says

    …could end up being a one-off coincidence that will be followed by another AI winter.

    This has been my view all along.

    Moore’s Law has only ever been about the scaling of semiconductors; it has never applied to software. Software, rather, seems to advance through a disruptive one-time jump followed by a multi-decade period of slow incremental improvement. The last disruptive jump was the development of high-level languages like C and the accompanying compiler technology, around 1970. “Deep learning” is another disruptive jump, which will most certainly be followed by another long, slow development period of several decades.

    I’ve studied the deep learning stuff enough to realize that it is a genuine revolution in software. Its main impact will be decent machine vision and motion control in robotics, and comparable capabilities in office software. It will impact a lot of industries. However, it is another “one-time” jump that will be followed by another “quiet” period of 2-3 decades. It will most certainly NOT lead to sentient AI or any of the other accoutrements of a “singularity”.

    BTW, I think one of the biggest product results of deep learning will be sexbots.

  7. Abelard Lindsey says

    I work in industrial and factory automation. Machine vision is currently the most significant limitation of such automation. Since, according to the linked website, “deep learning” algorithms show more impact in image processing than anything else, they ought to be extremely useful for developing machine vision capabilities for robots and other automation systems.
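    The low-level operation those deep vision systems stack, layer upon layer, is just 2D convolution: sliding a small kernel over an image to detect local features like edges. A minimal pure-Python sketch (illustrative only, not any particular vision library’s API):

```python
# Minimal 2D convolution, the building block of the convolutional
# networks behind modern machine vision (illustrative sketch only;
# real systems learn the kernel values rather than hand-coding them).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A vertical-edge kernel responds where intensity changes left to right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
img = [[0, 0, 1, 1]] * 4  # a step edge down the middle of the image
print(conv2d(img, edge_kernel))  # → [[3, 3], [3, 3]]
```

    A deep network stacks many such filters, learned from data instead of designed by hand, which is why the one-time jump hit image processing hardest.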

    Hence, deep learning AI is likely to be more useful to my work than that of any of the rest of you reading this.

    I’m sure that Cognex, the 800-lb gorilla in machine vision, has a very active R&D program to develop deep learning for machine vision applications.