AI Safety Debate with Roko

 

Twitter Space to Clarify Why I’m Opting for Effective Acceleration (with Caveats) 

Even as GPT-3/4 has semi-mainstreamed it, the AI timelines discourse has, over the past several months, become sharply riven between AI safetyists and “effective accelerationists”.

In this Twitter space from Feb 27, 2023, representatives of each side – Roko Mijic and myself, respectively – attempt to reconcile these philosophical differences.

I am going to write up my arguments in more detail sometime later, but they boil down to the following:

  • Bootstrapping God-like intelligence as a single agent is probably very difficult, if not unfeasible. Alignment is hard, and this applies to a malevolent AGI’s own agents as well, which will develop their own values and interests.
  • Maximizers are irrational and will be outcompeted by more rational agents, assuming the latter are given the space to actually flourish and develop.
  • There are a multitude of risks inherent in creating a singleton (for that is what is needed) to manage AI safety in a coherent global fashion. These risks include:
    • Lost opportunities in productivity gains and poverty alleviation, which result in real damage to welfare on account of theoretical blog posts.
    • Strongly reduced chances of achieving radical life extension.
    • Long-term sector capture and AI safety’s transformation into a quasi-religious cult, as occurred with civilian applications of nuclear power and nuclear explosions.
    • The AI sector’s transformation into the noospheric equivalent of a monoculture ecosystem, which is inherently more vulnerable to shocks and probably voids any dubious benefits of restrictive AI regimes.
    • Potential stagnation and even retreat in rates of technological growth, due to long-term dysgenic trends.
    • This period will also be one in which existential risks of other kinds remain in play, and not necessarily at a constant background rate.
  • The very fact that we are experiencing these observer moments suggests they are extensively recalled or simulated in posthuman worlds, which in turn implies we are on the cusp of a good singularity.

WAGMI: The %TOTAL Maximizer

This short story about “AI on blockchain” was originally published at my Substack on November 24, 2021. I subsequently updated it and republished it here, as well as at my Mirror blog Sublime Oblivion, where you can collect it as an NFT.

In the half year since I wrote it, the concerns it implicitly raises have become far more acute. Predicted timelines for the appearance of “weak AGI” at Metaculus have compressed sharply in the past few months, as the scaling hypothesis – that is, the idea that banal increases in model size lead to discontinuous increases in capability – has so far failed to break down. Meanwhile, there are now projects aiming to implement fully on-chain AI. One such project is Giza, which wants to build such models on top of a second/third layer on Ethereum that allows for intensive computation while inheriting Ethereum’s security and decentralization.
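
To make the scaling hypothesis concrete: the published scaling-law fits (e.g. Kaplan et al., 2020) model test loss as a smooth power law in parameter count, with the discontinuous capability jumps riding on top of that smooth trend. The sketch below simply evaluates that power-law form using the approximate constants reported in that paper; it is an illustration of the shape of the curve, not a claim about any particular model.

```python
# Illustrative sketch of the power-law scaling of test loss with model size,
# L(N) ~ (N_c / N)^alpha_N, using the approximate constants from
# Kaplan et al. (2020), "Scaling Laws for Neural Language Models".

N_C = 8.8e13      # critical (non-embedding) parameter count, approx.
ALPHA_N = 0.076   # power-law exponent, approx.

def scaling_loss(n_params: float) -> float:
    """Predicted test loss for a model with n_params non-embedding parameters."""
    return (N_C / n_params) ** ALPHA_N

# "Banal" increases in model size buy smooth, predictable loss reductions:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss ~ {scaling_loss(n):.2f}")
```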

“Why is putting artificial intelligence on chain a good idea?” asks one piece on this topic. Why not indeed. Think of the possibilities! 😉
