Twitter Space to Clarify Why I’m Opting for Effective Acceleration (with Caveats)
Even as GPT-3/4 has semi-mainstreamed the AI timelines discourse, that discourse has become sharply riven in the past several months between AI safetyists and “effective accelerationists”.
In this Twitter space from Feb 27, 2023, representatives of the two sides – Roko Mijic and myself, respectively – attempt to reconcile these philosophical differences.
I am going to write up my arguments in more detail sometime later, but they boil down to the following:
- Bootstrapping God-like intelligence as a single agent is probably somewhere between very difficult and unfeasible. Alignment is hard, and this would apply to a malevolent AGI’s own agents as well, which will develop their own values and interests.
- Maximizers are irrational and will be outcompeted by more rational agents, assuming the latter are given space to actually flourish and develop.
- There are a multitude of risks inherent in creating a singleton (for that is what would be needed) to manage AI safety in a coherent global fashion. These risks include:
  - Lost opportunities in productivity gains and poverty alleviation, resulting in real damage to welfare on account of theoretical blog posts.
  - Strongly reduced chances of achieving radical life extension.
  - Long-term sector capture and AI safety’s transformation into a quasi-religious cult, as occurred with civilian applications of nuclear power and nuclear explosions.
  - The AI sector’s transformation into the noospheric equivalent of a monoculture ecosystem, which is inherently more fragile to shocks and probably negates whatever dubious benefits restrictive AI regimes might offer.
  - Potential stagnation, or even retreat, in rates of technological growth due to long-term dysgenic trends.
  - Such a period would also be one in which existential risks of other kinds remain in play, and not necessarily at a constant background rate.
- The very fact that we are experiencing these observer moments suggests that they are extensively recalled or simulated in posthuman worlds, which implies we are on the cusp of a good singularity.