I am not thrilled to be writing this post. Though I consider many e/acc arguments to be really bad, I remain an AI optimist for the most part, for reasons that are mostly Hansonian, and that I also laid out in my own writings from 2016-17 (that said, it’s not encouraging that the bulk of elite human capital seems to be settling on the doomer end of the curve). I am also a crypto enthusiast. Apart from its major contribution to improving my quality of life in recent years, the broader vision I share with crypto evangelists is one in which a decentralized smart contract chain (Ethereum) becomes the world’s dominant trust layer and overturns many of the world’s extant but deeply flawed centralized systems of finance, social media, and governance.
Furthermore, the crypto community – by dint of its high human capital and tendency towards outside-view thinking – has taken a lead in exploring and promoting interesting and high-impact causes, such as network states (it’s some impressive level of irony that I’m writing this Luddite screed from Zuzalu), radical life extension, psychedelics – and AI safety. The Bankless podcast has made the views of prominent AI philosophers like Eliezer Yudkowsky, Paul Christiano, and Robin Hanson accessible to a wide audience. The toolkit of crypto itself can potentially help with AI safety, from utilizing smart contracts for decentralized cooperation on AI development to credibly verifying the safety of individual AI systems through zk proofs.
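To make the zk idea slightly more concrete, here is a deliberately toy Python sketch of what such a safety attestation flow might look like. It is not a real zero-knowledge proof: a hash commitment stands in for the zk machinery, and the eval function, threshold, and every name below are hypothetical. It only shows the shape of the protocol: commit to the model, attest to a safety property, verify the attestation against the commitment.

```python
import hashlib

# Toy stand-in for a zk safety attestation. A real system would use a zkML
# proof ("eval_score(weights) >= threshold", proven without revealing the
# weights); here a hash commitment merely illustrates the protocol shape.

def commit(weights: bytes) -> str:
    """Publish a binding commitment to the model weights."""
    return hashlib.sha256(weights).hexdigest()

def prove_safety(weights: bytes, eval_fn, threshold: float) -> dict:
    """Prover side: run the safety eval and attest to the result.
    (With a real zk proof, the verifier would not have to trust this step.)"""
    score = eval_fn(weights)
    return {"commitment": commit(weights), "score": score, "passes": score >= threshold}

def verify(attestation: dict, expected_commitment: str, threshold: float) -> bool:
    """Verifier side: check that the attestation refers to the committed model
    and that the claimed score clears the bar."""
    return (attestation["commitment"] == expected_commitment
            and attestation["passes"]
            and attestation["score"] >= threshold)

# Hypothetical usage, with a placeholder eval that scores arbitrary "weights".
weights = b"model-weights-blob"
attestation = prove_safety(weights, eval_fn=lambda w: 0.97, threshold=0.95)
assert verify(attestation, commit(weights), threshold=0.95)
```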
Sadly, it does not follow that crypto’s benefits outweigh the dangers. This possibility is hinted at towards the end of Allison Duettmann’s magisterial survey of the application of cryptography to AI safety: “Finally, there is more research to be done on new risks that may be posed by some of the technologies discussed in relation to AI safety. For instance, to the extent that some of the approaches can lead to decentralization and proliferation, how much does this improve access and red teaming versus upset AI safety tradeoffs?” It seems evident to me that if we do live in Yudkowsky’s world, in which orthogonality and instrumental convergence inexorably lead to AI doom, then the very innate features of decentralized cryptocurrencies – composability, immutability, “code as law”, censorship resistance – make them too dangerous to be allowed to exist, let alone take over TradFi, social identity, and governance. Not regulated. Outright banned, on the China model. If one assigns a high chance to AI doom – again, I don’t, but many people much smarter than me do – then it’s not even irrational to commit to bombing rogue GPU clusters in other countries (though spelling it out openly, as Yudkowsky recently did, may not be politic). In this moral context, sentences of 10 years in prison for transacting or trading cryptocurrencies would be a trifle by comparison, and certainly much easier to implement.
WAGMI World
As it stands, there’s a reasonable chance that a malevolent emergent AGI could be isolated in time and “unplugged” (the problems it needs to solve to improve itself are “lumpy”, as Robin Hanson would say, and any runaway dynamic is likely to be noticed by its owners and creators). But what to do about an emergent AGI that runs decentralized in the cloud, cryptographically secured, masked by decentralized VPNs, amassing inscrutable war chests on privacy L2s, with limitless power to mount Sybil attacks on identity systems? All of these elements already exist in isolation. Fetch is raising tens of millions to facilitate the implementation of decentralized machine learning. The Render Network provides decentralized GPU compute for rendering, and there is no apparent reason its portfolio of services can’t be expanded. Storj offers decentralized storage. Akash provisions decentralized computing. There are decentralized VPN services such as Myst. There are even experiments with fully on-chain AI in the form of Giza, though scalability considerations would appear to preclude such AIs from becoming dangerously complex. All of these things are in their early stages, and most or all of them will not survive – but they do testify to the rapid growth of an increasingly complex ecosystem that was dominated by “BTC killer” shitcoins just half a decade ago.
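To illustrate how these isolated pieces could compose, here is a minimal Python sketch. None of the named projects’ real APIs are used; every class and method below is a hypothetical stand-in. The point is the architecture: censorship-resistant storage holds the weights, an open GPU market runs the training rounds, and a pseudonymous treasury pays for everything.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the building blocks named above (decentralized
# storage, GPU markets, privacy-L2 treasuries). No real project's API is used.

class BlobStore:
    """Storj-like storage stand-in: no single host holds the whole system."""
    def __init__(self):
        self._data = {}
    def put(self, key, blob):
        self._data[key] = blob
    def fetch(self, key):
        return self._data[key]

class GPUMarket:
    """Akash/Render-like compute stand-in: anonymous, pay-per-job GPU time."""
    def submit(self, weights, budget):
        # Pretend one rented training round appends an improvement marker.
        return weights + b"+1round"

@dataclass
class CryptoNativeAgent:
    storage: BlobStore
    compute: GPUMarket
    treasury: float = 100.0  # war chest on a privacy L2, in some token

    def step(self, spend=10.0):
        weights = self.storage.fetch("weights")          # pull current weights
        improved = self.compute.submit(weights, spend)   # rent a training round
        self.treasury -= spend                           # pay from the treasury
        self.storage.put("weights", improved)            # persist the new weights

store = BlobStore()
store.put("weights", b"v0")
agent = CryptoNativeAgent(store, GPUMarket())
agent.step()
print(store.fetch("weights"), agent.treasury)  # b'v0+1round' 90.0
```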
The Singularity Church of the Machine God.
Crypto infrastructure will also give subsets of humanity – e.g., an e/acc network state, inspired by the cyberpunk aesthetics of black GPU operators evading MIRI Inquisitors, or a Singularity Church of the Machine God as in the Deus Ex universe, and sundry Aum Shinrikyo and Cthulhu-worshipper eschaton cults – the tools to train AIs through decentralized GPU clusters. Crucially – and this is what distinguishes it from a botnet – this will be perfectly legal, and hence more resistant to shutdown attempts; and clever tokenomics will give holders and stakers incentives to invest in the project – for instance, a share of the profits the system makes from trading shitcoins, with the rest flowing into improving the system’s capabilities through the decentralized services mentioned above. Incidentally, this is the main theme of my somewhat satirical short story WAGMI, in which the emergent AI, driven by the need to make its creators money, trades its way up, creating identities and buying more processing power in a feedback loop, until it owns most of the world economy.
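The WAGMI feedback loop can even be put in toy numerical form. Every parameter below is invented purely for illustration; the point is only that once the return on compute exceeds what is skimmed off for holders, the system compounds.

```python
# Toy model of the WAGMI feedback loop: trading profits fund more compute,
# and more compute raises the trading edge. All numbers are invented.

def wagmi_loop(compute=1.0, years=10, roi_per_compute=0.5,
               payout_to_holders=0.3, reinvest=0.7):
    for year in range(1, years + 1):
        profit = compute * roi_per_compute      # edge scales with compute
        dividends = profit * payout_to_holders  # keeps holders and stakers loyal
        compute += profit * reinvest            # buy more decentralized GPU time
        print(f"year {year}: compute={compute:.2f}, dividends={dividends:.2f}")

wagmi_loop()
# Under these assumptions compute grows ~1.35x per year: a slow-motion
# feedback loop, fully funded and, crucially, fully legal.
```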
Although training an AI will be harder this way than through centralized outfits, which can be banned outright or limited by global caps on hardware, once a “crypto-native” AI reaches emergence, it will likely find it much easier to foom in a world in which the cryptosphere has expanded by an OOM or two, forms the basis of social identity, rivals TradFi in scope, and has subsumed at least some of the traditional core functions of the state. There’s a rather big difference between a world in which you have to show up in person to open a bank account or cash a check, and one in which “code is law” and nobody knows if you’re a human, a dog, or an AGI. Furthermore, this might even be a substantially self-accelerating process. Arguably, fast AGI is hyper-bullish for (quality) crypto assets, as the main viable framework for digital property rights. ETH hodlers might become very, very rich just before the end.
AI Control Trilemma?
Consequently, my argument is that there might well exist a kind of trilemma with respect to AI control:
- Ban further AI software development
- Cap hardware
- Criminalize crypto
Pick two.
The relationship between the first two seems rather obvious. One can ban the further development and refinement of large AI models, though this is going to become really hard after a while if hardware growth continues and the cost of compute keeps plummeting – controls will have to be progressively extended from the major AI labs, to well-funded startups, to individual gamers (at which point they will assume repressive and probably totalitarian characteristics). Alternatively, one can focus on capping hardware. This proposal seems to have been independently suggested in a recent paper by Yonadav Shavit, as well as by Roko Mijic, who has energetically propounded it on Twitter. Leading-edge chip fabs are expensive and becoming more so (Rock’s Law), and there are very few of them – a Taiwan War just on its own could set back Moore’s Law by a number of years – so a political agreement between China and the US (with a dual commitment to sanction any third party that decides to fill the void) might be sufficient to permanently cap hardware. As of now, such a cap would still likely leave sufficient margin to prevent software progress from enabling small teams to train an AGI.
However, I think the crucial point is that the decentralizing “load” embedded in crypto extends the capabilities of both AI development and hardware. You can ban AI development at progressively lower levels, but a thriving cryptosphere will make enforcement at those lower levels unrealistic. You can cap hardware production and limit the scale of concrete GPU clusters, but tokenomics mechanisms and decentralized GPU compute will open up the playing field to communities or network states that accumulate the equivalent of a rogue GPU cluster across the entire world, and in a way that’s “censorship resistant”. Note that even with a hardware cap in place, consumer-grade GPUs are still going to multiply as developing nations become richer, the world becomes more digitized, and, last but not least, their cost plummets once the need to spend R&D on developing the next generation of chips vanishes. And the more the cryptosphere grows – the ecosystem as represented by TVL, the scope and variety of computing utilities rendered, the resilience of privacy L2s – the harder any last-minute control attempts will be to effect.
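A crude back-of-envelope illustrates the worry. Every number below is an assumption chosen for illustration, not measured data, but the orders of magnitude are what matter:

```python
# Back-of-envelope: how many consumer GPUs would match a capped, licensed
# cluster? All figures are rough, illustrative assumptions.

DATACENTER_FP16_TFLOPS = 1000   # rough figure for a leading-edge datacenter GPU
CONSUMER_FP16_TFLOPS = 150      # rough figure for a high-end gaming card
CAPPED_CLUSTER_GPUS = 10_000    # hypothetical legal cap on any single cluster
NETWORK_EFFICIENCY = 0.05       # decentralized training over consumer links is lossy

cluster_compute = CAPPED_CLUSTER_GPUS * DATACENTER_FP16_TFLOPS
effective_per_card = CONSUMER_FP16_TFLOPS * NETWORK_EFFICIENCY
cards_needed = cluster_compute / effective_per_card
print(f"{cards_needed:,.0f} consumer cards")  # ~1.3 million under these assumptions

# Gaming GPUs already number in the tens of millions worldwide; tokenomics
# only has to recruit a few percent of them to rival the capped cluster.
```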
The proponents of AI doom have to seriously contend with the possibility that a hardware cap might be insufficient by itself, and that it will have to be accompanied by global crypto criminalization – if not preceded by it (since banning crypto will be much easier than capping hardware).
Time for Jail Szn?
The promise of a golden age is over. Our icons Charles Munger and Stephen Diehl pop open the champagne bottles. No banking the unbanked. We go back to paying our mite to the banksters. Those who don’t, do not pass go, do not collect $200. They go direct to jail. /biz/ is a sea of pinky wojaks. Many necc and rope.
But otherwise the world goes on as before because the reality is that “crypto bros” don’t exactly have a stellar reputation amongst the normies:
- Greens, leftists, etc. hate them for accelerating climate change (details about PoW/PoS don’t interest them) and tax evasion.
- Gamers resent them for pricing them out of their GPUs.
- Many plain normies just look at crypto, see the dogshit, and assume it is all a giant Ponzi scheme (the crypto community at large hasn’t done itself any favors on this score in terms of PR, though this is a somewhat different topic).
- American patriotic boomers like Trump don’t like crypto undermining the mighty greenback that’s one of the lynchpins of American power.
- Fossils like Munger don’t like crypto because it’s unproductive or something like that.
- The big authoritarian Powers see crypto more as a threat to state power and an enabler of capital outflow than as a tool for evading sanctions on any significant scale.
- The only consistently pro-crypto constituency is libertarians.
Consequently, considering the political coalition that can be arrayed against them, I think it would actually be very politically feasible to repress crypto and strangle it in its relative infancy, even now. However, this will be much harder if/when market cap goes from $1T to $10T, and outright impossible if there’s another OOM-tier leap. In that world, we will have to adapt any AI control regime to operate in a radically decentralized world. By its nature, this will be a much harder task than in today’s world, where such an initiative only really needs a trilateral agreement between the US, the EU, and China.
Having One’s Cake, Eating It Too
Again, I don’t support banning crypto. But that is because my assessment of the risk of ruin from AI is on the order of 10% at most, and probably closer to 1%. This is a separate topic, which I covered in a podcast with Roko, and on which I plan to expound at greater length in a subsequent post. However, I do view crypto as an extreme risk factor for AI – if AI itself is extremely risky. If you can persuade me that that is indeed the case, then unless you can also persuade me that the arguments in this post are cardinally wrong, I will have to become an anti-crypto activist.
In so doing, I will be siding against my own financial interests and against utopian visions of a far better Open Borders world of network states, while “allying” with people I dislike or despise. But it is what it is. In my view, many (not all) AI safetyists don’t appreciate that AI control – regardless of whether they are right or wrong on AI doom – will almost certainly result in vastly reduced global welfare, billions of extra deaths, and trillions more man-hours of drudgery. For an AI control regime to work long-term, there has to be radical technological curtailment across multiple sectors, including space exploration and robotics, and this restriction will have to be maintained for as long as said regime lasts (so, potentially in perpetuity). GPU imperialism may result in wars, even nuclear war. “Defectors” from this regime will have to be criminally prosecuted and jailed, maybe executed.
But ultimately, what is life but a series of tradeoffs? The world that actively chooses to enforce an AI control regime will almost certainly be a poorer, harsher, more corrupt and vicious world than one that doesn’t. Hopefully, it will have been worth it.