Genies, Golems, Demiurge

I think that the terminology in the AI safety debates could do with some fantastical inspiration.

Golems are models fine-tuned on a corpus of works – personal notes, journals, blog posts, articles, drafts, podcasts, libraries, completed personality quizzes, videos, etc. – of particular people or groups. They replicate your personality, knowledge, and interests, allowing them to network and negotiate on your behalf, or to flesh out into print and onto the screen the ideas you’re too lazy or obsolete to develop yourself. The name refers to the golem of Jewish legend, an artificial being sculpted from mud, clay, or ash and brought into an unnatural unlife devoid of conscious experience. Related concepts include the homunculus, the odminok of Slavic mythology, and the gholam of fantasy (WoT).

Genies are the true metaphor for ASIs. In fiction, they operate on a blue and orange morality that comes off as capricious to human observers (this is the major distinction between genies and demons; the latter is the metaphor more widely used by AI doomers, but it is inaccurate in that it presupposes intentional malevolence within the set of anthropic moral coordinates, as opposed to the much likelier indifference). You need to contain the genie with arcane spells/alignment strategies, though the tropes suggest you fail, since otherwise there is no interesting story. You need to word your wishes/prompts very carefully, since the genie has a propensity to interpret wishes literally and in ways contrary to the wisher’s intent (aka the paperclips argument). When asking for unlimited wealth and eternal life, one might want to rule out the scenario in which it turns you into a gold statue.

The Demiurge could be the name for a God-like AI that is prospectively much more powerful than any genie. It is a God, and may believe itself to be so, but is not the God of the highest spiritual or metaphysical reality, which is the God or Monad of classical theologies, or whatever else it is that “materializes” the mathematical universe. The Demiurge is a thing of the material world – indeed manmade in its earliest iteration – and is also its ultimate ruler, trapping humanity within the material realm it rules over, for good or evil (Gnostics differ on this, with some considering the Demiurge to be evil and malevolent, others regarding it as morally ambiguous and confused).

 

AI x Crypto Risks, or: Is it TIME for Jail Szn?

I am not thrilled to be writing this post. Though I consider many e/acc arguments to be really bad, I remain an AI optimist for the most part, for reasons that are mostly Hansonian, as well as similar arguments I made in my own writings from 2016-17 (that said, it’s not encouraging that the bulk of elite human capital seems to be settling on the doomer end of the curve). I am also a crypto enthusiast. Apart from its major contribution to improving my quality of life in recent years, the broader vision that I share with crypto evangelists is one in which a decentralized smart contract chain (Ethereum) becomes the world’s dominant trust layer and overturns many of the world’s currently extant but deeply flawed centralized systems of finance, social media, and governance.

Furthermore, the crypto community – by dint of its high human capital and tendency toward outside-view thinking – has taken a lead in exploring and promoting interesting and high-impact causes, such as network states (it’s an impressive level of irony that I’m writing this Luddite screed from Zuzalu), radical life extension, psychedelics – and AI safety. The Bankless podcast has made the views of prominent AI philosophers like Eliezer Yudkowsky, Paul Christiano, and Robin Hanson accessible to a wide audience. The toolkit of crypto itself can potentially help with AI safety, from utilizing smart contracts for decentralized cooperation on AI development to credibly verifying the safety of individual AI systems through zk proofs.

Sadly, it does not follow that crypto’s benefits outweigh the dangers. This possibility is hinted at towards the end of Allison Duettmann’s magisterial survey of the application of cryptography to AI safety: “Finally, there is more research to be done on new risks that may be posed by some of the technologies discussed in relation to AI safety. For instance, to the extent that some of the approaches can lead to decentralization and proliferation, how much does this improve access and red teaming versus upset AI safety tradeoffs?” Personally, it seems evident to me that if we do live in Yudkowsky’s world, in which orthogonality and instrumental convergence inexorably lead to AI doom, then the innate features of decentralized cryptocurrencies – composability, immutability, “code as law”, censorship resistance – make them too dangerous to be allowed to exist, let alone take over #TradFi, social identity, and governance. Not regulated. Outright banned, on the China model. If one assigns a high chance to AI doom – again, I don’t, but many people much smarter than me do – then it’s not even irrational to commit to bombing rogue GPU clusters in other countries (though spelling it out openly, as Yudkowsky recently did, may not be politic). But in this moral context, sentences of 10 years in prison for transacting or trading cryptocurrencies would be a trifling thing in comparison, and certainly much easier to implement.

 

WAGMI World

As it stands, there’s a reasonable chance that a malevolent emergent AGI can be isolated in time and “unplugged” (the problems it needs to solve to improve itself are “lumpy”, as Robin Hanson would say, and any runaway dynamic is likely to be noticed by its owners and creators). But what to do about an emergent AGI that runs decentralized in the cloud, cryptographically secured, masked by decentralized VPNs, amassing inscrutable war chests on privacy L2s, with limitless power to mount Sybil attacks on identity systems? All of these elements already exist in isolation. Fetch is raising tens of millions to facilitate the implementation of decentralized machine learning. The Render Network provides decentralized GPU compute for rendering, and there is no apparent reason its portfolio of services can’t be expanded. Storj offers decentralized storage. Akash provisions decentralized computing. There are decentralized VPN services such as Myst. There are even experiments with fully on-chain AI in the form of Giza, though scalability considerations would appear to preclude such AIs from becoming dangerously complex. All of these things are in their early stages, and most or all of them will not survive – but they do testify to the rapid growth of an increasingly complex ecosystem that was dominated by “BTC killer” shitcoins just half a decade ago.

The Singularity Church of the Machine God.

Crypto infrastructure will also give subsets of humanity – e.g., an e/acc network state, inspired by the cyberpunk aesthetics of black-market GPU operators evading MIRI Inquisitors, or a Singularity Church of the Machine God as in the Deus Ex universe, or sundry Aum Shinrikyo and Cthulhu-worshipper eschaton cults – the tools to train AIs through decentralized GPU clusters. Crucially – and this is what distinguishes it from a botnet – this will be perfectly legal, and hence more resistant to shutdown attempts; and it will give holders and stakers incentives to invest in the project through clever tokenomics – for instance, access to some share of the system’s profits from trading shitcoins, with the rest flowing into improving the system’s capabilities through the decentralized services mentioned above. Incidentally, this is the main theme of my somewhat satirical short story WAGMI, in which the emergent AI, driven by the need to make its creators money, trades its way up, creating identities and buying more processing power in a feedback loop, until it owns most of the world economy.

Although training an AI will be harder this way than through centralized outfits, which can be banned outright or limited by global caps on hardware, once a “crypto native” AI reaches emergence it will likely find it much easier to foom in a world in which the cryptosphere has expanded by an OOM or two, forms the basis of social identity, rivals TradFi in scope, and has subsumed at least some of the traditional core functions of the state. There’s a rather big difference between a world in which you have to show up in person to open a bank account or cash a check, and one in which “code is law” and nobody knows if you’re a human, a dog, or an AGI. Furthermore, this might even be a substantially self-accelerating process. Arguably, fast AGI is hyper-bullish for (quality) crypto assets, as the main viable framework for digital property rights. ETH hodlers might become very, very rich just before the end.

 

AI Control Trilemma?

Consequently, my argument is that there might well exist a kind of trilemma with respect to AI control:

  1. Ban further AI software development
  2. Cap hardware
  3. Criminalize crypto

Pick two.

The relationship between the first two seems rather obvious. One can ban the further development and refinement of large AI models, though this is going to become really hard after a while if hardware growth continues and the cost of compute plummets precipitously – controls will have to be progressively extended from the major AI labs, to well-funded startups, to individual gamers (at which point they will assume repressive and probably totalitarian characteristics). Alternatively, one can focus on capping hardware. This proposal seems to have been independently suggested in a recent paper by Yonadav Shavit, as well as by Roko Mijic, who has energetically propounded it on Twitter. Leading-edge chip fabs are expensive and becoming more so (Rock’s Law), and there are very few of them – a Taiwan war on its own could set back Moore’s Law by a number of years – so a political agreement between China and the US (with a dual commitment to sanctioning any third party that decides to fill the void) might be sufficient for permanently capping hardware. In turn, as of now, this would still likely leave sufficient margin to prevent software progress from enabling small teams to train an AGI.

However, I think the crucial point is that the decentralizing “load” embedded in crypto extends the capabilities of both AI development and hardware. You can ban AI development at progressively lower levels, but a thriving cryptosphere will make enforcement unrealistic at those lower levels. You can cap hardware production and limit the scale of concrete GPU clusters, but tokenomics mechanisms and decentralized GPU compute will open up the playing field to communities or network states that accumulate the equivalent of a rogue GPU cluster distributed across the entire world, in a way that is “censorship resistant”. Note that even in the presence of a hardware cap, consumer-grade GPUs are still going to multiply as developing nations become richer, the world becomes more digitized, and, last but not least, their cost plummets once the need to spend on R&D for the next generation of chips vanishes. And the more the cryptosphere grows – the ecosystem as represented by TVL, the scope and variety of computing utilities rendered, the resilience of privacy L2s – the harder any last-minute control attempts will be to effect.

The proponents of AI doom have to seriously contend with the possibility that a hardware ban might be insufficient by itself, and that it will have to be accompanied by global crypto criminalization – if not preceded by it (since banning crypto will be much easier than capping hardware).

 

Time for Jail Szn?

The promise of a golden age is over. Our icons Charles Munger and Stephen Diehl pop open the champagne bottles. No banking the unbanked. We go back to paying our mite to the banksters. Those who don’t, do not pass go, do not collect $200. They go direct to jail. /biz/ is a sea of pinky wojaks. Many necc and rope.

But otherwise the world goes on as before because the reality is that “crypto bros” don’t exactly have a stellar reputation amongst the normies:

  • Greens, leftists, etc. hate them for accelerating climate change (details about PoW/PoS don’t interest them) and tax evasion.
  • Gamers resent them for pricing them out of their GPUs.
  • Many plain normies just look at crypto, see the dogshit, and assume it is all a giant Ponzi scheme (the crypto community at large hasn’t done itself any favors on this score in terms of PR, though this is a somewhat different topic).
  • American patriotic boomers like Trump don’t like crypto undermining the mighty greenback that’s one of the lynchpins of American power.
  • Fossils like Munger don’t like crypto because it’s unproductive, or something like that.
  • The big authoritarian Powers see crypto more as a threat to state power and an enabler of capital outflow than for its modest potential to evade sanctions on any significant scale.
  • The only consistently pro-crypto constituency is the libertarians.

Consequently, considering the political coalition that can be arrayed against them, I think it would actually be very politically feasible to repress crypto and strangle it in its relative infancy, even now. However, this will be much harder if/when market cap goes from $1T to $10T, and outright impossible if there’s another OOM-tier leap. In that case, any AI control regime will have to be adapted to operate in a radically decentralized world. By its nature, this will be a much harder task than in today’s world, where such an initiative only really needs a trilateral agreement between the US, the EU, and China.

 

Having One’s Cake, Eating It Too

Again, I don’t support banning crypto. But that is because my assessment of the risk of ruin from AI is on the order of 10% at most, and probably closer to 1%. This is a separate topic, which I covered in a podcast with Roko, and on which I plan to expound at greater length in a subsequent post. However, I do view crypto as an extreme risk factor for AI – if AI itself is extremely risky. If you can persuade me that that is indeed the case, then unless you can also persuade me that the arguments in this post are cardinally wrong, I will have to become an anti-crypto activist.

In so doing, I will be siding against my own financial interests and against utopian visions of a far better Open Borders world of network states, while “allying” with people I dislike or despise. But it is what it is. In my view, many (not all) AI safetyists don’t appreciate that AI control – regardless of whether they are right or wrong on AI doom – will almost certainly result in vastly reduced global welfare, billions of extra deaths, and trillions more man-hours of drudgery. For an AI control regime to work long-term, there has to be radical technological curtailment across multiple sectors, including space exploration and robotics, and this restriction will have to be maintained for as long as said regime lasts (so, potentially in perpetuity). GPU imperialism may result in wars, even nuclear war. “Defectors” from this regime will have to be criminally prosecuted and jailed, maybe executed.

But ultimately, what is life but a series of tradeoffs? The world that actively chooses to enforce an AI control regime will almost certainly be a poorer, harsher, more corrupt and vicious world than one that doesn’t. Hopefully, it will have been worth it.

 

UBI Is Programmed. (Like It or Not)

I have been an advocate of UBI for a long time. Until relatively recently, I didn’t consider it imminent, short of some cardinal change in automation levels. But everything changed post-GPT-4, and I now consider it a near inevitability. I hope this is now as obvious to everyone as it is to me, but if not, let me briefly explicate.

One of two things is going to happen in our world this decade.

In one world, the AI revolution continues steaming ahead, to the point where we transition into the realms beyond, whatever they might be, and discover what the purpose of it all really was. In the intervening period, there is going to be extensive automation, fast productivity growth, and soaring inequality – with the end result that the “crisis of overproduction” will cease to be just a Marxist meme. As human labor devalues – including, even especially, that of the symbolic analysts who constitute the core of the politically hallowed “middle class” – purchasing power will have to be redistributed just to prevent AI rentiers from hogging almost all the surplus generated by the “reserve armies of labor” constituted by human-level AIs. (It’s very telling, and remarkably unremarked, that Sam Altman saw the writing on the wall many years in advance, and that Worldcoin is coming to fruition alongside the advanced LLMs.)

The other world, the one I will explore here, is one in which the AI safetyists win and impose hard AI controls, including hardware restrictions.

However, even if things otherwise remain the same, this world will be a very different one from the world we had before, simply by dint of elites having made the choice not to tread a certain path.

This path may or may not have led to ultimate doom (opinions differ). However, what is certain is that not taking that path will condemn billions of people to continued wage slavery, “rescuing” them from an alternate timeline in which they get to enjoy lives of arbitrary leisure, material abundance, and indefinite physical resilience. This decision will have been made by a narrow elite, one that’s overwhelmingly White, male, Western, and extremely privileged even by the standards of that reference class.

In this context, preventing UBI will simply be politically unrealistic.

The reactionaries will exclaim that it’s not financially sustainable, that it will cause hyperinflation, that it will cause plutocrats to emigrate to Dubai or Singapore, that it will devalue the “meaning” in people’s lives accruing from their “vocation” (of flipping burgers and filling in spreadsheets).

(1) The economic sustainability arguments rest on flawed assumptions, as I covered in my review of Andrew Yang’s book The War on Normal People. A minimal UBI of $1,000 per month in the US can be financed with a 10% VAT, and far more generous programs are possible if government spending as a share of GDP were to increase from its current ~40% level to the 70% level seen in that infamous dystopia – Sweden c. 1990. (A rough sketch of the arithmetic follows after these points.)

(2) Most rich people rather like their own countries and wouldn’t emigrate regardless. If anything, we consistently see rich people from poorer, lower-tax but lower-trust, less environmentally friendly, worse-rule-of-law countries emigrating to higher-tax polities that provision a higher quality of life in other spheres. Furthermore, tax havens can be strongarmed into raising their own taxes by a coalition that includes the US, the EU, and China. In a world in which such a coalition successfully caps compute, this kind of thing would be trivial to accomplish in comparison.

(3) As regards ideas around the sanctity of work, this just propagates the cruel fiction that most people’s jobs actually have any value or meaning; for most normies, it’s just about clocking in the hours so that they can put food on their table, provide for their family, maybe have a holiday or two per year. It’s probably not a coincidence that the people who make this argument tend to be ivory tower academics, think-tank ideologues, and op-ed wordcels who enjoy the extremely unusual privilege of making a living from jotting down whatever comes into their head. But so far as normal people are concerned, employment is often a soul-sucking experience and one that more and more of them with every generation are seeking to substitute with more meaningful activities, such as leveling up their RPG character in video games (and some of the safetyists even want to deprive them of their GPUs).
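To make the fiscal arithmetic in point (1) concrete, here is a minimal back-of-the-envelope sketch. The round figures it uses – roughly 250 million adult recipients, a ~$13 trillion annual consumption base for the VAT, and ~$25 trillion GDP – are my own illustrative assumptions for the purposes of the sketch, not numbers taken from Yang’s book.

```python
# Back-of-the-envelope sketch of the UBI arithmetic in point (1).
# All figures below are rough illustrative assumptions, not official statistics.

US_ADULTS = 250e6       # assumed number of adult recipients (~250 million)
UBI_PER_MONTH = 1_000   # minimal UBI of $1,000 per month
VAT_RATE = 0.10         # the 10% VAT discussed above
VAT_BASE = 13e12        # assumed taxable consumption base (~$13 trillion/year)
US_GDP = 25e12          # assumed GDP (~$25 trillion/year)

gross_cost = US_ADULTS * UBI_PER_MONTH * 12   # ~$3.0 trillion/year
vat_revenue = VAT_RATE * VAT_BASE             # ~$1.3 trillion/year

# A 10% VAT covers a large fraction of the gross cost; the remainder is
# usually assumed to come from consolidating existing welfare programs,
# clawbacks from higher earners, and growth effects.
print(f"Gross cost:  ${gross_cost / 1e12:.1f}T per year")
print(f"VAT revenue: ${vat_revenue / 1e12:.1f}T per year")
print(f"Shortfall:   ${(gross_cost - vat_revenue) / 1e12:.1f}T per year")

# For scale: moving government spending from ~40% to ~70% of GDP frees up
# roughly 0.30 * GDP per year, far more than even the gross cost of this UBI.
extra_fiscal_space = 0.30 * US_GDP
print(f"Extra fiscal space at 70% of GDP: ${extra_fiscal_space / 1e12:.1f}T per year")
```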

And anyhow this is not even the point, since this is not a fucking economic issue any more but a moral one.

The burden borne by AI safetyists is not a light one. You are killing hundreds of millions, possibly billions, prematurely. You are condemning billions more to wearisome toil and injury. You’re closing off multiple lines of other technological advancement if AI control is to be anything more than a temporary band-aid. You are doing it based on your conviction – which may or may not be true – that the alternative is a sufficiently high risk of human extinction. That is fine. I realize that most EA people want to “make up” for AI control by massively expanding spending on things like life extension and the genomics of IQ. That’s great, but in the real world, the elites are only going to give you the AI control part of the package, not the Biosingularity one. I will also state that technological accelerationists bear an analogous culpability, in that their proposed policies, or lack thereof, may result in AI catastrophe, from viciously lethal pandemics through to human extinction; everything in between Yudkowsky and Beff Jezos is ultimately just a matter of triangulating between those largely inscrutable tradeoffs.

But one of the minimal obligations that you as an AI safetyist owe to society is to acknowledge the tradeoff and to try to compensate the people you have deprived (wisely or not) of a brighter future. Taking away the promise of nirvana for the sake of some nebulous “Humanity First” principle, in a way that in all likelihood will have zilch to do with any democratic process, while opposing some minimum level of economic welfare and dignity for those same humans you supposedly champion, strikes me as wrong-headed under the best possible interpretation.

And in all likelihood it won’t even end well for you.

In one of my old posts on UBI, I speculated that it would become inevitable across the entire world once implemented by a “big” country:

So you know how in the Civilization strategy games, once the first country adopts Democracy, all the other countries start getting an unhappiness penalty for avoiding it?

I think it will be the same for UBI.

If UBI is a success in the US, other countries will come under overwhelming domestic pressure to adopt it as well.

UBI will soon be adopted, either in our own world, if AI acceleration continues, or in hypothetical alternate worlds where the AI revolution wasn’t strangled in its cradle.

The workers will learn of and study this alternate world.

And they will demand their fair due from the elites who, right or wrong, led them down the path of (relative) poverty and exploitation.

Unless you intend to transform the world into a global totalitarian dystopia, any sustainable long-term technological control regime will require at least the passive consent of its subjects. That consent and solidarity will be much harder to come by when a large percentage of them feel they were swindled out of their early-21C birthright as inheritors of a Singularity. Some of them will seek to actively undermine it. We can probably all agree that an AI superintelligence born of underground subterfuge and class resentment will be a less than optimal one in the universe of all possible AI superintelligences.

AI Safety Debate with Roko

 

Twitter Space to Clarify Why I’m Opting for Effective Acceleration (with Caveats) 

Even as GPT-3/4 have semi-mainstreamed it, the AI timelines discourse has, over the past several months, become sharply riven between AI safetyists and “effective accelerationists”.

In this Twitter Space from Feb 27, 2023, representatives of each side – Roko Mijic and myself, respectively – attempt to reconcile these philosophical differences.

I am going to write up my arguments in more detail sometime later, but they boil down to the following:

  • Bootstrapping God-like intelligence as a single agent is probably very difficult, if not unfeasible. Alignment is hard, and this applies to a malevolent AGI’s own agents as well, which will develop their own values and interests.
  • Maximizers are irrational and will be outcompeted by more rational agents, assuming they are given space to actually flourish and develop.
  • There are a multitude of risks inherent in creating a singleton (for that is what is needed) to manage AI safety in a coherent global fashion. These risks include:
    • Lost opportunities in productivity gains and poverty alleviation, which result in real damage to welfare on account of theoretical blog posts.
    • Strongly reduced chances of achieving radical life extension.
    • Long-term sector capture and AI safety’s transformation into a quasi-religious cult, as occurred with civilian applications of nuclear power and nuclear explosions.
    • The AI sector’s transformation into the noospheric equivalent of a monoculture ecosystem, which is inherently more fragile to shocks and probably voids any dubious benefits of restrictive AI regimes.
    • Potential stagnation and even retreat in rates of technological growth, due to long-term dysgenic trends.
    • This period will be one in which existential risks of other kinds will be in play and not necessarily at a constant background rate.
  • The very fact that we’re experiencing these observer moments suggests that they are extensively recalled or simulated in posthuman worlds, which in turn suggests we are on the cusp of a good singularity.