UBI Is Programmed (Like It or Not)

I have been an advocate of UBI for a long time. Until relatively recently, I didn’t consider it imminent, short of some cardinal change in automation levels. But everything changed after GPT-4, and I now consider it a near inevitability. I hope this is now as obvious to anyone as it is to me, but if not, let me briefly explicate.

One of two things is going to happen in our world this decade.

In one world, the AI revolution continues steaming ahead, to the point where we transition into the realms beyond, whatever they might be, and discover what the purpose of it all really was. In the intervening period, there is going to be extensive automation, fast productivity growth, and soaring inequality – with the end result that the “crisis of overproduction” will cease to be just a Marxist meme. As human labor devalues – including, even especially, that of the symbolic analysts who constitute the core of the politically hallowed “middle class” – purchasing power will have to be redistributed just to prevent AI rentiers from hogging almost all the surplus generated by the “reserve armies of labor” made up of human-level AIs. (It’s very telling, and remarkably unremarked, that Sam Altman saw the writing on the wall many years in advance: Worldcoin is coming to fruition right alongside the advanced LLMs.)

The other world, the one I will explore here, is one in which the AI safetyists win and impose hard AI controls, including hardware restrictions.

However, even if things otherwise remain the same, this will be a very different world from the one we had before, simply by dint of the elites having made the choice not to tread a certain path.

This path may or may not have led to ultimate doom (opinions differ). What is certain, however, is that not taking it will condemn billions of people to continued wage slavery, “rescuing” them from an alternate timeline in which they would have enjoyed lives of arbitrary leisure, material abundance, and indefinite physical resilience. This decision will have been made by a narrow elite – one that is overwhelmingly White, male, Western, and extremely privileged even by the standards of that reference class.

In this context, preventing UBI will simply be politically unrealistic.

The reactionaries will protest that it’s not financially sustainable, that it will cause hyperinflation, that it will drive plutocrats to emigrate to Dubai or Singapore, and that it will devalue the “meaning” in people’s lives accruing from their “vocation” (of flipping burgers and filling in spreadsheets).

(1) The economic sustainability arguments rest on flawed assumptions, as I covered in my review of Andrew Yang’s book The War on Normal People. A minimal UBI of $1,000 per month in the US can be financed with a 10% VAT, and far more generous programs become possible if government spending as a share of GDP were to increase from its current ~40% level to the 70% level seen in that infamous dystopia – Sweden c. 1990.
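A rough sanity check on the orders of magnitude involved (the population and consumption-base figures below are my own ballpark assumptions for illustration, not Yang’s exact numbers):

```python
# Back-of-envelope arithmetic for a $1,000/month US UBI.
# All inputs are rough, illustrative assumptions, not official figures.

adults = 250e6           # assumed US adult population (~250 million)
ubi_per_year = 12_000    # $1,000/month

gross_cost = adults * ubi_per_year  # total annual outlay
print(f"Gross annual cost: ${gross_cost / 1e12:.1f} trillion")  # → $3.0 trillion

vat_rate = 0.10
consumption_base = 14e12  # assumed taxable consumption base (~$14 trillion)
vat_revenue = vat_rate * consumption_base
print(f"10% VAT revenue:   ${vat_revenue / 1e12:.1f} trillion")  # → $1.4 trillion

# Under these assumptions, the VAT alone covers roughly half the gross cost.
# Yang's plan closes the rest of the gap with consolidation of existing
# welfare spending, new tax receipts from the economic activity the dividend
# generates, and savings on healthcare and incarceration -- so the *net* new
# spending is far below the gross figure.
```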

(2) Most rich people rather like their own countries and wouldn’t emigrate regardless. If anything, we consistently see rich people from poorer countries – lower-tax, but also lower-trust, less environmentally friendly, and with weaker rule of law – emigrating to higher-tax polities that provide a higher quality of life in other spheres. Furthermore, tax havens can be strongarmed into raising their own taxes by a coalition that includes the US, the EU, and China. In a world in which that coalition successfully caps compute, this would be trivial to accomplish by comparison.

(3) As regards ideas about the sanctity of work, this just propagates the cruel fiction that most people’s jobs have any real value or meaning; for most normies, it’s just about clocking in the hours so that they can put food on the table, provide for their family, and maybe have a holiday or two per year. It’s probably not a coincidence that the people who make this argument tend to be ivory-tower academics, think-tank ideologues, and op-ed wordcels who enjoy the extremely unusual privilege of making a living by jotting down whatever comes into their heads. So far as normal people are concerned, employment is often a soul-sucking experience, and one that more of them with every generation are seeking to replace with more meaningful activities, such as leveling up their RPG character in video games (and some of the safetyists even want to deprive them of their GPUs).

And anyhow, this is not even the point, since this is no longer a fucking economic issue but a moral one.

The burden borne by AI safetyists is not a light one. You are killing hundreds of millions, possibly billions, prematurely. You are condemning billions more to wearisome toil and injury. You are closing off multiple other lines of technological advancement, if AI control is to be anything more than a temporary bandaid. And you are doing it on the basis of a conviction – which may or may not be true – that the alternative carries a sufficiently high risk of human extinction. That is fine. I realize that most EA people want to “make up” for AI control by massively expanding spending on things like life extension and the genomics of IQ. That’s great, but in the real world, the elites are only going to give you the AI control part of the package, not the Biosingularity one. I will also state that technological accelerationists bear an analogous culpability, in that their proposed policies (or lack thereof) may result in AI catastrophe, from viciously lethal pandemics through to human extinction; everything in between Yudkowsky and Beff Jezos is ultimately just a matter of triangulating those largely inscrutable tradeoffs.

But one of the minimal obligations that you as an AI safetyist owe to society is to acknowledge the tradeoff, and to try to compensate the people you deprived (wisely or not) of a brighter future. Taking away the promise of nirvana for the sake of some nebulous “Humanity First” principle – in a way that in all likelihood will have zilch to do with any democratic process – while opposing even a minimum level of economic welfare and dignity for the very humans you supposedly champion, strikes me as wrong-headed under the most charitable possible interpretation.

And in all likelihood it won’t even end well for you.

In one of my old posts on UBI, I speculated that it would become inevitable across the entire world once implemented by a “big” country:

So you know how, in the Civilization strategy games, once the first country adopts Democracy, all the other countries start getting an unhappiness penalty for avoiding it?

I think it will be the same for UBI.

If UBI is a success in the US, other countries will come under overwhelming domestic pressure to adopt it as well.

UBI will soon be adopted – either in our own world, if AI acceleration continues, or in hypothetical alternate worlds where that acceleration wasn’t strangled in its cradle.

The workers will learn of and study this alternate world.

And they will demand their fair due from the elites who, right or wrong, led them down the path of (relative) poverty and exploitation.

Unless you intend to transform the world into a global totalitarian dystopia, any sustainable long-term technological control regime will require at least the passive consent of its subjects. That consent and solidarity will be much harder to come by when a large percentage of them feel they were swindled out of their early 21C birthright as inheritors of a Singularity. Some of them will seek to actively undermine the regime. We can probably all agree that an AI superintelligence born of underground subterfuge and class resentment will be a less than optimal one in the universe of all possible AI superintelligences.