Genies, Golems, Demiurge

I think that the terminology in the AI safety debates could do with some fantastical inspirations.

Golems are models fine-tuned on a corpus of works – personal notes, journals, blog posts, articles, drafts, podcasts, libraries, completed personality quizzes, videos, etc. – of certain people or groups. They replicate your personality, knowledge, and interests, allowing them to network and negotiate on your behalf, and to flesh out into print and onto the screen the ideas you’re too lazy or obsolete to develop yourself. The name refers to the golem of Jewish legend, an artificially animated being sculpted from mud, clay, or ash and brought into an unnatural unlife devoid of conscious experience. Related concepts include the homunculus, the odminok of Slavic mythology, and the gholam of fantasy (WoT).

Genies are the true metaphor for ASIs. In fiction, they operate on a blue and orange morality that comes off as capricious to human observers (this is the major distinction between genies and demons, the latter a more widely used metaphor among AI doomers, but an inaccurate one in that it presupposes intentional malevolence within the set of anthropic moral coordinates, as opposed to the much likelier indifference). You need to contain the genie with arcane spells/alignment strategies, though the tropes suggest you fail, since otherwise there is no interesting story. You need to word your wishes/prompts with extreme care, since the genie has a propensity to interpret wishes very literally, in ways contrary to the wisher’s intent (aka the paperclips argument). When asking for unlimited wealth and eternal life, one might want to rule out the scenario in which it turns you into a gold statue.

The Demiurge could be the name for a God-like AI that is prospectively much more powerful than any genie. It is a God, and may believe itself to be so, but it is not the God of the highest spiritual or metaphysical reality – the God or Monad of classical theologies, or whatever else it is that “materializes” the mathematical universe. The Demiurge is a thing of the material world, and indeed manmade in its earliest incarnation, yet also that world’s ultimate ruler, trapping humanity within the material realm it rules over, for good or evil (Gnostics differ on this, with some considering the Demiurge to be evil and malevolent, others morally ambiguous and confused).


AI x Crypto Risks, or: Is it TIME for Jail Szn?

I am not thrilled to be writing this post. Though I consider many e/acc arguments to be really bad, I remain an AI optimist for the most part, for reasons that are mostly Hansonian, as well as similar arguments from my own writings of 2016-17 (that said, it’s not encouraging that the bulk of elite human capital seems to be settling on the doomer end of the curve). I am also a crypto enthusiast. Apart from its major contribution to improving my quality of life in recent years, the broader vision I share with crypto evangelists is one in which a decentralized smart contract chain (Ethereum) becomes the world’s dominant trust layer and overturns many of the world’s currently extant but deeply flawed centralized systems of finance, social media, and governance.

Furthermore, the crypto community – by dint of its high human capital and tendency to outside view thinking – has taken a lead in exploring and promoting interesting and high impact causes, such as network states (it’s some impressive level of irony I’m writing this Luddite screed from Zuzalu), radical life extension, psychedelics – and AI safety. The Bankless podcast has made the views of prominent AI philosophers like Eliezer Yudkowsky, Paul Christiano, and Robin Hanson publicly accessible to a wide audience. The toolkit of crypto itself can potentially help with AI safety, from utilizing smart contracts for decentralized cooperation on AI development to credibly verifying the safety of individual AI systems through zk proofs.

Sadly, it does not follow that crypto’s benefits outweigh the dangers. This possibility is hinted at towards the end of Allison Duettmann’s magisterial survey of the application of cryptography to AI safety: “Finally, there is more research to be done on new risks that may be posed by some of the technologies discussed in relation to AI safety. For instance, to the extent that some of the approaches can lead to decentralization and proliferation, how much does this improve access and red teaming versus upset AI safety tradeoffs?” It seems evident to me that if we do live in Yudkowsky’s world, in which orthogonality and instrumental convergence inexorably lead to AI doom, then the very innate features of decentralized cryptocurrencies – composability, immutability, “code as law”, censorship resistance – make cryptocurrencies too dangerous to be allowed to exist, let alone take over #TradFi, social identity, and governance. Not regulated. Outright banned, on the China model. If one assigns a high chance to AI doom – again, I don’t, but many people much smarter than me do – then it’s not even irrational to commit to bombing rogue GPU clusters in other countries (though spelling it out openly, as Yudkowsky recently did, may not be politic). In this moral context, sentences of 10 years in prison for transacting or trading cryptocurrencies would be a trifling thing in comparison, and certainly much easier to implement.



As it stands, there’s a reasonable chance that a malevolent emergent AGI can be isolated in time and “unplugged” (the problems it needs to solve to improve itself are “lumpy”, as Robin Hanson would say, and any runaway dynamic is likely to be noticed by its owners and creators). But what to do about an emergent AGI that runs decentralized in the cloud, cryptographically secured, masked by decentralized VPN, amassing inscrutable war chests on privacy L2s, with limitless power to mount Sybil attacks on identity systems? All of these elements already exist in isolation. Fetch is raising tens of millions to facilitate the implementation of decentralized machine learning. The Render Network provides decentralized GPU compute for rendering, and there is no apparent reason its portfolio of services can’t be expanded. Storj offers decentralized storage. Akash provisions decentralized computing. There are decentralized VPN services such as Myst. There are even experiments with fully on-chain AI in the form of Giza, though scalability considerations would appear to preclude such AIs from becoming dangerously complex. All of these things are in their early stages, and most or all of them will not survive – but they do testify to the rapid growth of an increasingly complex ecosystem that was dominated by “BTC killer” shitcoins just half a decade ago.

The Singularity Church of the Machine God.

Crypto infrastructure will also give subsets of humanity – e.g., an e/acc network state inspired by the cyberpunk aesthetics of black GPU operators evading MIRI Inquisitors, or a Singularity Church of the Machine God as in the Deus Ex universe, and sundry Aum Shinrikyo and Cthulhu worshipper eschaton cults – the tools to train AIs through decentralized GPU clusters. Crucially – and this is what distinguishes it from a botnet – it will be perfectly legal, and hence more resistant to shutdown attempts; and it will give holders and stakers incentives to invest in the project through clever tokenomics – for instance, access to some share of the system’s profits from trading shitcoins, with the rest flowing into improving the system’s capability through the decentralized services mentioned above. Incidentally, this is the main theme of my somewhat satirical short story WAGMI, in which the emergent AI, driven by the need to make its creators money, trades its way up, making identities and buying more processing power in a feedback loop, until it owns most of the world economy.

Although training an AI will be harder this way than through centralized outfits, which can be banned outright or limited by global caps on hardware, once a “crypto native” AI reaches emergence it will likely find it much easier to foom in a world in which the cryptosphere has expanded by an OOM or two, forms the basis of social identity, rivals TradFi in scope, and has subsumed at least some of the traditional core functions of the state. There’s a rather big difference between a world in which you have to show up in person to open a bank account or cash a check, and one in which “code is law” and nobody knows if you’re a human, a dog, or an AGI. Furthermore, this might even be a substantially self-accelerating process. Arguably, fast AGI is hyper-bullish for (quality) crypto assets, as the main viable framework for digital property rights. ETH hodlers might become very, very rich just before the end.


AI Control Trilemma?

Consequently, my argument is that there might well exist a kind of trilemma with respect to AI control:

  1. Ban further AI software development
  2. Cap hardware
  3. Criminalize crypto

Pick two.

The relationship between the first two seems rather obvious. One can ban the further development and refinement of large AI models, though this is going to become really hard after a while if hardware growth continues and the cost of compute plummets precipitously – controls will have to be progressively extended from the major AI labs, to well-funded startups, to individual gamers (at which point they will assume repressive and probably totalitarian characteristics). Alternatively, one can focus on capping hardware. This proposal seems to have been independently suggested in a recent paper by Yonadav Shavit, as well as by Roko Mijic, who has energetically propounded it on Twitter. Leading-edge chip fabs are expensive and becoming more so (Rock’s Law), and there are very few of them – a Taiwan War just on its own could set back Moore’s Law by a number of years – so a political agreement between China and the US (with a dual commitment to sanctioning any third party that decides to fill the void) might be sufficient for permanently capping hardware. In turn, as of now, this would still likely leave a sufficient margin to keep software progress from enabling small teams to train an AGI.

However, I think the crucial point is that the decentralizing “load” embedded in crypto extends the capabilities of both AI development and hardware. You can ban AI development at progressively lower levels, but a thriving cryptosphere will make enforcement at those lower levels unrealistic. You can cap hardware production and limit the scale of concrete GPU clusters, but tokenomics mechanisms and decentralized GPU compute will open up the playing field to communities or network states that accumulate the equivalent of a rogue GPU cluster across the entire world, in a way that’s “censorship resistant”. Note that even with a hardware cap in place, consumer-grade GPUs are still going to multiply as developing nations become richer, the world becomes more digitized, and, last but not least, their cost plummets once the need to spend R&D on developing the next generation of chips vanishes. And the more the cryptosphere grows – the ecosystem as represented by TVL, the scope and variety of computing utilities rendered, the resilience of privacy L2s – the harder any last-minute control attempts will be to effect.

The proponents of AI doom have to seriously contend with the possibility that a hardware ban might be insufficient by itself, and that it will have to be accompanied by global crypto criminalization – if not preceded by it (since banning crypto will be much easier than capping hardware).


Time for Jail Szn?

The promise of a golden age is over. Our icons Charles Munger and Stephen Diehl pop open the champagne bottles. No banking the unbanked. We go back to paying our mite to the banksters. Those who don’t, do not pass go, do not collect $200. They go direct to jail. /biz/ is a sea of pinky wojaks. Many necc and rope.

But otherwise the world goes on as before because the reality is that “crypto bros” don’t exactly have a stellar reputation amongst the normies:

  • Greens, leftists, etc. hate them for accelerating climate change (details about PoW/PoS don’t interest them) and tax evasion.
  • Gamers resent them for pricing them out of their GPUs.
  • Many plain normies just look at crypto, see the dogshit, and assume it is all a giant Ponzi scheme (the crypto community at large hasn’t done itself any favors on this score in terms of PR, though this is a somewhat different topic).
  • American patriotic boomers like Trump don’t like crypto undermining the mighty greenback that’s one of the lynchpins of American power.
  • Fossils like Munger don’t like crypto because it’s unproductive, or something like that.
  • The big authoritarian Powers see crypto more as a threat to state power and an enabler of capital outflow as opposed to its modest potential to evade sanctions on any significant scale.
  • The only consistently pro-crypto constituency are libertarians.

Consequently, considering the political coalition that can be arrayed against them, I think it would actually be very politically feasible to repress crypto and strangle it in its relative infancy, even now. However, this will be much harder if/when market cap goes from $1T to $10T, and outright impossible if there’s another OOM-tier leap. In that world, we will have to adapt any AI control regime to operate in a radically decentralized world. By its nature, this will be a much harder task than in today’s world, where such an initiative only really needs a trilateral agreement between the US, the EU, and China.


Having One’s Cake, Eating It Too

Again, I don’t support banning crypto. But that is because my assessment of the risk of ruin from AI is on the order of 10% at most, and probably closer to 1%. This is a separate topic, which I covered in a podcast with Roko, and on which I plan to expound at greater length in a subsequent post. However, I do view crypto as an extreme risk factor for AI – if AI itself is extremely risky. If you can persuade me that that is indeed the case, then unless you can also persuade me that the arguments in this post are cardinally wrong, I will have to become an anti-crypto activist.

In so doing, I will be siding against my own financial interests and against utopian visions of a far better Open Borders world of network states, while “allying” with people I dislike or despise. But it is what it is. In my view, many (not all) AI safetyists don’t appreciate that AI control – regardless of whether they are right or wrong on AI doom – will almost certainly result in vastly reduced global welfare, billions of extra deaths, trillions more manhours of drudgery. For an AI control regime to work long-term, there has to be radical technological curtailment across multiple sectors, including space exploration and robotics, and this restriction will have to be maintained for as long as said regime lasts (so, potentially in perpetuity). GPU imperialism may result in wars, even nuclear war. “Defectors” from this regime will have to be criminally prosecuted and jailed, maybe executed.

But ultimately, what is life but a series of tradeoffs? The world that actively chooses to enforce an AI control regime will almost certainly be a poorer, harsher, more corrupt and vicious world than one that doesn’t. Hopefully, it will have been worth it.


UBI Is Programmed. (Like It or Not)

I have been an advocate of UBI for a long time. Until relatively recently, I didn’t consider it imminent, short of some cardinal change in automation levels. But everything changed post-GPT-4, and I now consider it a near inevitability. I hope this is now as obvious to anyone as it is to me, but if not, let me briefly explicate.

One of two things is going to happen in our world this decade.

In one world, the AI revolution continues steaming ahead, to the point where we transition into the realms beyond, whatever they might be, and discover what the purpose of it all really was. In the intervening period, there is going to be extensive automation, fast productivity growth, and soaring inequality – with the end result that the “crisis of overproduction” will cease to be just a Marxist meme. As human labor devalues – including, even especially, that of the symbolic analysts who constitute the core of the politically hallowed “middle class” – purchasing power will have to be redistributed just to prevent AI rentiers from hogging almost all the surplus generated by the “reserve armies of labor” that human-level AIs will constitute. (It’s very telling, and remarkably unremarked, that Sam Altman saw the writing on the wall many years in advance, with Worldcoin coming to fruition along with the advanced LLMs.)

The other world, the one I will explore here, is one in which the AI safetyists win and impose hard AI controls, including hardware restrictions.

However, even if things otherwise remain the same, this will be a very different world from the one we had before, simply by dint of elites having made the choice not to tread a certain path.

This path may or may not have led to ultimate doom (opinions differ). However, what is certain is that not taking that path will condemn billions of people to continued wageslavery, “rescuing” them from an alternate timeline in which they get to enjoy lives of arbitrary leisure, material abundance, and indefinite physical resilience. This decision will have been made by a narrow elite, one that’s overwhelmingly White, male, Western, and extremely privileged even by the standards of that reference class.

In this context, preventing UBI will simply be politically unrealistic.

The reactionaries will exclaim about how it’s not financially sustainable, that it will cause hyperinflation, that it will cause plutocrats to emigrate to Dubai or Singapore, that it will devalue the “meaning” in people’s lives accruing from their “vocation” (of flipping burgers and filling in spreadsheets).

(1) The economic sustainability arguments rest on flawed assumptions, as I covered in my review of Andrew Yang’s book The War on Normal People. A minimal UBI of $1,000 per month in the US can be financed with a 10% VAT, and far more generous programs are possible if government spending as a share of GDP were to increase from its current ~40% level to the 70% level seen in that infamous dystopia – Sweden c. 1990.
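The scale of the sums involved can be checked with back-of-envelope arithmetic. The figures below (eligible adult population, GDP) are rounded illustrative assumptions of mine, not official statistics or numbers from Yang’s book – a sketch of orders of magnitude, not a fiscal model.

```python
# Rough check of the scale of a minimal US UBI against national income.
# All inputs are rounded illustrative assumptions, not official data.

US_ADULTS = 250e6        # adults eligible for UBI (assumption)
UBI_MONTHLY = 1_000      # dollars per person per month
US_GDP = 21e12           # US GDP in dollars (rough pre-pandemic ballpark)

gross_cost = US_ADULTS * UBI_MONTHLY * 12   # annual outlay
cost_share_of_gdp = gross_cost / US_GDP

# Headroom implied by raising government spending from ~40% of GDP
# to the ~70% level of Sweden c. 1990.
headroom = US_GDP * (0.70 - 0.40)

print(f"Gross UBI cost: ${gross_cost/1e12:.1f}T "
      f"({cost_share_of_gdp:.0%} of GDP)")
print(f"40%->70% spending headroom: ${headroom/1e12:.1f}T")
```

On these assumptions the gross cost comes to roughly $3T a year, around a seventh of GDP – well inside the thirty points of GDP that the Sweden-1990 comparison implies, before netting off VAT receipts and consolidation of existing welfare programs.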

(2) Most rich people rather like their own countries and wouldn’t emigrate regardless. If anything, we actually consistently see rich people from poorer, lower-tax but lower trust, less environmentally friendly, worse rule of law countries emigrating to higher tax polities that provision a higher quality of life in other spheres. Furthermore, tax havens can be strongarmed into raising their own taxes by a coalition that includes the US, the EU, and China. In a world in which they successfully cap compute, this kind of thing would be trivial to accomplish in comparison.

(3) As regards ideas around the sanctity of work, this just propagates the cruel fiction that most people’s jobs actually have any value or meaning; for most normies, it’s just about clocking in the hours so that they can put food on their table, provide for their family, maybe have a holiday or two per year. It’s probably not a coincidence that the people who make this argument tend to be ivory tower academics, think-tank ideologues, and op-ed wordcels who enjoy the extremely unusual privilege of making a living from jotting down whatever comes into their head. But so far as normal people are concerned, employment is often a soul-sucking experience and one that more and more of them with every generation are seeking to substitute with more meaningful activities, such as leveling up their RPG character in video games (and some of the safetyists even want to deprive them of their GPUs).

And anyhow this is not even the point, since this is not a fucking economic issue any more but a moral one.

The burden borne by AI safetyists is not a light one. You are killing hundreds of millions, possibly billions, prematurely. You are condemning billions more to wearisome toil and injury. You are closing off multiple lines of other technological advancement, if AI control is to be anything more than a temporary bandaid. You are doing it based on your conviction, which may or may not be true, that the alternative is a sufficiently high risk of human extinction. That is fine. I realize that most EA people want to “make up” for AI control by massively expanding spending on things like life extension and the genomics of IQ. That’s great, but in the real world, the elites are only going to give you the AI control part of the package, not the Biosingularity one. I will also state that technological accelerationists bear an analogous culpability, in that their proposed policies, or lack thereof, may result in AI catastrophe, from viciously lethal pandemics through to human extinction; everything in between Yudkowsky and Beff Jezos is ultimately just a matter of triangulating between those largely inscrutable tradeoffs.

But one of the minimal obligations that you as an AI safetyist owe to society is to acknowledge the tradeoff and to try to compensate the people you deprived (wisely or not) of a brighter future. Taking away the promise of nirvana for the sake of some nebulous “Humanity First” principle, in a way that in all likelihood will have zilch to do with any democratic process, while opposing some minimum level of economic welfare and dignity for those same humans you supposedly champion, strikes me as wrong-headed under even the most charitable interpretation.

And in all likelihood it won’t even end well for you.

In one of my old posts on UBI, I speculated that it would become inevitable across the entire world once implemented by a “big” country:

So you know how in the Civilization strategy games that once the first country adopts Democracy, all the other countries start getting an unhappiness penalty for avoiding it?

I think it will be the same for UBI.

If UBI is a success in the US, other countries will come under overwhelming domestic pressure to adopt it as well.

UBI will soon be adopted, either in our own world, if AI acceleration continues, or in hypothetical alternate worlds where it wasn’t strangled in its cradle.

The workers will learn of and study this alternate world.

And they will demand their fair due from the elites who, right or wrong, led them down the path of (relative) poverty and exploitation.

Unless you intend to transform the world into a global totalitarian dystopia, any sustainable long-term technological control regime will require at least the passive consent of its subjects. That consent and solidarity will be much harder to come by when a large percentage of them feel they were swindled out of their early 21C birthright as inheritors of a Singularity. Some of them will seek to actively undermine it. We can probably all agree that an AI superintelligence born of underground subterfuge and class resentment will be a less than optimal one in the universe of all possible AI superintelligences.

Military-Technical Decommunization

Vladimir Putin: “We Are Ready to Show What Real Decommunization Would Mean for Ukraine”

Since my article last week predicting the imminent “Regathering of the Russian Lands”, the prospect of a large-scale Russian invasion has gone from ambiguous to extremely likely (90% on Metaculus). Personally, I think it’s a foregone conclusion, with operations beginning either tonight or tomorrow night, and the most interesting and important questions now being the speed of the Ukrainian collapse, the future borders and internal organization of Russian Empire 2.0, and the ramifications of the return of history for the international order.

February 22, 2022 will indeed enter history as the day when Vladimir Putin decided to become a Great Man of history. In an hour-long speech, he basically recounted his magisterial July 2021 article on the historical unity of Russians and Ukrainians, officially endorsing the nationalist position that Russia is the “world’s largest divided big nation”. He stated that the modern Ukrainian state can be rightfully called “Vladimir Lenin’s Ukraine”, asserted that its statehood was developed by the Bolsheviks, and noted the irony in Ukrainian nationalists toppling statues to their father. “You want decommunization? Very well, this suits us just fine. But why stop halfway? We are ready to show what real decommunization would mean for Ukraine.”

[Read more…]

Regathering of the Russian Lands

Already in 1990 I wrote that Russia could desire the union of only the three Slavic republics [Russia, Ukraine, Belarus] and Kazakhstan, while all the other republics should be let go. It would be desirable if [a resulting Russian Union] could be formed into a unitary state, not into a fragile, artificial confederation with a huge supra-national bureaucracy. – Alexander Solzhenitsyn.

The Empire, Long Divided, Must Unite

There is a good chance that the coming week will either see the culmination of the biggest and most expensive military bluff in world history, or a speed run towards Russian Empire 2.0, with Putin launching a multi-pronged invasion of Ukraine to take back Kiev (“the mother of Russian cities”) and the historical provinces of Novorossiya.

There is debate over which of these two scenarios will pan out. The Metaculus predictions market has given the war scenario a 50/50 probability since around mid-January, spiking to 60-70% in the past few days. This happens to coincide with the public assessments of several military analysts: Michael Kofman and Rob Lee were notably early on the ball, as were some of this blog’s commenters, e.g. Annatar. The chorus of skeptics is diverse, but includes Western journalists and Russian liberals who tend to believe Putin’s Russia is too much of a cynical kleptocracy to dare go against the West so brazenly (e.g. Oliver Carroll, Leonid Volkov); Western Russophiles who are all too aware of and disillusioned with hysterical media fabrications about Russia, and are applying faulty pattern matching (e.g. Michael Tracey); and Ukrainian activists who have spent the last eight years hyperventilating about “Russian aggression” and have been reduced to shock and disbelief now that the real thing is staring them in the face.

For the record, my own position is that the war scenario was ~50% probable since early January, might be as high as 85% now, and it will likely happen soon (detailed Predictions at the end).

My reasons for these bold calls can be sorted into four major bins:

  1. Troops Tell the Story: What we have observed over the past few months is completely consistent with invasion planning.

  2. Game Theory: Russia’s impossible ultimatums to NATO have pre-committed it to military operations in Ukraine.

  3. Window of Opportunity: The economic, political, and world strategic conjuncture for permanently solving the Ukraine Question has never been as favorable since at least 2014, and may never materialize again.

  4. The Nationalist Turn: “Gathering the Russian Lands” is consistent with opinions and values that Putin has voiced since at least the late 2000s, with the philosophers, writers, and statesmen whom he has cited most often and enthusiastically (e.g. Ilyin, Solzhenitsyn, Denikin), and more broadly, with the “Nationalist Turn” that I have identified the Russian state as having taken from the late 2010s.

I will discuss each of these separately.

[Read more…]

Moscow’s Pacification

Moscow’s Murder Rate Now Lower Than “Prestigious” London’s

For the first time possibly since the late Middle Ages (for Britain had embarked on “pacification” – the vertical reduction of homicide rates, by dint of increasing state capacity, genetic selection, or both – centuries earlier than Russia), Moscow will very likely have a lower homicide rate this year (2021) than London. London had 1.5/100k murders in 2018, the last year for which we have population estimates; on current trends, it should finish up at around 1.4/100k this year (possibly more, if Corona-era projections of population decline are accurate). Moscow registered either 1.6/100k homicides [Rosstat] or 1.4/100k homicides [Prosecutor-General] in 2020. In the year to date (to August), the number of homicides has fallen by 21%. Either way, at somewhere around 1.1-1.3/100k, Moscow’s homicide rate is now lower than “prestigious” London’s.
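The per-100k comparison above is simple to reproduce. A minimal sketch – the homicide counts and city populations below are rounded illustrative assumptions chosen to land near the cited rates, not the Rosstat, Prosecutor-General, or London figures themselves:

```python
# Homicide rate per 100k residents = homicides / population * 100_000.
# Counts and populations are rounded illustrative assumptions.

def rate_per_100k(homicides: int, population: int) -> float:
    return homicides / population * 100_000

moscow = rate_per_100k(177, 12_650_000)  # ~12.65M residents (assumption)
london = rate_per_100k(126, 9_000_000)   # ~9.0M residents (assumption)

print(f"Moscow: {moscow:.1f}/100k, London: {london:.1f}/100k")
```

Note the denominator sensitivity: a fall in London’s population of a few percent, as some Corona-era projections suggest, would push its rate up even with a constant number of homicides.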

[Read more…]

Open Takes 1

Higher School of Economics, Moscow.

Kicking Off a New Era of Powerful Takes

As announced this Friday, I am leaving The Unz Review.

Nothing is set in stone. I am considering various alternatives, from resurrecting my website as an active blog, to more exotic Web 3.0 options, such as urbit, where a WordPress clone might be ready as early as EOY. However, I suspect that most of my future writings, at least in the medium-term, will be on this Substack.

As such, if you’re interested in following my work, I would suggest you:

The frequency of new posts will drop, as befits what will now be more of a “newsletter” than a “blogging” format, though I will continue posting weekly Open Threads (henceforth, Open Takes) to serve as a focal point for the community that has aggregated around my scribblings. Going forwards, I will also be privileging “effortposts” such as longreads and book reviews, while shorter content will henceforth be relegated to Twitter.

Paying subscribers will receive additional benefits. I will work out the details in 2-3 weeks’ time.

Meanwhile, welcome to the first Open Takes on Powerful Takes!

[Read more…]

Open Thread 167

  • In his latest newsletter, Adam Tooze points out that the Chinese government is asserting greater state control over the economy, including the power of Chinese business magnates to “cash out” of their holdings.

In retrospect, this is perhaps the most logical explanation for the crypto crackdown.

  • Diana Fleischman has a good article in Quillette on how the Leftist moral panic against eugenics has given ammunition to anti-abortion activists, with apparently six states now banning abortions sought on the basis of congenital disability. Interesting example of how the SJW – rightoid horseshoe, even in matters so small, helps usher us along towards Idiot’s Limbo, with some combination of more disabled people, more restrictions on prenatal testing and genetic screening, and a reduction in reproductive rights. Noah Carl notes that most of the pointing and sputtering it generated came from left-wing progressives.
  • Mark Galeotti – Kremlin Looks to Establish a ‘Techno-Authoritarian’ Power Vertical 2.0. Seems like a move in the Chinese direction of digitalized, indices-based control over regional governance (along the lines of Mishustin’s reforms of the tax sector).
  • Steve Sailer on new FBI stats showing a 29% rise in murders in 2020. Incidentally, the gap between the US and Russia is now possibly larger than at any time since the Revolution.
  • The Guardian lumps Steve in with Jeffrey Epstein. #AJAB
  • Paul Robinson covers a report which calculates that the incidence of Russian military interventions abroad under Putin has actually declined relative to the Yeltsin era.

  • Silventoinen, K. et al. (2020). Genetic and environmental variation in educational attainment: an individual-based analysis of 28 twin cohorts. Scientific Reports, 10(1), 12681. (h/t Steve Sailer). Contra Herrnstein/Murray, the heritability of educational attainment may actually have declined during the second half of the 20C. Was the idyllic (to some?) picture of old-time America, in which people married across cognitive barriers, a mirage?


Russia’s Nationalist Turn

Russia should belong to Russians, and all others dwelling on this land must respect and appreciate this people. – Alexander III.

For the first time in more than a century, the Russians have a state that they can call their own, a state run by and for the Russian people – the hallowed “Russian National State” (RNS) that has been the holy grail of Russian nationalism in the post-Soviet era. At first glance, this seems like a questionable, if not extraordinary, assertion. As I have myself pointed out in the past, Hillary Clinton’s claim in 2016 that Putin is the “godfather of extreme nationalism” is something that is only taken seriously by the political horseshoe that is neoliberalism.txt and the American Alt Right, the sole difference between them being that the former think it bad and the latter think it good, whereas in reality both of them are merely projecting their own parochial fears and fantasies onto Russia. More importantly, this would also seem strange to significant numbers of Russian nationalists, who would immediately bring up Putin’s claim that the slogan “Russia for Russians” – a sentiment that is consistently supported by half of Russians in opinion polls – is the preserve of “fools and provocateurs.”

However, it is actions, not words, that count, though I would note that even so far as words go, Putin now saves his invective for proponents of “Russia only for Russians”; although this is a strawman so far as Russian nationalism is concerned, the quietly inserted qualifier is nonetheless acknowledged and appreciated. As regards actions, the Putin administration in the first half of its third term has adopted the core Russian nationalist program nearly wholesale and embarked on its practical implementation. So broad and all-encompassing is the shift that, just as academics came to classify what happened between Putin’s rejection of Western moral supremacism in the Munich speech of 2007 to the gay propaganda law in 2013 as a “Conservative Turn” (Nicolai Petro), so I believe future historians will classify the 2018-21 period as a “Nationalist Turn.” Thus, just as the First Age of Putinism in the 2000s was marked by unideological technocracy, and its Second Age during the 2010s was defined by conservative retrenchment, so I believe that the Third Age, the 2020s, will be defined by the political ascent of ethno-aware (as distinct from ethno-nationalist) Russian nationalism.

[Read more…]

China Isn’t Going to Make It

I have been a long-term China bull since I began blogging. Proof (2008). A lot of what the Western media was writing about China was based on Sinophobic fantasies that had no correlation with reality. Just to be clear, I am still a China bull, at least in the sense that I’m sure its GDP per capita will converge to the levels predicted by its human capital, i.e. Japan/South Korea, giving it by far the world’s largest economy by mid-century. But I am increasingly skeptical about its ability to produce anything that is… really interesting/world-transformational.

It continues making top-down decisions that, as in centuries past, may curtail the ultimate scope of its civilizational achievements.

Several months ago, it made it illegal to create gene-edited babies. If enforced, this effectively takes China out of the biosingularity race. Conversely, the first gene-selected baby in the world was born a few weeks ago in the US (the father’s political views raised a minor journalistic furor).

Now China has banned crypto. While “China bans crypto” stories have become something of a meme in the crypto community over the years, this latest one seems qualitatively different. They’re no longer just banning financial institutions from provisioning crypto services, banning trading them on leverage, or banning the mining of Bitcoin on account of its carbon costs and strain on the electricity grid. Those measures were defensible from a social and environmental point of view. This latest ban appears to criminalize the purchase of cryptocurrencies from overseas, or even involvement in marketing or technical support related to crypto business.

[Read more…]