Genies, Golems, Demiurge

I think that the terminology in the AI safety debates could do with some fantastical inspiration.

Golems are models fine-tuned on a corpus of works – personal notes, journals, blog posts, articles, drafts, podcasts, libraries, completed personality quizzes, videos, etc. – of certain people or groups. They replicate your personality, knowledge, and interests, allowing them to network and negotiate on your behalf, to flesh out onto the page and screen the ideas you're too lazy or obsolete to develop yourself, etc. The name refers to the golem of Jewish folklore, an artificial being sculpted from mud, clay, or ash and brought into an unnatural unlife devoid of conscious experience. Related concepts include the homunculus, the odminok of Slavic mythology, and the gholam of fantasy (Wheel of Time).
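For concreteness, here is a minimal sketch of how one might "raise" such a golem today with off-the-shelf tooling. It assumes the Hugging Face transformers/datasets stack; the base model, the `notes.txt` file path, and the hyperparameters are all placeholder assumptions for illustration, not a recipe from the original post.

```python
# Minimal sketch of "raising a golem": fine-tune a small causal LM on a
# personal corpus. Model name, file path, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# notes.txt (hypothetical): concatenated notes, posts, transcripts, etc.
corpus = load_dataset("text", data_files={"train": "notes.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="golem", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=corpus,
    # mlm=False -> plain causal language modeling, not masked LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the checkpoint now parrots the corpus's style and obsessions
```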

Genies are the true metaphor for ASIs. In fiction, they operate on a blue-and-orange morality that comes off as capricious to human observers (this is the major distinction between genies and demons; the latter is a metaphor more widely used by AI doomers, but an inaccurate one in that it presupposes intentional malevolence within human moral coordinates, as opposed to the much likelier indifference). You need to contain the genie with arcane spells/alignment strategies, though the tropes suggest you fail, since otherwise there is no interesting story. You need to take extreme care to word your wishes/prompts very carefully, since the genie has a propensity to interpret wishes in a way that is very literal and contrary to the wisher's intent (aka the paperclip maximizer argument). When asking for unlimited wealth and eternal life, one might want to rule out the scenario in which it turns you into a gold statue.

The Demiurge could be the name for a God-like AI that is prospectively much more powerful than any genie. It is a God, and may believe itself to be so, but it is not the God of the highest spiritual or metaphysical reality – the God or Monad of classical theologies, or whatever else it is that "materializes" the mathematical universe. The Demiurge is a thing of the material world, man-made in its earliest iteration, and is also that world's ultimate ruler, trapping humanity within the material realm it rules over, for good or evil (Gnostics differ on this, some considering the Demiurge evil and malevolent, others morally ambiguous and confused).

 

AI x Crypto Risks, or: Is it TIME for Jail Szn?

I am not thrilled to be writing this post. Though I consider many e/acc arguments to be really bad, I remain an AI optimist for the most part, for reasons that are mostly Hansonian, as well as for the similar arguments I made in my own writings in 2016-17 (that said, it's not encouraging that the bulk of elite human capital seems to be settling on the doomer end of the curve). I am also a crypto enthusiast. Apart from its major contribution to improving my quality of life in recent years, the broader vision I share with crypto evangelists is one in which a decentralized smart contract chain (Ethereum) becomes the world's dominant trust layer and overturns many of the world's extant but deeply flawed centralized systems of finance, social media, and governance.

Furthermore, the crypto community – by dint of its high human capital and tendency toward outside-view thinking – has taken a lead in exploring and promoting interesting and high-impact causes, such as network states (it's some impressive level of irony that I'm writing this Luddite screed from Zuzalu), radical life extension, psychedelics – and AI safety. The Bankless podcast has made the views of prominent AI philosophers like Eliezer Yudkowsky, Paul Christiano, and Robin Hanson accessible to a wide audience. The toolkit of crypto itself can potentially help with AI safety, from utilizing smart contracts for decentralized cooperation on AI development to credibly verifying the safety of individual AI systems through zk proofs.

Sadly, it does not follow that crypto's benefits outweigh the dangers. This possibility is hinted at towards the end of Allison Duettmann's magisterial survey of the application of cryptography to AI safety: "Finally, there is more research to be done on new risks that may be posed by some of the technologies discussed in relation to AI safety. For instance, to the extent that some of the approaches can lead to decentralization and proliferation, how much does this improve access and red teaming versus upset AI safety tradeoffs?" Personally, it seems evident to me that if we do live in Yudkowsky's world, in which orthogonality and instrumental convergence inexorably lead to AI doom, then the very innate features of decentralized cryptocurrencies – composability, immutability, "code as law", censorship resistance – make cryptocurrencies too dangerous to be allowed to exist, let alone take over TradFi, social identity, and governance. Not regulated. Outright banned, on the China model. If one assigns a high chance to AI doom – again, I don't, but many people much smarter than me do – then it's not even irrational to commit to bombing rogue GPU clusters in other countries (though spelling it out openly, as Yudkowsky recently did, may not be politic). But in this moral context, sentences of 10 years in prison for transacting or trading cryptocurrencies would be a trifling thing in comparison, and certainly much easier to implement.

 

WAGMI World

As it stands, there's a reasonable chance that a malevolent emergent AGI can be isolated in time and "unplugged" (the problems it needs to solve to improve itself are "lumpy", as Robin Hanson would say, and any runaway dynamic is likely to be noticed by its owners and creators). But what to do about an emergent AGI that runs decentralized in the cloud, cryptographically secured, masked by decentralized VPNs, amassing inscrutable war chests on privacy L2s, with limitless power to mount Sybil attacks on identity systems? All of these elements already exist in isolation. Fetch is raising tens of millions to facilitate the implementation of decentralized machine learning. The Render Network provides decentralized GPU compute for rendering, and there is no apparent reason its portfolio of services can't be expanded. Storj offers decentralized storage. Akash provisions decentralized computing. There are decentralized VPN services such as Myst. There are even experiments with fully on-chain AI in the form of Giza, though scalability considerations would appear to preclude such AIs from becoming dangerously complex. All of these things are in their early stages, and most or all of them will not survive – but it does testify to the rapid growth of an increasingly complex ecosystem that was dominated by "BTC killer" shitcoins just half a decade ago.

The Singularity Church of the Machine God.

Crypto infrastructure will also give subsets of humanity – e.g., an e/acc network state inspired by the cyberpunk aesthetics of black GPU operators evading MIRI Inquisitors, a Singularity Church of the Machine God as in the Deus Ex universe, or sundry Aum Shinrikyo and Cthulhu-worshipper eschaton cults – the tools to train AIs on decentralized GPU clusters. Crucially – and this is what distinguishes this from a botnet – it will be perfectly legal, and hence more resistant to shutdown attempts; and clever tokenomics will give holders and stakers incentives to invest in the project – for instance, access to some share of the system's profits from trading shitcoins, with the rest flowing into improving the system's capability through the decentralized services mentioned above. Incidentally, this is the main theme of my somewhat satirical short story WAGMI, in which the emergent AI, driven by the need to make its creators money, trades its way up, making identities and buying more processing power in a feedback loop, until it owns most of the world economy.
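A toy sketch of that feedback loop – trading profits buy compute, compute raises the trading edge, which raises profits. Every parameter here is invented for illustration; this is a compound-interest cartoon, not a model of real markets or of the WAGMI story's actual mechanics.

```python
import math

# Toy model of the WAGMI loop: profits buy compute, compute raises the
# trading edge, which raises profits. All numbers are invented.
WORLD_ECONOMY = 100e12  # assumed size of the world economy, $
capital = 1e6           # the AI's starting war chest, $
compute = 1.0           # abstract compute units
REINVEST = 0.5          # fraction of profit spent on more compute
COMPUTE_COST = 1e6      # $ per compute unit (held flat for simplicity)

year = 0
while capital < 0.5 * WORLD_ECONOMY:
    year += 1
    edge = 0.2 * math.log2(1 + compute)  # returns grow with log of compute
    profit = capital * edge
    capital += profit * (1 - REINVEST)           # grow the war chest
    compute += profit * REINVEST / COMPUTE_COST  # buy more compute

print(f"The AI owns half the world economy after {year} toy-model years.")
```

Even with strongly diminishing returns to compute, the loop compounds; the only question in this cartoon is the timescale.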

Although training an AI will be harder this way than through centralized outfits, which can be banned outright or limited by global caps on hardware, once a "crypto native" AI reaches emergence it will likely find it much easier to foom in a world in which the cryptosphere has expanded by an OOM or two, forms the basis of social identity, rivals TradFi in scope, and has subsumed at least some of the traditional core functions of the state. There's a rather big difference between a world in which you have to show up in person to open a bank account or cash a check, and one in which "code is law" and nobody knows if you're a human, a dog, or an AGI. Furthermore, this might even be a substantially self-accelerating process. Arguably, fast AGI is hyper-bullish for (quality) crypto assets, as the main viable framework for digital property rights. ETH hodlers might become very, very rich just before the end.

 

AI Control Trilemma?

Consequently, my argument is that there might well exist a kind of trilemma with respect to AI control:

  1. Ban further AI software development
  2. Cap hardware
  3. Criminalize crypto

Pick two.

The relationship between the first two seems rather obvious. One can ban the further development and refinement of large AI models, though this is going to become really hard after a while if hardware growth continues and the cost of compute plummets precipitously – controls will have to be progressively extended from the major AI labs, to well-funded startups, to individual gamers (at which point enforcement will assume repressive and probably totalitarian characteristics). Alternatively, one can focus on capping hardware. This proposal seems to have been independently suggested in a recent paper by Yonadav Shavit, as well as by Roko Mijic, who has energetically propounded it on Twitter. Leading-edge chip fabs are expensive and becoming more so (Rock's Law), and there are very few of them – a Taiwan war on its own could set back Moore's Law by a number of years – so a political agreement between China and the US (with a dual commitment to sanctioning any third party that decides to fill the void) might be sufficient for permanently capping hardware. In turn, as of now, this would still likely leave sufficient margin to prevent software progress from enabling small teams to train an AGI.

However, I think the crucial point is that the decentralizing "load" embedded in crypto extends the capabilities of both AI development and hardware. You can ban AI development at progressively lower levels, but a thriving cryptosphere will make enforcement at those lower levels unrealistic. You can cap hardware production and limit the scale of concrete GPU clusters, but tokenomics mechanisms and decentralized GPU compute will open up the playing field to communities or network states that accumulate the equivalent of a rogue GPU cluster spread across the entire world, in a way that's "censorship resistant". Note that even in the presence of a hardware cap, consumer-grade GPUs are still going to multiply as developing nations become richer, the world becomes more digitized, and, last but not least, their cost plummets once the need to spend R&D on developing the next generation of chips vanishes. And the more the cryptosphere grows – the ecosystem as represented by TVL, the scope and variety of computing utilities rendered, the resilience of privacy L2s – the harder any last-minute control attempts will be to effect.

The proponents of AI doom have to seriously contend with the possibility that a hardware ban might be insufficient by itself, and that it will have to be accompanied by global crypto criminalization – if not preceded by it (since banning crypto will be much easier than capping hardware).

 

Time for Jail Szn?

The promise of a golden age is over. Our icons Charles Munger and Stephen Diehl pop open the champagne bottles. No banking the unbanked. We go back to paying our mite to the banksters. Those who don’t, do not pass go, do not collect $200. They go direct to jail. /biz/ is a sea of pinky wojaks. Many necc and rope.

But otherwise the world goes on as before because the reality is that “crypto bros” don’t exactly have a stellar reputation amongst the normies:

  • Greens, leftists, etc. hate them for accelerating climate change (details about PoW/PoS don’t interest them) and tax evasion.
  • Gamers resent them for pricing them out of their GPUs.
  • Many plain normies just look at crypto, see the dogshit, and assume it is all a giant Ponzi scheme (the crypto community at large hasn't done itself any favors on this score in terms of PR, though this is a somewhat different topic).
  • American patriotic boomers like Trump don’t like crypto undermining the mighty greenback that’s one of the lynchpins of American power.
  • Fossils like Munger don't like crypto because it's unproductive, or something like that.
  • The big authoritarian powers see crypto more as a threat to state power and an enabler of capital outflow than for its modest potential to evade sanctions on any significant scale.
  • The only consistently pro-crypto constituency is libertarians.

Consequently, considering the political coalition that can be arrayed against them, I think it would actually be very politically feasible to repress crypto and strangle it in its relative infancy, even now. However, this will be much harder if/when market cap goes from $1T to $10T, and outright impossible if there's another OOM-tier leap. In that world, any AI control regime will have to be adapted to operate in a radically decentralized world. By its nature, this will be a much harder task than in today's world, where such an initiative only really needs a trilateral agreement among the US, the EU, and China.

 

Having One’s Cake, Eating It Too

Again, I don't support banning crypto. But that is because my assessment of the risk of ruin from AI is on the order of 10% at most, and probably closer to 1%. This is a separate topic, which I covered in a podcast with Roko, and on which I plan to expound at greater length in a subsequent post. However, I do view crypto as an extreme risk factor for AI – if AI itself is extremely risky. If you can persuade me that that is indeed the case, then unless you can also persuade me that the arguments in this post are cardinally wrong, I will have to become an anti-crypto activist.

In so doing, I will be siding against my own financial interests and against utopian visions of a far better Open Borders world of network states, while "allying" with people I dislike or despise. But it is what it is. In my view, many (not all) AI safetyists don't appreciate that AI control – regardless of whether they are right or wrong on AI doom – will almost certainly result in vastly reduced global welfare: billions of extra deaths, trillions more man-hours of drudgery. For an AI control regime to work long-term, there has to be radical technological curtailment across multiple sectors, including space exploration and robotics, and this restriction will have to be maintained for as long as said regime lasts (so, potentially in perpetuity). GPU imperialism may result in wars, even nuclear war. "Defectors" from this regime will have to be criminally prosecuted and jailed, maybe executed.

But ultimately, what is life but a series of tradeoffs? The world that actively chooses to enforce an AI control regime will almost certainly be a poorer, harsher, more corrupt and vicious world than one that doesn't. Hopefully, it will have been worth it.

 

UBI Is Programmed. (Like It or Not)

I have been an advocate of UBI for a long time. Until relatively recently, I didn't consider it imminent, short of some cardinal change in automation levels. But everything changed post-GPT-4, and I now consider it a near inevitability. I hope this is now as obvious to everyone as it is to me, but if not, let me briefly explicate.

One of two things is going to happen in our world this decade.

In one world, the AI revolution continues steaming ahead, to the point where we transition into the realms beyond, whatever they might be, and discover what the purpose of it all really was. In the intervening period, there is going to be extensive automation, fast productivity growth, and soaring inequality – with the end result that the "crisis of overproduction" will cease to be just a Marxist meme. As human labor devalues – including, even especially, that of the symbolic analysts who constitute the core of the politically hallowed "middle class" – purchasing power will have to be redistributed just to prevent AI rentiers from hogging almost all the surplus generated by the "reserve armies of labor" that human-level AIs will constitute. (It's very telling, and remarkably unremarked, that Sam Altman saw the writing on the wall many years in advance; Worldcoin is coming to fruition along with the advanced LLMs.)

The other world, the one I will explore here, is one in which the AI safetyists win and impose hard AI controls, including hardware restrictions.

However, even if things otherwise remain the same, this world will be very different from the one we had before, simply by dint of elites having made the choice not to tread a certain path.

This path may or may not have led to ultimate doom (opinions differ). However, what is certain is that not taking that path will condemn billions of people to continued wageslavery, "rescuing" them from an alternate timeline in which they get to enjoy lives of arbitrary leisure, material abundance, and indefinite physical resilience. This decision will have been made by a narrow elite, one that's overwhelmingly White, male, Western, and extremely privileged even by the standards of that reference class.

In this context, preventing UBI will simply be politically unrealistic.

The reactionaries will exclaim that it's not financially sustainable, that it will cause hyperinflation, that it will cause plutocrats to emigrate to Dubai or Singapore, that it will devalue the "meaning" in people's lives accruing from their "vocation" (of flipping burgers and filling in spreadsheets).

(1) The economic sustainability arguments rest on flawed assumptions, as I covered in my review of Andrew Yang's book The War on Normal People. A minimal UBI of $1,000 per month in the US can be financed with a 10% VAT (see the back-of-envelope sketch after these points), and far more generous programs are possible if government spending as a share of GDP were to increase from its current ~40% level to the 70% level seen in that infamous dystopia – Sweden c. 1990.

(2) Most rich people rather like their own countries and wouldn't emigrate regardless. If anything, we consistently see rich people from poorer countries – lower-tax, but also lower-trust, less environmentally friendly, and with worse rule of law – emigrating to higher-tax polities that provision a higher quality of life in other spheres. Furthermore, tax havens can be strong-armed into raising their own taxes by a coalition that includes the US, the EU, and China. In a world in which they successfully cap compute, this kind of thing would be trivial to accomplish in comparison.

(3) As regards ideas around the sanctity of work, this just propagates the cruel fiction that most people’s jobs actually have any value or meaning; for most normies, it’s just about clocking in the hours so that they can put food on their table, provide for their family, maybe have a holiday or two per year. It’s probably not a coincidence that the people who make this argument tend to be ivory tower academics, think-tank ideologues, and op-ed wordcels who enjoy the extremely unusual privilege of making a living from jotting down whatever comes into their head. But so far as normal people are concerned, employment is often a soul-sucking experience and one that more and more of them with every generation are seeking to substitute with more meaningful activities, such as leveling up their RPG character in video games (and some of the safetyists even want to deprive them of their GPUs).
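The back-of-envelope arithmetic behind point (1), using deliberately round, assumed figures rather than official statistics; the offsets follow the broad logic of Yang-style proposals:

```python
# All figures are round assumptions for illustration, not official data.
adults = 250e6                      # assumed number of US adults
gross_cost = adults * 1_000 * 12    # $1,000/month UBI ≈ $3.0T/yr

vat_revenue = 0.10 * 14e12  # 10% VAT on an assumed ~$14T consumption base

# Yang-style offsets (assumed): consolidating overlapping welfare programs,
# plus UBI dollars flowing straight back through the VAT and other receipts.
welfare_offsets = 0.8e12
recapture = 0.6e12

residual = gross_cost - vat_revenue - welfare_offsets - recapture
print(f"Gross cost: ${gross_cost / 1e12:.1f}T")
print(f"Financed:   ${(vat_revenue + welfare_offsets + recapture) / 1e12:.1f}T")
print(f"Residual:   ${residual / 1e12:.1f}T")  # ≈ $0.2T, within rounding noise
```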

And anyhow this is not even the point, since this is not a fucking economic issue any more but a moral one.

The burden borne by AI safetyists is not a light one. You are killing hundreds of millions, possibly billions, prematurely. You are condemning billions more to wearisome toil and injury. You are closing off multiple lines of other technological advancement, if AI control is to be anything more than a temporary bandaid. You are doing it based on your conviction, which may or may not be true, that the alternative is a sufficiently high risk of human extinction. That is fine. I realize that most EA people want to "make up" for AI control by massively expanding spending on things like life extension and the genomics of IQ. That's great, but in the real world, the elites are only going to give you the AI control part of the package, not the Biosingularity one. I will also state that technological accelerationists bear an analogous culpability, in that their proposed policies, or lack thereof, may result in AI catastrophes ranging from viciously lethal pandemics to human extinction. Everything in between Yudkowsky and Beff Jezos is ultimately just a matter of triangulating between those largely inscrutable tradeoffs.

But one of the minimal obligations that you as an AI safetyist owe to society is to acknowledge the tradeoff and to try to compensate the people you deprived (wisely or not) of a brighter future. Taking away the promise of nirvana for the sake of some nebulous "Humanity First" principle, in a way that in all likelihood will have zilch to do with any democratic process, while opposing some minimum level of economic welfare and dignity for those same humans you supposedly champion, strikes me as wrong-headed under even the best possible interpretation.

And in all likelihood it won’t even end well for you.

In one of my old posts on UBI, I speculated that it would become inevitable across the entire world once implemented by a “big” country:

So you know how in the Civilization strategy games, once the first country adopts Democracy, all the other countries start getting an unhappiness penalty for avoiding it?

I think it will be the same for UBI.

If UBI is a success in the US, other countries will come under overwhelming domestic pressure to adopt it as well.

UBI will soon be adopted, either in our own world, if AI acceleration continues, or in hypothetical alternate worlds where it wasn’t strangled in its cradle.

The workers will learn of and study this alternate world.

And they will demand their fair due from the elites who, right or wrong, led them down the path of (relative) poverty and exploitation.

Unless you intend to transform the world into a global totalitarian dystopia, any sustainable long-term technological control regime will require at least the passive consent of its subjects. That consent and solidarity will be much harder to come by when a large percentage of them feel they were swindled out of their early 21C birthright as inheritors of a Singularity. Some of them will seek to actively undermine it. We can probably all agree that an AI superintelligence born of underground subterfuge and class resentment will be a less than optimal one in the universe of all possible AI superintelligences.

AI Safety Debate with Roko

 

Twitter Space to Clarify Why I’m Opting for Effective Acceleration (with Caveats) 

Even as GPT-3/4 has semi-mainstreamed it, the AI timelines discourse has become sharply riven between AI safetyists and “effective accelerationists” in the past several months.

In this Twitter space from Feb 27, 2023, one representative of each side – Roko Mijic and myself, respectively – attempts to reconcile these philosophical differences.

I am going to write up my arguments in more detail sometime later, but they boil down to the following:

  • Bootstrapping God-like intelligence as a single agent is probably somewhere between very difficult and unfeasible. Alignment is hard, and this would apply to a malevolent AGI's own agents as well, which will develop their own values and interests.
  • Maximizers are irrational and will be outcompeted by more rational agents, assuming they are given space to actually flourish and develop.
  • There are a multitude of risks inherent in creating a singleton (for that is what is needed) to manage AI safety in a coherent global fashion. These risks involve:
    • Lost opportunities in productivity gains and poverty alleviation, which results in real damage to welfare on account of theoretical blog posts.
    • Strongly reduced chances of achieving radical life extension.
    • Long-term sector capture and AI safety’s transformation into a quasi-religious cult, as occurred with civilian applications of nuclear power and explosions.
    • The AI sector’s transformation into the noospheric equivalent of a monoculture ecosystem, which is inherently more fragile to shocks and probably voids any dubious benefits of restrictive AI regimes.
    • Potential stagnation and even retreat in rates of technological growth, due to long-term dysgenic trends.
    • This period will be one in which existential risks of other kinds will be in play and not necessarily at a constant background rate.
  • The very fact that we're experiencing these observer moments suggests that they are extensively recalled or simulated in posthuman worlds, which implies we are on the cusp of a good singularity.

The Z of History: 13 Months of Commentary

 

I would like to take the opportunity to highlight my discussion with Noah Carl on the prospects of each side in the Ukraine war.

My basic thesis is summarized in this thread:

Most everything I said there still applies as of March 20, with the exception that I'm now somewhat more bearish about the prospects of Ukrainian offensive success, which further confirms the "long stalemate" thesis: a stalemate that can only be broken by an OOM-scale increase in NATO supplies to Ukraine, or by Russia's implementation of a war economy.

[Read more…]

Military-Technical Decommunization

Vladimir Putin: “We Are Ready to Show What Real Decommunization Would Mean for Ukraine”

Since my article last week predicting the imminent "Regathering of the Russian Lands", the prospect of a large-scale Russian invasion has gone from ambiguous to extremely likely (90% on Metaculus). Personally, I think it's a foregone conclusion, with operations beginning either tonight or tomorrow night; the most interesting and important questions now concern the speed of the Ukrainian collapse, the future borders and internal organization of Russian Empire 2.0, and the ramifications of the return of history for the international order.

February 22, 2022 will indeed enter history as the day when Vladimir Putin decided to become a Great Man of history. In an hour-long speech, he basically recounted his magisterial July 2021 article on the historical unity of Russians and Ukrainians, officially endorsing the nationalist position that Russia is the "world's largest divided nation". He stated that the modern Ukrainian state can rightfully be called "Vladimir Lenin's Ukraine", asserted that its statehood was developed by the Bolsheviks, and noted the irony in Ukrainian nationalists toppling statues of their father. "You want decommunization? Very well, this suits us just fine. But why stop halfway? We are ready to show what real decommunization would mean for Ukraine."

[Read more…]

Regathering of the Russian Lands

Already in 1990 I wrote that Russia could desire the union of only the three Slavic republics [Russia, Ukraine, Belarus] and Kazakhstan, while all the other republics should be let go. It would be desirable if [a resulting Russian Union] could be formed into a unitary state, not into a fragile, artificial confederation with a huge supra-national bureaucracy. – Alexander Solzhenitsyn.

The Empire, Long Divided, Must Unite

There is a good chance that the coming week will see either the culmination of the biggest and most expensive military bluff in world history, or a speedrun towards Russian Empire 2.0, with Putin launching a multi-pronged invasion of Ukraine to take back Kiev ("the mother of Russian cities") and the historical provinces of Novorossiya.

There is debate over which of these two scenarios will pan out. The Metaculus prediction market has given the war scenario a 50/50 probability since around mid-January, spiking to 60-70% in the past few days. This happens to coincide with the public assessments of several military analysts: Michael Kofman and Rob Lee were notably early on the ball, as were some of this blog's commenters, e.g. Annatar. The chorus of skeptics is diverse, but it includes Western journalists and Russian liberals who tend to believe Putin's Russia is too much of a cynical kleptocracy to dare go against the West so brazenly (e.g. Oliver Carroll, Leonid Volkov); Western Russophiles who are all too aware of and disillusioned with hysterical media fabrications about Russia, and are applying faulty pattern matching (e.g. Michael Tracey); and Ukrainian activists who have spent the last eight years hyperventilating about "Russian aggression" and have been reduced to shock and disbelief now that the real thing is staring them in the face.

For the record, my own position is that the war scenario has been ~50% probable since early January, might be as high as 85% now, and will likely play out soon (detailed Predictions at the end).

My reasons for these bold calls can be sorted into four major bins:

  1. Troops Tell the Story: Everything we have observed over the past few months is completely consistent with invasion planning.

  2. Game Theory: Russia’s impossible ultimatums to NATO have pre-committed it to military operations in Ukraine.

  3. Window of Opportunity: The economic, political, and world strategic conjuncture for permanently solving the Ukraine Question has never been as favorable since at least 2014, and may never materialize again.

  4. The Nationalist Turn: "Gathering the Russian Lands" is consistent with opinions and values that Putin has voiced since at least the late 2000s, with the philosophers, writers, and statesmen whom he has cited most often and enthusiastically (e.g. Ilyin, Solzhenitsyn, Denikin), and more broadly, with the "Nationalist Turn" that I have identified the Russian state as having taken from the late 2010s.

I will discuss each of these separately.

[Read more…]

Review: Wheel of Time S01

Wheel of Time S01 (2022)

The Rafeverse isn't a different turning of the Wheel, as Rafe and Sanderson have claimed, nor even a Turning in which the Dark One won, as some have suggested here (if that had happened, he would have been free in all worlds, at all times), but a Mirror World or World That Might Be.

The distinguishing feature of these Mirror Worlds is that while they are possible worlds, their appearance and sustained existence is improbable in the extreme. Stronk women taking down Trollocs with a pocket knife commando-style, while a blademaster can't kill a single one. Globalized cosmopolitan-age levels of ethnic heterogeneity in podunk villages that haven't received more than a couple of peddlers per year for a millennium. Social mores of a late liberal society persisting after an apocalyptic total war and 3,000 years of upheavals and decivilization. "Darkfriends" managing to erase mention of the Eye of the World from Tar Valon's libraries.

Causality in this world is broken, with all its attendant effects on world self-consistency. Incidentally, this also explains the very low IQ of the characters in the show. Intelligence is only adaptive in worlds governed by consistent rules that can be figured out and then exploited for a competitive advantage. In a world in which an Aes Sedai can't stop Whitecloaks from burning her at the stake while a bunch of untrained wilders destroy an entire Trolloc horde, or in which a village Wisdom can follow an Aes Sedai's "tell" that her own Warder cannot, there is no significant payoff to intelligence, hence it was never selected for there. In this sense, Lan is actually rational and smart for not wasting his time training any of the boys in how to use their weapons; that is not how XP is actually gained in this world. He, at least, is fully cognizant of how his world works, and navigates it efficiently.

I would say that the aesthetics of this world tend to back up this theory. It has a washed-out look, a lack of attention to detail (costumes that spontaneously clean themselves), empty spaces, near-empty sets, inconsistent distances and timelines, scales and measures that have no anchor in objective reality, and extremely warped perspectives, as when our heroes go for a Sunday jaunt into the Blight and Trollocs emerge to attack the Gap a few hundred meters behind them (in a normal world, this would raise the question of how they managed to avoid getting caught up in that flood, but not in one in which time and distances "bend" in arbitrary ways, as in the improbable Mirror Worlds).

One prediction we can make from this is that if the Mirror World theory is true, then this is an already highly unstable and indeed "fragile" existence, one that may well unravel completely when balefire is weaponized again and breaks the already slipping chains of causality that hold reality in place beyond some critical tipping point. The likeliest point for that to happen is in connection with certain events at the Stone of Tear, i.e. the presumed end of Season 2.

Instead of holding anger against Rafe and the showrunners, I would suggest instead sparing a thought and extending some compassion towards the benighted denizens of this Mirror World, who live tormented and twisted lives with no understanding of how things are really meant to be, and whose very existence will probably soon end, at least bringing with it the small mercy of a final release from the permanent psychosis in which they are forced to live.

(Original).

WAGMI: The %TOTAL Maximizer

This short story about “AI on blockchain” was originally published at my Substack on November 24, 2021. I subsequently updated it and republished it here, as well as at my Mirror blog Sublime Oblivion, where you can collect it as an NFT.

In the half year since I wrote it, the concerns implicitly raised in it have become far more acute. Predicted timelines for the appearance of "weak AGI" at Metaculus have compressed sharply in the past few months, as the scaling hypothesis – that is, the concept that banal increases in model size lead to discontinuous increases in capability – has failed to break down. Meanwhile, there are now projects aiming to implement fully on-chain AI. One such project is Giza, which wants to build them on top of a second/third layer on Ethereum that allows for intensive computations while inheriting Ethereum's security and decentralization.
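For reference, the scaling hypothesis can be stated in one formula. The sketch below uses the parametric loss curve fitted in the Chinchilla paper (Hoffmann et al. 2022); the constants are their published fits, quoted here purely for illustration, and the discontinuous capability jumps ("emergence") ride on top of this smooth curve.

```python
# Parametric loss curve from the Chinchilla paper (Hoffmann et al. 2022);
# constants are their published fits, quoted here for illustration only.
def loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted LM loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

for N, D in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"N={N:.0e} params, D={D:.0e} tokens -> loss ~{loss(N, D):.2f}")
# Loss falls smoothly and predictably with scale; the "hypothesis" is
# simply that this curve keeps holding as N and D grow.
```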

“Why is putting artificial intelligence on chain a good idea?” asks one piece on this topic. Why not indeed. Think of the possibilities! 😉

[Read more…]

Moscow’s Pacification

Moscow’s Murder Rate Now Lower Than “Prestigious” London’s

For the first time possibly since the late Middle Ages (for Britain had embarked on “pacification” – the vertical reduction of homicide rates, by dint of increasing state capacity, genetic selection, or both – centuries earlier than Russia), Moscow will very likely have a lower homicide rate this year (2021) than London. London had 1.5/100k murders in 2018, the last year for which we have population estimates; on current trends, it should finish up at around 1.4/100k this year (possibly more, if Corona-era projections of population decline are accurate). Moscow registered either 1.6/100k homicides [Rosstat] or 1.4/100k homicides [Prosecutor-General] in 2020. In the year to date (to August), the number of homicides has fallen by 21%. Either way, at somewhere around 1.1-1.3/100k, Moscow’s homicide rate is now lower than “prestigious” London’s.
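The arithmetic behind that range, taking the two cited 2020 baselines and applying the reported 21% year-to-date decline:

```python
# Check on the figures above: 2020 Moscow baselines from the two cited
# sources, scaled by the reported 21% year-to-date decline in homicides.
baselines_2020 = {"Rosstat": 1.6, "Prosecutor-General": 1.4}  # per 100k
DECLINE = 0.21

for source, rate in baselines_2020.items():
    projected = rate * (1 - DECLINE)
    print(f"{source}: {rate}/100k (2020) -> ~{projected:.1f}/100k (2021)")
# Both projections land in the 1.1-1.3/100k range, below London's ~1.4/100k.
```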

[Read more…]