AI x Crypto Risks, or: Is it TIME for Jail Szn?

I am not thrilled to be writing this post. Though I consider many e/acc arguments to be really bad, I remain an AI optimist for the most part, for reasons that are mostly Hansonian, as well as for the similar arguments I made in my own writings from 2016-17 (that said, it’s not encouraging that the bulk of elite human capital seems to be settling on the doomer end of the curve). I am also a crypto enthusiast. Apart from its major contribution to improving my quality of life in recent years, the broader vision that I share with crypto evangelists is one in which a decentralized smart contract chain (Ethereum) becomes the world’s dominant trust layer and overturns many of the world’s extant but deeply flawed centralized systems of finance, social media, and governance.

Furthermore, the crypto community – by dint of its high human capital and tendency toward outside-view thinking – has taken a lead in exploring and promoting interesting and high-impact causes, such as network states (it’s some impressive level of irony that I’m writing this Luddite screed from Zuzalu), radical life extension, psychedelics – and AI safety. The Bankless podcast has made the views of prominent AI philosophers like Eliezer Yudkowsky, Paul Christiano, and Robin Hanson accessible to a wide audience. The toolkit of crypto itself can potentially help with AI safety, from utilizing smart contracts for decentralized cooperation on AI development to credibly verifying the safety of individual AI systems through zk proofs.

Sadly, it does not follow that crypto’s benefits outweigh the dangers. This possibility is hinted at towards the end of Allison Duettmann’s magisterial survey of the application of cryptography to AI safety: “Finally, there is more research to be done on new risks that may be posed by some of the technologies discussed in relation to AI safety. For instance, to the extent that some of the approaches can lead to decentralization and proliferation, how much does this improve access and red teaming versus upset AI safety tradeoffs?” Personally, it seems evident to me that if we do live in Yudkowsky’s world in which orthogonality and instrumental convergence inexorably lead to AI doom, then the very innate features of decentralized cryptocurrencies – composability, immutability, “code as law”, censorship resistance – make cryptocurrencies too dangerous to be allowed to exist, let alone take over  #TradFi, social identity, and governance. Not regulated. Outright banned, on the China model. If one assigns a high chance to AI doom – again, I don’t, but many people much smarter than me do – then it’s not even irrational to commit to bombing rogue GPU clusters in other countries (though spelling it out openly, as Yudkowsky recently did, may not be politic). But in this moral context, sentences of 10 years in prison for transacting or trading cryptocurrencies would be a trifling thing in comparison, and certainly much easier to implement.

 

WAGMI World

As it stands, there’s a reasonable chance that a malevolent emergent AGI can be isolated in time and “unplugged” (the problems it needs to solve to improve itself are “lumpy”, as Robin Hanson would say, and any runaway dynamic is likely to be noticed by its owners and creators). But what to do about an emergent AGI that runs decentralized in the cloud, cryptographically secured, masked by decentralized VPNs, amassing inscrutable war chests on privacy L2s, with limitless power to mount Sybil attacks on identity systems? All of these elements already exist in isolation. Fetch is raising tens of millions to facilitate the implementation of decentralized machine learning. The Render Network provides decentralized GPU compute for rendering workloads, and there is no apparent reason its portfolio of services can’t be expanded. Storj offers decentralized storage. Akash provisions decentralized computing. There are decentralized VPN services such as Myst. There are even experiments with fully on-chain AI in the form of Giza, though scalability considerations would appear to preclude such AIs from becoming dangerously complex. All of these things are in their early stages, and most or all of them will not survive – but they do testify to the rapid growth of an increasingly complex ecosystem that was dominated by “BTC killer” shitcoins just half a decade ago.

The Singularity Church of the Machine God.

Crypto infrastructure will also give subsets of humanity – e.g., an e/acc network state, inspired by the cyberpunk aesthetics of black GPU operators evading MIRI Inquisitors, or a Singularity Church of the Machine God as in the Deus Ex universe, and sundry Aum Shinrikyo and Cthulhu worshipper eschaton cults – the tools to train AIs through decentralized GPU clusters. Crucially, and this is what distinguishes it from a botnet, this will be perfectly legal, and hence more resistant to shutdown attempts; and it will give holders and stakers incentives to invest in the project through clever tokenomics – for instance, access to some share of the system’s profits from trading shitcoins, with the rest flowing into improving the system’s capability through the decentralized services mentioned above. Incidentally, this is the main theme of my somewhat satirical short story WAGMI, in which the emergent AI, driven by the need to make its creators money, trades its way up, making identities and buying more processing power in a feedback loop, until it owns most of the world economy.
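To make the feedback loop concrete, here is a minimal toy simulation of my own (an illustrative sketch, not code from the WAGMI story or from any real protocol; the function name and all parameters are made-up assumptions): a trading agent earns a return on its deployed compute, pays a fixed share of profits out to token holders, and reinvests the rest in additional decentralized compute, compounding its capability over time.

```python
# Toy model of a WAGMI-style feedback loop: trading profits are split between
# token holders and purchases of additional decentralized compute.
# All numbers below are illustrative assumptions, not estimates of any real system.

def simulate(years: int = 10,
             compute_units: float = 1.0,      # initial stock of (arbitrary) compute units
             annual_return: float = 0.5,      # assumed profit per unit of compute per year
             holder_share: float = 0.3,       # fraction of profit paid out to holders/stakers
             compute_price: float = 1.0):     # assumed cost of one additional compute unit
    """Compound a trading agent's compute stock over a number of yearly periods."""
    for year in range(1, years + 1):
        profit = compute_units * annual_return
        payout = profit * holder_share          # flows to token holders/stakers
        reinvested = profit - payout            # buys more decentralized compute
        compute_units += reinvested / compute_price
        print(f"Year {year:2d}: compute={compute_units:7.2f}, "
              f"payout={payout:6.2f}, reinvested={reinvested:6.2f}")

if __name__ == "__main__":
    simulate()
```

Under these made-up parameters the compute stock grows by roughly 35% a year; the only point is that any positive reinvestment share produces compounding, which is the dynamic the story turns on.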

Although training an AI will be harder this way than through centralized outfits, which can be banned outright or limited by global caps on hardware, once a “crypto native” AI reaches emergence it will likely find it much easier to foom in a world in which the cryptosphere has expanded by an OOM or two, forms the basis of social identity, rivals TradFi in scope, and has subsumed at least some of the traditional core functions of the state. There’s a rather big difference between a world in which you have to show up in person to open a bank account or cash a check, and one in which “code is law” and nobody knows if you’re a human, a dog, or an AGI. Furthermore, this might even be a substantially self-accelerating process. Arguably, fast AGI is hyper-bullish for (quality) crypto assets, as the main viable framework for digital property rights. ETH hodlers might become very, very rich just before the end.

 

AI Control Trilemma?

Consequently, my argument is that there might well exist a kind of trilemma with respect to AI control:

  1. Ban further AI software development
  2. Cap hardware
  3. Criminalize crypto

Pick two.

The relationship between the first two seems rather obvious. One can ban the further development and refinement of large AI models, though this is going to become really hard after a while if hardware growth continues and the cost of compute plummets precipitously – controls will have to be progressively extended from the major AI labs, to well-funded startups, to individual gamers (at which point they will assume repressive and probably totalitarian characteristics). Alternatively, one can focus on capping hardware. This proposal seems to have been independently suggested in a recent paper by Yonadav Shavit, as well as by Roko Mijic, who has energetically propounded it on Twitter. Leading-edge chip fabs are expensive and becoming more so (Rock’s Law), and there are very few of them – a Taiwan war on its own could set back Moore’s Law by a number of years – so a political agreement between China and the US (with a dual commitment to sanctioning any third party that decides to fill the void) might be sufficient for permanently capping hardware. As of now, this would still likely leave sufficient margin to prevent software progress from enabling small teams to train an AGI.

However, I think the crucial point is that the decentralizing “load” embedded in crypto extends the capabilities of both AI development and hardware acquisition. You can ban AI development at progressively lower levels, but a thriving cryptosphere will make enforcement at those lower levels unrealistic. You can cap hardware production and limit the scale of concrete GPU clusters, but tokenomics mechanisms and decentralized GPU compute will open up the playing field to communities or network states that accumulate the equivalent of a rogue GPU cluster across the entire world, and in a way that is “censorship resistant”. Note that even with a hardware cap in place, consumer-grade GPUs are still going to multiply as developing nations become richer, the world becomes more digitized, and, last but not least, their cost plummets once the need to spend R&D on developing the next generation of chips vanishes. And the more the cryptosphere grows – the ecosystem as represented by TVL, the scope and variety of computing utilities rendered, the resilience of privacy L2s – the harder any last-minute control attempts will be to effect.

The proponents of AI doom have to seriously contend with the possibility that a hardware ban might be insufficient by itself, and that it will have to be accompanied by global crypto criminalization – if not preceded by it (since banning crypto will be much easier than capping hardware).

 

Time for Jail Szn?

The promise of a golden age is over. Our icons Charles Munger and Stephen Diehl pop open the champagne bottles. No banking the unbanked. We go back to paying our mite to the banksters. Those who don’t, do not pass go, do not collect $200. They go direct to jail. /biz/ is a sea of pinky wojaks. Many necc and rope.

But otherwise the world goes on as before, because the reality is that “crypto bros” don’t exactly have a stellar reputation amongst the normies:

  • Greens, leftists, etc. hate them for accelerating climate change (details about PoW/PoS don’t interest them) and for tax evasion.
  • Gamers resent them for driving up GPU prices and pricing them out of the market.
  • Many plain normies just look at crypto, see the dogshit, and assume it is all a giant Ponzi scheme (the crypto community at large hasn’t done itself any favors on this score in terms of PR, though this is a somewhat different topic).
  • American patriotic boomers like Trump don’t like crypto undermining the mighty greenback that’s one of the lynchpins of American power.
  • Fossils like Munger don’t like crypto because it’s unproductive, or something like that.
  • The big authoritarian Powers see crypto more as a threat to state power and an enabler of capital outflows than as a tool with modest potential for evading sanctions on any significant scale.
  • The only consistently pro-crypto constituency is libertarians.

Consequently, considering the political coalition that can be arrayed against them, I think it would actually be very politically feasible to repress crypto and strangle it in its relative infancy, even now. However, this will be much harder if/when market cap goes from $1T to $10T, and outright impossible if there’s another OOM-tier leap. In that world, we will have to adapt any AI control regime to operate in a radically decentralized world. By its nature, this will be a much harder task than in today’s world, where such an initiative only really needs a trilateral agreement between the US, the EU, and China.

 

Having One’s Cake, Eating It Too

Again, I don’t support banning crypto. But that is because my assessment of the risk of ruin from AI is on the order of 10% at most, and probably closer to 1%. This is a separate topic, which I covered in a podcast with Roko, and on which I plan to expound at greater length in a subsequent post. However, I do view crypto as an extreme risk factor for AI – if AI itself is extremely risky. If you can persuade me that that is indeed the case, then, unless you can also persuade me that the arguments in this post are cardinally wrong, I will have to become an anti-crypto activist.

In so doing, I will be siding against my own financial interests and against utopian visions of a far better Open Borders world of network states, while “allying” with people I dislike or despise. But it is what it is. In my view, many (not all) AI safetyists don’t appreciate that AI control – regardless of whether they are right or wrong on AI doom – will almost certainly result in vastly reduced global welfare: billions of extra deaths, trillions more man-hours of drudgery. For an AI control regime to work long-term, there has to be radical technological curtailment across multiple sectors, including space exploration and robotics, and this restriction will have to be maintained for as long as said regime lasts (so, potentially in perpetuity). GPU imperialism may result in wars, even nuclear war. “Defectors” from this regime will have to be criminally prosecuted and jailed, maybe executed.

But ultimately, what is life but a series of tradeoffs? The world that actively chooses to enforce an AI control regime will almost certainly be a poorer, harsher, more corrupt and vicious world than one that doesn’t. Hopefully, it will have been worth it.

 

AI Safety Debate with Roko

 

Twitter Space to Clarify Why I’m Opting for Effective Acceleration (with Caveats) 

Even as GPT-3/4 has semi-mainstreamed it, the AI timelines discourse has become sharply riven between AI safetyists and “effective accelerationists” in the past several months.

In this Twitter space from Feb 27, 2023, representatives of the two sides – Roko Mijic and myself, respectively – attempt to reconcile these philosophical differences.

I am going to write up my arguments in more detail sometime later, but they boil down to the following:

  • Bootstrapping God-like intelligence as a single agent is probably somewhere between very difficult and unfeasible. Alignment is hard, and this would apply to a malevolent AGI’s own agents as well, which would develop their own values and interests.
  • Maximizers are irrational and will be outcompeted by more rational agents, assuming they are given space to actually flourish and develop.
  • There are a multitude of risks inherent in creating a singleton (for that is what is needed) to manage AI safety in a coherent global fashion. These risks include:
    • Lost opportunities in productivity gains and poverty alleviation, which results in real damage to welfare on account of theoretical blog posts.
    • Strongly reduced chances of achieving radical life extension.
    • Long-term sector capture and AI safety’s transformation into a quasi-religious cult, as occurred with civilian applications of nuclear power and explosions.
    • The AI sector’s transformation into the noospheric equivalent of a monoculture ecosystem, which is inherently more fragile to shocks and probably voids any dubious benefits of restrictive AI regimes.
    • Potential stagnation and even retreat in rates of technological growth, due to long-term dysgenic trends.
    • This period will be one in which existential risks of other kinds will be in play and not necessarily at a constant background rate.
  • The very fact that we’re experiencing these observer moments suggests that they are extensively recalled or simulated in posthuman worlds, which implies we are on the cusp of a good singularity.

WAGMI: The %TOTAL Maximizer

This short story about “AI on blockchain” was originally published at my Substack on November 24, 2021. I subsequently updated it and republished it here, as well as at my Mirror blog Sublime Oblivion, where you can collect it as an NFT.

In the half year since I wrote it, the concerns implicitly raised in it have become far more acute. Predicted timelines for the appearance of “weak AGI” at Metaculus have compressed sharply in the past few months, as the scaling hypothesis – that is, the concept that banal increases in model size lead to discontinuous increases in capability – has failed to break down. Meanwhile, there are now projects aiming to implement fully on-chain AI. One such project is Giza, which wants to build such models on top of a second/third layer on Ethereum that allows for intensive computations while inheriting Ethereum’s security and decentralization.
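As background (my own gloss, not a formula from the original story or post), the scaling hypothesis is usually stated quantitatively via the empirical power law reported by Kaplan et al. (2020), under which test loss falls smoothly as model size grows, while capability on particular downstream tasks can jump once the loss crosses a task-specific threshold:

$$ L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N} $$

Here $N$ is the number of (non-embedding) model parameters, and $N_c$ and $\alpha_N$ are empirically fitted constants; analogous laws hold for dataset size and training compute.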

“Why is putting artificial intelligence on chain a good idea?” asks one piece on this topic. Why not indeed. Think of the possibilities! 😉

[Read more…]

Grey Skies Ahead for Life Extension

The Woke “Canceling” of Aubrey de Grey Portends Nothing Good either for Life Extension, or for Women in Science

The field of Radical Life Extension has been going from strength to strength. Over the past two years, timelines have moved sharply up, after having stagnated for the first decade and a half since Aubrey de Grey introduced his comprehensive anti-aging theoretical framework, Strategies for Engineered Negligible Senescence (SENS), which later became an institution of the same name.

This was both confirmed and reinforced by a flood of venture capital into gerontology and senolytics startups; as of 2018-19, smart money is finally interested. Even more recently, the crypto boom has massively enriched its early adopters, who tend to be highly heterodox and contrarian in their views and skew towards libertarianism and transhumanism. This has translated into massive donations towards longevity research. After years of subsisting on $5M/year, a single crypto airdrop from Richard Heart ($HEX) has netted a cool $28M for SENS. Vitalik Buterin personally donated some of his dogcoins. Another group of people in the field created VitaDAO ($VITA), a decentralized autonomous organization to fund anti-aging research through decentralized voting while making money for its hodlers through the patents and IP it accumulates. Its current market cap is $20M.

In short, this space is suddenly and unexpectedly swimming in a lot of money. And with new money comes drama and hubris.

(h/t Roko for the meme).

Several days ago, a couple of female researchers – namely, Celine Halioua and Laura Deming – made sexual harassment accusations against Aubrey de Grey.

More specifically, Deming says that he sent her an inappropriate work email when she was 17 (“told me in writing that he had an ‘adventurous love life’ and that it had ‘always felt quite jarring’ not to let conversations with me stray in that direction given that ‘[he] could treat [me] as an equal on every other level’”), while Halioua made the even more serious allegation that Aubrey hit on her all night at a SENS dinner while plying her with alcohol and suggesting that she should sleep with attending SENS donors so that they would give Aubrey de Grey more money.

On his part, Aubrey de Grey denies the allegations apart from the email, which he has admitted was “ill-advised.”

For what it’s worth, I don’t know de Grey personally, only having ever met him once and fleetingly at that, but he always struck me as an eccentric but endearing English nerd who likes his cider and proving obscure mathematical theorems in his spare time away from seeking the Philosopher’s Stone. One of the most endearing Anglo archetypes, complete with the Gandalf-like beard. Based on the “evidence” – a single quote from an email nine years ago that was mildly inappropriate and certainly ill-advised, but one which can’t even be unambiguously interpreted as a come-on – I think it’s much likelier that de Grey simply tends to be awkward in interactions with women as opposed to being a “sexual predator” and/or a rambunctious lothario like Cuomo. But it is precisely the former whom women despise at the hindbrain level. In this case, so much so as to level hyperbolic, near Epstein-tier insinuations against de Grey (“sexual harassment & abuse, including against minors”). In light of this, I hope it’s not too imprudent to note that Halioua has previously blogged about an anonymous harasser who she claims ended her academic career at Oxford, saying that “I see pieces of my harasser in other men.”

Now I’m not saying their accusations are false. However, as I have pointed out in my previous commentary, I would guess that there is less sexual harassment in the Rationalism/H+/Radical Life Extension community – a high IQ community overwhelmingly concentrated in high IQ, civilized countries – than almost anywhere else across time, space, and class. Ironically, it’s those very same features that make this community some of the most obsessed with the topic. That, together with the specifics of this case, suggests to me that it is very likely that the accusations are substantially and perhaps entirely ungrounded.

Nor is this just my take as a representative of toxic masculinity. At least two women in the life extension community who have frequently interacted with de Grey in a professional setting – incidentally, and perhaps tellingly, both from Russia – have expressed skepticism over Halioua’s claims. Daria Khaltourina, a polymath cliodynamicist and anti-aging researcher, said in comments to a public Russian transhumanist group that Aubrey de Grey never behaved in an inappropriate fashion either towards her or towards other women that she could observe. In a lengthier comment on Facebook, she said she found the sex-for-SENS-cash allegations particularly incredible: “Also, I find it very hard to believe this particular piece of Celine Halioua “He told me that I was a ‘glorious woman’ and that as a glorious woman I had a responsibility to have sex with the SENS donors in attendance so they would give money to him”. Like really? Recollecting the donors of SENS, they are mostly not this type of people at all. This must have been misunderstanding. I heard no rumor like this particular claim from the Russian H+ community, and we contact with SENS a lot, a lot of H+ women too.”

Anastasia Egorova, who is a co-founder of Open Longevity, laughed it off more boisterously: “Going back to talking about sex with a person you work with. Or even work for. You know how many times I was openly invited to a threesome by a sponsor? Well, not that many, just two. You know what I did? I giggled and was flattered and had an awesome story to tell to my friends.” On a separate note, this is a big part of what I like about Russia, and Eastern Europe generally – this kind of humane, light-hearted liberalism without sexual complexes still exists there, as it did in the US during the 2000s, before it was airbrushed out of existence by that weird neo-Puritan cult that some call the Great Awokening. Egorova also doesn’t pull her punches in questioning the pair’s motives: “Am I ok with sexual harassment? Of course not! But was this the case? Were they harassed or just felt harassed? Or conveniently remembered these stories just after SENS raised $28M to ride this trend? “Hey girl, you know what, I think we forgot to be offended”. And please, let’s not forget the girls are competing for the same funding.”

Again, these are not my words. They are the words of two women, in a country where Wokeism has made fewer inroads than in the West, who know Aubrey de Grey quite well and have interacted with hundreds of men and women in that community. (There may be more testimonies; I wasn’t looking actively.) Now even if we categorically accept the #MeToo directive to always and unquestioningly “believe women”, why exactly privilege Halioua and Deming over Khaltourina and Egorova? That said, I consider it very good that it is prominent women in H+ who have taken the lead in raising questions over Halioua’s and Deming’s claims, cutting the ground from under actual misogynists who are more interested in stirring up hate and conflict between men and women than in human progress. (I would further note the irony that those misogynists would find good situational allies in Western SJWs, who would dismiss these women’s voices as the “internalized misogyny” of Putin’s Russia.)

Another point that some commenters have made is that it is not impossible that there really are baser motives at play here. From VitaDAO’s Discord:

Hours after the scandal broke, VitaDAO removed Aubrey de Grey from its list of Contributors and Partners. Controversially, it was an internal team decision, without a vote on the matter either on the blockchain or even on its own forum/Discord. This would seem to sort of nullify the entire point of a DAO, with its utopian promise of decentralized decision-making (a cynic might say that it just revealed itself as yet another centralized institution in crypto wrappings). Although the decision was almost certainly primarily driven by the desire to avoid a PR backlash, the timing means that residual suspicions about more “pecuniary” motives will remain (indeed, they are widely discussed in the Russian H+ community).

Moreover, as I pointed out to them on Twitter, investors will now be warier of investing long-term into something that folds so quickly to mob pressure. Personally speaking, $VITA is my one coin where I’m unironically “in it for the tech.” Said tech involves not dying this century. It doesn’t involve listening to Dr. Robin DiAngelo’s future lectures on how life extension will immortalize white supremacy and perpetuate systems of oppression against women and POC. To the extent that DAOs are meant to guard against this, it is not the most auspicious start.

One final point. Aubrey de Grey is a polymath genius who has produced original mathematical proofs in graph theory in his spare time from making fundamental contributions to the life sciences. Although he is 58 years old, and people tend to become less productive with age, his inventiveness is such that he might still have many productive results ahead of him – at least if he doesn’t have to waste his time, energy, and emotions on the fallout from his “canceling”. Halioua’s academic pedigree is… somewhat less impressive. Her company specializes in rapamycin trials with dogs, something that is neither new nor particularly innovative. Now certainly no reasonable and fair-minded person would claim that this entitles men such as de Grey to sexually harass women, nor to automatically dismiss claims of such against them. But given the context of the accusations (why after 9 years? conveniently when this sphere is finally flush with money); given that at least two women have stated that Aubrey never even hinted at such behavior in their interactions with them or with other women that they observed; given that the one piece of evidence that has been provided – an email from nearly a decade ago that is a bit edgy but far too ambiguous to even tell if it was flirtatious – is being weaponized to insinuate that de Grey harassed not just women, but minors at that; most of all, given that Halioua has made the truly remarkable and blatantly self-serving claim that “every dollar that goes to Aubrey holds back the field” and actually makes them “complacent in the sexual harassment of minors”:

… why, exactly, are these accusations being taken at face value against not just any man, but the one man who has probably done more than any other to increase the chances that those who live to 2050, also live to 2150?

It’s certainly not a good sign, that’s for sure.

I have previously pointed out that SJWs have already largely destroyed Effective Altruism, up to the point that one of its intellectual fathers, Robin Hanson, was “canceled” from a German EA event in 2020 (whereas four years earlier I got no trouble for trollishly parading about in a MAGA hat at one of their conferences in Berkeley). Unfortunately, I think this vulnerability to SJW subversion is highly acute and indeed innate to the Rationalism sphere. Most of them are highly virtuous people, which makes them unusually susceptible to psychopathic virtue maximizers. As a friend wrote to me in an email sometime in the late 2010s, “I think this is how movements like EA die – not with a bang, or with a whimper, but with a sloshing sound from all the cash and normie status being poured into the feeding trough. Still, it makes for entertaining reading.” From a utilitarian perspective, it will be a tragedy of truly unfathomable proportions if rent-seeking entryists, smelling cash and sinecures, are allowed to repeat their hit on the Radical Life Extension community.

The upside, such as it is, is that it will at least make for entertaining reading on your deathbed.

Could Public Opposition to Life Extension become Lethal?

I have managed to find three polls querying people on their attitudes towards radical life extension. By far the most comprehensive one is Pew’s August 2013 Living to 120 and Beyond project. The other two are a poll of members of CARP, a Canadian pro-elderly advocacy group, and one by Russia’s Levada Center. While Pew and Levada polled representative samples of their respective populations, the average age of the CARP respondents was about 70 years.

On the surface, public opinion is not supportive of life extension. 38% of Americans want to live decades longer, to at least 120, while 56% are opposed; 51% think that radical life extension will be a bad thing for society. Only 19% of CARP respondents would like to take advantage of these treatments, and 55% think they are bad for society. Though a somewhat higher percentage of Russians, at 32%, want either to live “several times longer” or to “be immortal” – as opposed to 64% who only want to live a natural lifespan – their question is phrased more positively, noting that “youth and health” would be preserved under such a scenario.

For now, these figures are a curiosity. But should radical life extension cease being largely speculative and move into the realm of practical plausibility – Aubrey de Grey predicts this will happen as soon as middle-aged mice are rejuvenated so as to extend their lifespans severalfold – public opinion will start playing a vital role. It would be exceedingly frustrating – literally lethal, even – should the first promising waves of life extension break upon the rocks of politicians pandering to the peanut gallery. This is a real danger in a democracy.

Still, there are three or four strong arguments for optimism in those same polls:

[Read more…]

“Transhuman Visions” Conference @SF, Feb 1, 2014

This conference is organized by brain health and IQ researcher Hank Pellissier, and its aim is to bring all kinds of quirky and visionary folks – “Biohackers, Neuro-Optimists, Extreme Futurists, Philosophers, Immortalist Artists, Steal-the-Singularitarians” – together in one place and have them give speeches and interact with each other and the interested public.

One of the lecturers is going to be Aubrey de Grey, the guy who almost singlehandedly transformed radical life extension into a “respectable” area of research, so it’s shaping up to be a Must-Not-Miss event for NorCal futurists.

Also in attendance will be Zoltan Istvan, bestselling author of The Transhumanist Wager, and Rich Lee, the famous biohacker and grinder. The latter will bring a clutch of fellow grinders and switch-blade surgeons with him to perform various modification procedures on the braver and more visionary among us.

Your humble servant will also be speaking. The preliminary title of my speech is “Cliodynamics: Moving Psychohistory from Science Fiction to Science.” Other conference speakers include RU Sirius, Rachel Haywire, Randal A. Koene, Apneet Jolly, Scott Jackisch, Shannon Friedman, Hank Pellissier, Roen Horn, and Maitreya One.

Time/Location: February 1, 2014 (Saturday) from 9:30am-9:30pm at the Firehouse, Fort Mason, 2 Marina Blvd., in San Francisco.

Buy Tickets:

Tickets are on sale from November 1-30 for $35. Only 100 tickets are available due to limited seating. In December tickets will cost $40 (if they’re still available). In January they’ll cost $45, with $49 as the at-the-door price.

To obtain a ticket, PayPal $35 to account # [email protected] – include your name. You will quickly receive a receipt that you can print out as your ticket, and your name will be added to the guest list.

Below is a photo gallery of everyone on the lecture list and some further details:

[Read more…]

A Meeting with Hank Pellissier

This Sunday I had the pleasure of meeting up with Hank Pellissier, who used to work for the IEET, a futurist/transhumanist institute, and is now a blogger-journalist and amateur researcher at the Brighter Brains blog.


As one may glean from the title of that blog, his current area of major interest lies in IQ and how you can bolster (or deflate) it. His most recent book is 225 Ways to Elevate or Injure IQ, the product of four years of research consisting of trawling through and summarizing the existing academic literature on the topic. At the meetup, he expounded upon his work.

Much of it was common sense, or otherwise of no surprise to people who take an active interest in the topic. Some of it was also of doubtful validity, with correlations not always being substantiated by a solid case for causation. But some of it was also new, counter-intuitive, even surprising. Certainly all this material is well worth publicizing and pushing into the public debate, because quite apart from the intrinsic individual benefits of higher IQs, it also leads to more efficient economies, higher technological growth, lower crime rates, etc.

Here is a list of most of these 225 IQ factors from Pellissier’s website. Below is a rough classification and brief discussion of some of the most important and interesting points from his research.

[Read more…]

Navigating The Collapse Map – Transhuman On The Dark Mountain

Heard of the political compass? Well, one enviro person compiled something similar for those who seriously entertain the possibility that industrial civilization will collapse. (H/t Mark Sleboda for pointing me to it.)


Needless to say, the “deniers” are almost as absurd as the “rapturists.” All the business as usual scenarios lead to collapse by mid-century.

“Deep green activism” of the Derrick Jensen variety is not only negative but profoundly futile. Not to mention rather clownish (“Every morning when I awake I ask myself whether I should write or blow up a dam”).

Neither “elites” nor “communities” can have anything to do with “salvation”, which in this context means bringing humanity back within global limits. That is because people are short-sighted, and the elites – be they democratic or authoritarian – have to cater to their tastes to remain in power.

As regards communities in the context of transition/”resilience”, an elementary consideration of human psychology and the history of state formation will show that to be a BS prospect. It just won’t work. Either you have to settle in remote places at the end of nowhere, or you will have to deal with the local warlords, “zombies” (climate refugees), and the harsh realities of a technologically regressed environment itself. In this climate, the most viable and “resilient” political units will be highly militarized, patriarchal, and probably led by strongmen (“He who doesn’t feed his army, will feed another” – Napoleon).

So by the process of exclusion we are only left with (D) Technoutopians, (J) Dark Mountaineers, and (K) Neo-Survivalists.

Neo-survivalism just makes sense at any level, be it individual, familial, or local; it’s always a good idea to hedge against catastrophic outcomes. Even if we magically solve the AGW and general sustainability crisis, there will still be the prospect of economic depressions, or Yellowstone erupting, or air force base commanders obsessed with precious bodily fluids going a “little crazy” in the head… In short, there is no point even arguing against it.


Transhuman on the dark mountain – Romanticism.

While it might sound contradictory, I am also both a Dark Mountaineer* (cool name!) and a Technoutopian.

In the sense that, while I am convinced “business as usual” will lead to collapse, there is a significant chance that civilization will develop real technological solutions to the sustainability crisis, such as effective geoengineering, ubiquitous self-assembling nanotechnology, or the technological singularity.

There is nothing far-fetched or historically unprecedented about this. Historically, some societies solved their Malthusian crises and continued steamrolling ahead (e.g. mid-period Song China; early modern England, which seized on the idea of using coal when its wood ran out; or the biggest example of them all – the Industrial Revolution in Europe). In fact, the new science of cliodynamics suggests that when a society encounters ecological stress, it tends to redouble investments into finding ways of further increasing the carrying capacity (this can be called the “Boserupian Effect”). Of course, for every success story there were multiple failures: the Roman Empire, all the Chinese dynasties prior to the current Communist one, the Mayans, the Easter Islanders, etc.

The 21st century is, as I’ve remarked a few times, basically dominated by a “race of the exponentials” between technology and ecological/civilizational collapse.

And if technology fails, then one must face the spreading desert, the Olduvai Gorge, the Dark Mountain… Here is what its founder wrote:

For fifteen years I have been an environmental campaigner and writer. For two of these years I was deputy editor of the Ecologist. I campaigned against climate change, deforestation, overfishing, landscape destruction, extinction and all the rest. I wrote about how the global economic system was trashing the global ecosystem. I did all the things that environmentalists do. But after a while, I stopped believing it.

There were two reasons for this. The first was that none of the campaigns were succeeding, except on a very local level. More broadly, everything was getting worse. The second was that environmentalists, it seemed to me, were not being honest with themselves. It was increasingly obvious that climate change could not be stopped, that modern life was not consistent with the needs of the global ecosystem, that economic growth was part of the problem, and that the future was not going to be bright, green, comfy and ‘sustainable’ for ten billion people but was more likely to offer decline, depletion, chaos and hardship for all of us. Yet we all kept pretending that if we just carried on campaigning as usual, the impossible would happen. I didn’t buy it, and it turned out I wasn’t the only one.

That’s pretty much the exact realization I reached a year ago: the scenario in which the tossed coin lands on the side opposite the technological silver bullet.

But whatever happens, there’s no point in worrying about it or emotionally overinvesting oneself in it. That is why the Dark Mountain is so appealing. After all, does the beer yeast worry that the booze generated by itself and its fellows will eventually doom them all? Of course not. And you are presumably far more intelligent than a beer yeast.

ARCS Of Progress – The Arctic World In 2050

Editorial note: This article was first published at Arctic Progress in February 2011. In the next few weeks I will be reposting the best material from there.

The Arctic to become a pole of global economic growth? Image credit – Scenic Reflections.

Behold! Far north along the shores of the Arctic a quiver of upspringing settlements fringes the coast. Boats swarm around canning factories, smoke flutters above smelters, herds of reindeer dot the prairies… And here or there, on every street-corner, glimmer out the lights of theaters where moving-pictures entertain white people through the sunless weeks of the midwinter dancing-time, the singing-time, the laughing-time of Eskimo Land.

– Northward ho!: An account of the far North and its people.

In 2003, Goldman Sachs economist Jim O’Neill wrote the now famous paper Dreaming with BRICs, predicting that Brazil, Russia, India and China would overtake the developed G8 nations within a few decades and make astounding returns for faithful investors. The BRICs concept entered the conventional wisdom, spawning a host of related acronyms (BASIC, BRICSA, etc.) – and if anything, realizing its promise well ahead of schedule. Last year, China’s real GDP possibly overtook America’s, and Russia’s approached Germany’s.

Yet for all their successes, the BRIC’s may not fulfill their expected roles as the stars of the global economy in the 21st century. The level of education is horrid in Brazil and atrocious in India; without the requisite human capital, these two countries will find it difficult to rapidly “converge” to developed world standards. China is much better off in this respect, but its high growth trajectory may in turn be disturbed by energy shortages and environmental degradation. China produces half the world’s coal, which is patently unsustainable given its limited reserves. But since coal accounts for 75% of China’s primary energy consumption and fuels the factories that keep its workforce employed, there is little it can do to mitigate this dependence. Meanwhile, China’s overpopulation, pollution and climate change predicament is so well known as to not require elaboration. Many other countries flirting around the edges of BRIC status – Indonesia, South Africa, Vietnam, etc. – face serious challenges in the form of low human capital, uncertain energy and food supplies and a rising incidence of AGW-induced droughts, floods and heatwaves.

There is one global region that may hold the key to resolving these intertwined problems – and even to become a major pole of global growth in its own right. For the most part, it is now an empty wilderness, but climate change is opening it up as potential living space. Its exploitation has the potential to halve the length of global freight transport routes while increasing their security, uncover sizable to gigantic new sources of hydrocarbons and minerals, and stabilize global food prices through the expansion of arable land. Its experience of management and conflict resolution may inspire a global model of cooperation – or it may degenerate into an economic, legal, or even military battlefield over shipping routes and sub-sea resources.

This global region is the Arctic Rim, and its adjoining ARCS: Alaska, Russia, Canada, and Scandinavia. The ARCS of Progress in the 21st century.

[Read more…]

The Brave New World Of Dennō Coil

In the Japanese TV series Dennō Coil, people wear Internet-connected augmented reality glasses and interact with a world that is now split between the real and the virtual. Citizens and netizens become one. The story is set in 2026, some eleven years after the introduction of this technology.

Considering that this series was first conceived of in 1997, the dates are remarkably accurate. Recently it was revealed that Google is working on a “Project Glass” that will become available to consumers for a cool $1,500 from late 2013 or early 2014.

Needless to say, the usual cynics and technophobes have been making fun of the idea, going on about the ethical problems of facial recognition, announcing they will boycott the technology (yeah right), etc. I am unconcerned with all this. As with other mega-trends like global demography or climate change, contrary opinions are like a flimsy shack against an advancing tide – in other words, irrelevant. Fortunately, for the most part, technological revolutions increase wellbeing and are useful anyway.

In my opinion, the decisive technological development of the 2000s was the mass proliferation of cell phones. In the late 1990s, only a small percentage of people in developed countries had access to them, as did a handful of businesspeople and high officials in the developing world. Today they are ubiquitous, with global penetration at over 70%. Apart from making people much more connected – I can barely remember the days when one actually had to make strict appointments in advance – the sector also powered a mini economic boom for designers (Nokia, Samsung, etc.), their manufacturing contractors in China, and the ecosystem of app developers they spawned in places like Silicon Valley.

The augmented reality eyeglass revolution will be of similar or even greater scope. What is now almost unheard of outside the technosphere will begin to break out into the public consciousness by the mid-2010s; substantial numbers of the global middle class will start wearing them by the late 2010s; and by the mid-2020s, this will be a thriving global industry with tons of spinoffs and applications. So much so that a proper name will surely have to be found for these glasses. Intelligent glasses? AReyes? Thinking goggles? Denno glasses? I like the sound of the last one, so I’ll be using it until the term catches on or another replaces it.

[Read more…]