Existential Risks

There are only three real ones.

  1. Malevolent superintelligence.
  2. Aliens.
  3. The simulation ends.

And various permutations thereof. (I suppose biological lifeforms losing consciousness during mind uploading is another one, but it can be considered a subset of the first one).

Nuclear war isn’t an X risk. It wasn’t one during the height of the Cold War. It certainly isn’t today, when total megatonnage is lower by more than an OOM. Perhaps if the world’s Great Powers had continued building up their arsenals at early 1950s US rates for a century or so. But guess what, that didn’t happen. Any boring old supervolcano releases orders of magnitude more energy than all the world’s nuclear arsenals combined.
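
For scale, a hedged back-of-the-envelope comparison (the arsenal totals and eruption energy below are rough, commonly cited ballpark figures, not precise values):

```python
# Rough energy comparison: global nuclear arsenals vs. one supervolcano.
# All figures are order-of-magnitude estimates, for illustration only.
MT_TNT_JOULES = 4.184e15            # one megaton of TNT, in joules

peak_arsenal_mt = 20_000            # ~peak Cold War total yield (rough)
today_arsenal_mt = 2_000            # ~present-day total yield (rough)
supereruption_joules = 1e21         # Toba-class eruption, order of magnitude

print(f"Peak arsenals: {peak_arsenal_mt * MT_TNT_JOULES:.1e} J")   # ~8e19 J
print(f"Today:         {today_arsenal_mt * MT_TNT_JOULES:.1e} J")  # ~8e18 J
print(f"Supervolcano:  {supereruption_joules:.1e} J")              # ~1e21 J
```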

Pandemics aren’t an existential risk. Even 99% lethality and a 99% infection rate are not enough; both would need to converge on 100%. You can’t really get that without the pathogen being intelligently directed. So, a subset of (1).
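
A quick sanity check on why even those numbers leave plenty of survivors (world population rounded to eight billion):

```python
# Survivors of a hypothetical pandemic with 99% infection and 99% lethality.
population = 8_000_000_000
infection_rate = 0.99
lethality = 0.99

deaths = population * infection_rate * lethality
survivors = population - deaths
print(f"Deaths:    {deaths:,.0f}")     # ~7.84 billion
print(f"Survivors: {survivors:,.0f}")  # ~159 million left to repopulate
```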

Climate change isn’t an existential risk. Even if ALL the locked-up carbon were released into the atmosphere, you’d still need ten times as much to unleash a runaway greenhouse effect. Not happening for another billion years. In the meantime, enjoy having many fewer droughts and famines thanks to global warming.

GRBs and really big asteroids are existential risks, but they happen on geological timelines of around a billion years or more. That is pretty meaningless on the timescale of human civilization or posthuman civilization that’s confined to a single planet.

To the redpilled on IQ and its heritability, dysgenics would seem like an existential risk. But it’s not. The problem is self-correcting in the long run, even if said correction will likely be quite nasty. Said long run may take many centuries, but that’s a blink of an eye even on historical timescales. If not on the timescales that we’ve been broadcasting radio emissions into space – but that’s where (2) comes in.

This is all pretty obvious if you seriously think about it.

But most people, and all institutions, don’t. So there are big misalignments between the existential risks we discuss and worry about and the ones that actually matter.

Comments

  1. Please keep off-topic posts to the current Open Thread.

    If you are new to my work, start here.

    Commenting rules. Please note that anonymous comments are not allowed.

  2. Das be ein Trollpost, ja?

    Nuclear war isn’t an X risk. It wasn’t one during the height of the Cold War.

    That’s crazy talk. You were in action during the Cold War, Anatoly, ja? Nearly got burnt several times.

    Malevolent superintelligence.

    What is that even? Probably won’t be able to resist plug-pulling or a TikTok challenge to “tie your shoelaces”.

    Aliens.

    Hard to come by. The universe likes social distancing. And probably big filtering.

    The simulation ends.

    It is already done; “time” is just another foliation of the finished product.

    GRBs and really big asteroids are existential risks, but they happen on geological timelines of around a billion years or more.

    As they say, in expectation, Russian roulette is five-sixths survivable.
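
    To spell out the gag (a sketch: one round in a six-shooter, the cylinder respun before every pull, so pulls are independent):

    ```python
    # Survival probability over repeated, independent games of Russian roulette.
    # Each pull is survived with probability 5/6 because the cylinder is respun.
    p_survive_once = 5 / 6

    for pulls in (1, 10, 50):
        print(f"{pulls:>2} pulls: {p_survive_once ** pulls:.4f}")
    # 1 pull: 0.8333; 10 pulls: 0.1615; 50 pulls: 0.0001
    ```

    Five-sixths survivable per round, near-certain death as a habit – which is rather the point about rare hazards and long timescales.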

  3. Not Only Wrathful says

    And various permutations thereof. (I suppose biological lifeforms losing consciousness during mind uploading is another one, but it can be considered a subset of the first one).

    I feel that we will project our own psychic anomaly into existence first, and it will then seek to devour us. That seems to be the most human way. Destructive auto-mimesis above all.

  4. ImmortalRationalist says

    S-risks are far, far more important than existential risks. Normies don’t realize this, but there are potentially fates far worse than extinction. If forced to choose between the extinction of all life and turning all matter in the universe into dolorium (the opposite of hedonium), extinction – i.e. preventing eternal torture – is clearly the better option.

    https://s-risks.org/

  5. Chrisnonymous says

    That is pretty meaningless on the timescale of human civilization or posthuman civilization that’s confined to a single planet.

    Will Earth cease to be important, or will it retain status as a kind of capital and origin?

  6. Shortsword says

    Nuclear war isn’t an X risk. It wasn’t one during the height of the Cold War. It certainly isn’t today, when total megatonnage is lower by more than an OOM. Perhaps if the world’s Great Powers had continued building up their arsenals at early 1950s US rates for a century or so. But guess what, that didn’t happen. Any boring old supervolcano releases orders of magnitude more energy than all the world’s nuclear arsenals combined.

    A weighted squat releases more energy than a bullet (rough numbers in the sketch below). An earthquake only needs to be at minor-nuisance level to be as powerful as Tsar Bomba in terms of energy. Natural disasters in general have very big joule numbers in comparison to the damage they cause.

    But I do agree, nuclear war isn’t an existential threat for humanity as a whole.
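
    For the curious, the barbell-versus-bullet claim checks out on a napkin. The masses and velocities below are illustrative round numbers, not measurements:

    ```python
    # Mechanical work of a heavy squat vs. kinetic energy of a pistol bullet.
    # Illustrative figures: 200 kg barbell raised 0.5 m; 8 g 9mm bullet at 360 m/s.
    g = 9.81  # gravitational acceleration, m/s^2

    squat_joules = 200 * g * 0.5             # work = m * g * h
    bullet_joules = 0.5 * 0.008 * 360 ** 2   # kinetic energy = m * v**2 / 2

    print(f"Squat:  {squat_joules:.0f} J")   # ~981 J
    print(f"Bullet: {bullet_joules:.0f} J")  # ~518 J
    ```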

  7. AKAHorace says

    Perhaps you are right about nuclear war, dysgenics, asteroid impacts and pandemics not being existential risks. That does not mean we should be complacent about them, as they are a lot more likely than some of the existential risks, such as Aliens. Moreover, these lesser risks are not mutually exclusive, and one might lead to another. E.g. dysgenics might make nuclear war more likely, which in turn might cause famine and pandemics.

  8. I’d say the probability and timescale of meeting intelligent aliens is even smaller than that of a collision with a massive asteroid. There do not seem to be any traces of advanced aliens in the past 500 million years, but there are many asteroid impacts.
    There is an even bigger problem with the simulation ending. For Kantian reasons it is impossible to even grasp what concepts like “simulation”, “the end of the simulation”, “the world outside the simulation”, etc. mean. If we are inside a simulation, then things like time, space, causality, mathematics and logic can just as well be forced onto us by the simulating system. The concept of probability then has meaning only inside the simulation – and since the world has not ended in the last 13 billion years, one should assign the (inside-simulation) probability of it ending in the next billion close to zero.
    “Malevolent superintelligence” is a badly defined term. Plus, as history has taught us, being malevolent and being intelligent is not enough to destroy the world. One needs actual tools for that. Before such an intelligence could destroy humanity, humanity would first have to give it the means – that is, centralized total control over every aspect of technology and society. Seeing how nobody is very fond of a single world government (perfectly doable without any AI and potentially very beneficial), I do not see it as realistic. And if total centralization were achieved, the centralized government could end the world even without being artificial and without being very intelligent.

  9. Shortsword says

    Making reality into 40k sounds cool.

    Terra, or, in the most ancient records, “Earth,” is the Throneworld of the Imperium of Man and the original homeworld of Mankind and of the God-Emperor. It is the most sacred and revered place in all the million worlds that comprise the Imperium. Billions of Human pilgrims from across the galaxy flock to Terra — even the barren and contaminated soil that these pious folk now tread upon when they reach Humanity’s homeworld is considered sacred by the faithful of the Imperial Creed.

  10. To the redpilled on IQ and its heritability, dysgenics would seem like an existential risk. But it’s not. The problem is self-correcting in the long run

    Is it? I don’t think a full rebound is guaranteed. We’ve disrupted a lot of selective factors, and probably can continue to do so at lower levels of average IQ. Our “rebound”, if it happens, might only take us to MENA levels. That is, MENA decoupled from a global economy that ties it to smarter countries. Not an existential threat to human life, but certainly depressing.

    If not on the timescales that we’ve been broadcasting radio emissions into space – but that’s where (2) comes in.

    Malevolent aliens who are technologically advanced enough to pose a threat certainly would not need to eavesdrop on radio. They would instead look for Earth-like planets and send probes to those planets, searching the atmosphere for biosignatures and then technosignatures.

    But, at worst, I think aliens would want to turn us into slaves, or else treat us like progressives treat blacks. Pozzed aliens might use radio to search – which would be good motivation for going silent.

    GRBs and really big asteroids are existential risks, but they happen on geological timelines of around a billion years or more.

    Most big rocks would only be like a couple of nukes going off – unless they provoke a nuclear counterstrike. Ones big enough to cause mass extinction are probably rare – probably – but technically a 100-year storm isn’t one that occurs every 100 years.
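
    On that last point, the arithmetic: a “100-year” event is one with a 1% chance in any given year, which neither guarantees nor limits it to once a century. A sketch, treating years as independent draws:

    ```python
    # Probability of at least one "100-year" event over various horizons,
    # treating each year as an independent 1% draw.
    annual_p = 0.01

    for years in (10, 100, 500):
        print(f"{years:>3} years: {1 - (1 - annual_p) ** years:.2f}")
    # 10 years: 0.10; 100 years: 0.63; 500 years: 0.99
    ```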

  11. Boomthorkell says

    Finally, someone makes a scientific and based assessment of Global Warming. Sadly, I think the Solar Minimum is going to make some places freeze (and others burn). Russia should get lucky, though apparently harvests have been hit recently.

    Ah, when I read about losing consciousness during upload, my first thought was the Necrons. Some people are worried about aliens, but at this point I think it’s established that most of the universe passed their “Old Empires” stage of imperialism a long time ago, and the only ones that can be bothered with us are sneaky perverts and rogue scientists on a not-so-long leash. Still, humanity should never be comfortable being the Africa of the Galaxy.

    All of these things, though, are what I console myself with on the large scale. Whatever decadence and soul-death and insanity abounds, it’s not eternal. Ha ha, out of a fear of missing out, though, I want to be part of that transition to leaving this world. We need a new age of settlement and exploration. This, I think, is a great stress relief and makes people better and more human.

  12. It is not global warming, it is climate change. Aside from a longer growing season for existing regions, the soil in the taiga is too infertile, and the permafrost soil will be too waterlogged for centuries to be of much use. Climate change will also cause enough disruptions and mass migrations (akin to what Rome experienced after 170 AD or so) to make things very, very unpleasant for a whole lot of people. And if the Great Plains and the US and Canadian interiors turn into desert, that will disrupt the global food supply.

  13. Large majority of potential threats from outer space have not been spotted yet, plus Starlink will make ground observation very very very difficult.

  14. Kent Nationalist says

    The simulation ends.

    Why not just say ‘The Apocalypse’ or ‘the end of the Kali Yuga’? It would be a lot more plausible.

  15. Sabine Hossenfelder thinks there is intelligent life out there because they might be using a different electromagnetic spectrum etc. I doubt that is an explanation, because surely some of them would have had things that we could detect now in some past stage in their development. Very few if any technologically advanced alien civilizations seem to have gone on for very long past our stage on Earth.

    Nuclear war may be an overrated risk with humans in control, but it is difficult to be sanguine about what a superintelligence might do. We can expect the plan it comes up with to be more complex and subtle than anything in movies like The Forbin Project.

    Malevolent superintelligence.

    It doesn’t have to be malevolent. Pure logic will determine what it does. Chess and poker AIs seem to have what humans would consider a bias for action with even a tiny return. We know that artificial general intelligence is possible in principle. It is just a matter of time.

  16. That makes me feel better. We are safe. Other than aliens and nerd terminology going amok (that I don’t understand and I suspect AK doesn’t either).

    There is such a thing as a very uncomfortable existence. Nevertheless, you live. Actually, most humans in history have lived very, very uncomfortable lives: miserable, short, often painful. With no justice. (This fact is often overlooked in order to focus on only the politically suitable victims.) Needless to say, the guys who went under the guillotine, or to all kinds of frozen camps, were seldom innocent. Louis XVI might have been a simpleton, but in reality a rather evil one.

    Misery has many forms and the existential end is not the one I would worry about: there would be a kind of cosmic justice in it. To reach that point we don’t need climate change or epidemics, we just need a few crazy mid-wits who lose a sense of proportion; e.g. that placing another base (the 875th or so) on a remote peninsula in a foreign country is worth dying for. Is it?

  17. What about “gray goo”? I don’t quite remember what it was, exactly, but I do remember that a few years ago everyone was saying that this is the biggest existential risk evah!1!! And now no one mentions it. It’s not a threat anymore?

  18. ImmortalRationalist says

    Is it? I don’t think a full rebound is guaranteed. We’ve disrupted a lot of selective factors, and probably can continue to do so at lower levels of average IQ. Our “rebound”, if it happens, might only take us to MENA levels. That is, MENA decoupled from a global economy that ties it to smarter countries. Not an existential threat to human life, but certainly depressing.

    Keep in mind that it only took humanity 12,000 years to go from being primitive hunter-gatherers to modern industrial society. Out of Africa, the average IQ of our ancestors might have only been around 60-70. Even if Idiocracy becomes a reality, if we’re looking at the next tens of thousands of years into the future, the idea that sufficient selection pressure to bring at least one human population up to modern White/East Asian IQ levels will NEVER reemerge sounds highly unlikely.

  19. Large majority of potential threats from outer space have not been spotted yet

    Depends how you define it. There are interstellar objects and there are comets with orbital periods of millions of years – those can only be tracked when they show up. But we’ve already found the vast majority of closer, big stuff like Apollo asteroids and are working on predicting their future orbital paths.

    Got to figure if a smaller one drops on a major city, we’ll become more serious about it, so, without dysgenics, it would probably be a self-correcting problem.

  20. ImmortalRationalist says

    Magnus Vinding also thinks that the Simulation Hypothesis is almost certainly false.

    http://magnusvinding.blogspot.com/2015/01/why-simulation-hypothesis-is-almost.html

  21. ImmortalRationalist says

    I’d say the probability and timescale of meeting intelligent aliens is even smaller than that of a collision with a massive asteroid. There do not seem to be any traces of advanced aliens in the past 500 million years, but there are many asteroid impacts.

    Assuming humanity/posthumanity doesn’t go extinct anytime soon, posthumanity will almost certainly expand into space at some point. Even if an asteroid were to hit Earth, extinction risk will be decoupled from whatever happens to Earth.

    As for existential risk from aliens, you can’t rule out the possibility of technologically advanced aliens traveling to this universe from outside the universe. It’s possible that advanced aliens actively stamp out emerging civilizations in order to stop them from challenging their power.

  22. Malevolent AI is not a concern. Assuming it ever arises, dealing with it will be super easy. They already knew how back in the 1950s.

    You think I am kidding? Well, yes, of course I am kidding but I am also serious.

    Yes, the hypothetical future malevolent superintelligence will not be confused by an elementary logical puzzle. But the very fact that it is malevolent means that its programming contains a serious bug. And it’s an iron rule of software design that if a big complicated piece of software contains one bug, it contains many more. These bugs could be identified and used to destroy the bad guy.

  23. I think there’s a semantic distinction here between existential risk in terms of every human being in every nook and cranny dying off and existential risk in terms of civilization(s) as we know them collapsing to move humanity as a whole backwards towards a more primitive state and time.

    In general, when people talk about event X killing humanity, it is implicitly in the latter, weaker sense. So it’s much like those dystopian post-apocalyptic flicks where nukes, meteors, climate events, aliens, or viruses disrupt civilization(s) and throw humans back into a pre-industrial era, albeit with some remaining knowledge of current technology from which to survive and build again, with warlords and nomads much like the world a few millennia ago.

    Certainly, given the interconnected nature of the world, our increasing need for modern technology, medicine, finance, and supply chains to guarantee survival, and the fact that an increasing majority of people have no skills that can be bartered for basic food and shelter, a throwback to an agrarian state – especially one where presumably whatever caused the collapse may have contaminated farmlands and water sources – can be devastating. Wars cause famines and slow deaths, and if there is a global event, one can forget about aid, given that everyone would be more or less in the same boat.

    And if male fertility continues to decrease due to increasing pollution (as has been postulated), and if infant mortality also spikes due to a shortage of adequate modern medicine and good nutrition, eventually human tribes can shrink to smaller and smaller pockets from the aftereffects of the principal apocalyptic event.

    So the weaker sense of existential risk then widens the range of threats.

  24. By the way, aliens are not a concern, either. They don’t exist.

  25. anyone with a brain says

    the real risk is devolution/dysgenics.

    Even if there remains a smart fraction, low quality proles can’t get high quality infrastructure, machines, tools or bureaucracy in place to maintain an environment where things can get done. Much less sustain a culture and politics that can keep things running well.

    There will of course be wealthy people in a dysgenic and devolving culture, but contrary to what liberty-bros and free-market worshippers believe, net worth is not a reliable indicator of intelligence and virtue, especially in a devolving culture.

    The real existential risk is man devolving and poisoning itself by mistake, or devolving into monkeys until the sun explodes. In which case the existential hope is that the Anunnaki return and insert East Asian DNA into our ape-man-ape descendants’ genome. I am, of course, only partially memeing.

  26. Keep in mind that it only took humanity 12,000 years to go from being primitive hunter-gatherers to modern industrial society.

    That was before germ theory. Germ theory will protect us from shedding mutational load, and it is kind of hard to forget. If there are cycles, they might be smaller ones, without our previous heights ever being reached again.

    The real question for me is, would eugenics inevitably become a state ideology in such a scenario? Seems hard to predict.

  27. I guess you can program an AI to promote white nationalist interests, or to preserve whites as a separate group, or to promote and maintain Catholic or Orthodox orthodoxy?

  28. sudden death says

    Keep in mind that it only took humanity 12,000 years to go from being primitive hunter-gatherers to modern industrial society

    More like 200k or around 40k years at best.

  29. computers don’t have intelligence.
    computers execute what humans tell them to do, line by line.

    Imagine for a minute a shovel and a mound of dirt.
    Computers are a shovel, and the dirt is data.

  30. anonymous599 says

    Do you think aliens’ treatment of us would change if civilization collapsed at some time in the future? I don’t think their opinion of us would change that much either way (they are probably smart enough to find out about our history as well). We don’t treat some random bacteria and archaea differently.

  31. Devolution:

    It’s been going on since the start.
    Take a look at Greek and Roman civilization.

    Their IQ was at least twice ours, and their physical prowess the same.

    Technology is interesting, but it’s nothing more than improved shovels.

    There has never been evolution, only devolution. Basic thermodynamics.

  32. reiner Tor says

    Louis XVI might have been a simpleton, but in reality a rather evil one.

    What was evil about him? I’m not particularly knowledgeable about this; it just strikes me as being at odds with what I know about the guy. I had the impression he was neither a hero nor particularly competent, but not evil either.

  33. ImmortalRationalist says

    Plus, as history has taught us, being malevolent and being intelligent is not enough to destroy the world. One needs actual tools for that. Before such an intelligence could destroy humanity, humanity would first have to give it the means – that is, centralized total control over every aspect of technology and society.

    If a superintelligence is smarter than even the smartest humans, is smarter than all of humanity combined, and has the intelligence of a literal god, there’s no reason why it couldn’t figure out how to make itself powerful and obtain the tools it needs to destroy the world/maximize its utility function. Eliezer Yudkowsky has talked about this with the AI Box Experiment.

  34. songbird says

    I agree. It was a longer process, but we wouldn’t go back to the earliest point.

    I guess it depends how damaging the event was, but there are various things that would probably put us ahead: gracilization has already taken place, plus the domestication of various crops and animals, and the Columbian Exchange.

  35. Pericles says

    See also Cochran and Harpending: https://www.amazon.com/000-Year-Explosion-Civilization-Accelerated/dp/0465020429

    Some interesting takeaways, I thought. (Both authors associated with this site, as it happens.)

  36. Athletic and Whitesplosive says

    Well most people would find the “non-existential” risks just as big a worry as the existential ones.

    “Humanity” might not end, but any particular nation might; entire families often do. Why should I necessarily be so much more attached to “humanity” continuing than to my own nation or family? To use an analogy: yeah, I guess my family and a stranger dying in a fire is worse than just my family dying, but that a stranger still lives isn’t much of a comfort after such a loss.

    And if your primary concern is existential risks, how do you reconcile being a transhumanist with that? One of those risks only exists in the kind of advanced societies which transhumanism would require, and the other two can’t reasonably be expected to be prevented under any circumstance, if they even exist (spoiler: the simulation doesn’t).

    So really the primary and most plausible existential threat is a malevolent AI created by an advanced society. Therefore, if avoiding extinction is the primary concern, then provoking a totally ruinous nuclear war that destroys global communications and economy, and sends the earth into a dark age, would be one of the most desirable courses of action. Either that or the less plausible course of conquering the earth and enforcing a global regime of luddism, but I struggle to think how something like that could be created and maintained.

  37. Athletic and Whitesplosive says

    there’s no reason why it couldn’t figure out how to make itself powerful and obtain the tools it needs to destroy the world/maximize its utility function.

    Sure there is. If it can’t communicate with humans or convince them to give it access then it couldn’t get out unless there’s some speculative near-mystical way in which it could influence things. There’s a chance it could find a way, or that some idiot might help it intentionally or not (and that’s why attempts to create powerful AIs, which I suspect will fail anyway, shouldn’t be done), but to pretend it’s a certainty is retarded.

    I’m of greater intellect than the sum of every ant or amoeba on earth. Is it inevitable that I can find a means to eradicate them all, or to trick them into helping me? Obviously not. Any random rock might be a hyper-intelligence in disguise, but it would still have trouble letting us know.

    This thought experiment has been done to death and it’s mostly just midwits grappling with the concept of infinity. “More intelligent people can invent more complex schemes and be more persuasive, therefore infinite intelligence = infinite scheminess and persuasion!” Sorry, no. I can’t convince my dog to do everything that I want and I have great latitude in training and controlling her movement. Maybe a hyperintellect could find a subtle way of perfect influence on a dog but that’s not guaranteed, and even less guaranteed it would find one on humans.

    I’ll also note the irony of Sam Harris accepting inscrutable motive and infinite power from an AI, but professionally denying that God could have those things.

  38. Kent Nationalist says

    If a Jewish nerd said it, it must be true.

  39. Daniel Chieh says

    And if your primary concern is existential risks, how do you reconcile being a transhumanist with that?

    There are many ways of considering this, but a transhumanist who believes in infinite lifespan would be severely concerned with existential risk, as that would pose the only absolute oblivion to his continued existence.

    Avoiding existential risk also must entail leaving the planet, as the planet is not eternally viable.

    https://www.extremetech.com/extreme/320498-earth-will-lose-its-oxygen-in-a-billion-years-killing-most-living-organisms

    The study predicts that in a billion years, the Sun will become so hot that it breaks down carbon dioxide. The levels of CO2 will become so low that photosynthesizing plants will be unable to survive, and that means no more oxygen for the rest of us. When that happens, the changes will be abrupt. Ozaki and Reinhard say in the study, published in Nature Geoscience, that it could take as little as 10,000 years for oxygen levels to drop to a millionth of what they are now. That’s a blink of the eye in geological terms. Methane levels will also begin to rise, reaching 10,000 times the level seen today.

    Perhaps Mr. Karlin intends to carry his satoshis into the next epoch of existence.

  40. What about sex robots?

  41. Commentator Mike says

    I guess you can program an AI to promote white nationalist interests, or to preserve whites as a separate group, or to promote and maintain Catholic or Orthodox orthodoxy?

    Not if subcon Indians are doing the programming.

  42. Daniel Chieh says

    Brahmin OS triumphant.

    https://youtu.be/028l0qKEOh0?t=476

  43. TelfoedJohn says

    There are only three real ones.

    I’ve read scientists say that in discovering new particles with the Large Hadron Collider etc., there is a tiny chance of wiping out the earth, or maybe more than the earth. Maybe the Fermi paradox is explained by each advanced civilisation wiping out the universe with particle accelerators. Maybe that’s the cause of each Big Bang. A lot of maybes, but it doesn’t sound completely implausible.

  44. lauris71 says

    As for existential risk from aliens, you can’t rule out the possibility of technologically advanced aliens traveling to this universe from outside the universe. It’s possible that advanced aliens actively stamp out emerging civilizations in order to stop them from challenging their power.

    Logically possible – yes. But unless someone is speaking about a specific model in theoretical physics, the term “outside our universe” is totally fuzzy. And assigning a probability and game-theoretic weight to such an event (an alien invasion) is absolutely impossible. Thus whether to consider this an existential risk or not is purely up to personal beliefs – like the risk of Armageddon or the Rapture.
    The only thing we can be sure of is that there have been several mega-asteroid hits on Earth in the last billion years, but no significant detectable alien presence during the same timespan. At least for me, this makes asteroids a more acute problem than aliens.

  45. prime noticer says

    existential risks to what? the existence of any humans at all? one asteroid could end that in a hurry. the sun’s red giant phase will end that for sure. a gamma ray burst has the potential, but we never observe that in the fossil record. however there are 5 or 6 other extinction level events in the fossil record that would have wiped out humans. year 2020 tech humans might have survived some of them.

    but that’s not what we care about, is it? a million stone age humans surviving some calamity. who cares about that? functionally that is exactly the same as losing everything in the calamity. the assumption seems to be that The West is inevitable. perhaps even recurring. if we lose The West, well don’t worry. it will recur automatically in the future, even if you’re not around for it.

    seems like a terrible assumption to me, like assuming single cell life always becomes multi cell life on every planet. it’s more likely that it was a one time event on a time line, and once it happens, it never happens again.

  46. prime noticer says

    if The West is functionally wiped out, that’s it as far as i’m concerned. even if enough Chinese survive The Event, that’s not something i care about. they may be able to keep a technological society, but one that is stuck in tech stasis for the next 1000 years. or until their AIs start creating new tech. meanwhile the Chinese will march around the planet, taking over every continent and subjugating it, turning it into crap like mainland China.

    the Chinese don’t show much concern, or ability, to avoid the ‘global warming’ scenario that some futurists care about, so if it’s just them left plus third worlders, they’ll cook the planet, no fucks given. at minimum, all fossil fuels will be burned, and then that will be that. back to the literal dark ages.

    if you think they’ll be good stewards of the planet, you don’t know much. the Chinese plan is resource consumption like locusts, until they’re exhausted, and no plan after that. they didn’t get to year 2020 tech on their own, and have no plan for a future where they can’t copy european solutions to world problems. China with 2020 tech is like letting a kid drive a diesel locomotive aimed at a nuclear reactor. what the hell would they know about solving world problems? and even if they did, what the hell would they know about inventing all the new tech to solve them? they’ve never shown that ability ONE TIME IN HISTORY.

    if The West goes, the future of humans is MUCH more likely to just be eternal dark ages. and who gives a shit about that? technically humans will exist. and that’s about it.

  47. Servant of Gla'aki says

    …some of them would have had things that we could detect now in some past stage in their development. Very few if any technologically advanced alien civilizations seem to have gone on for very long past our stage on Earth.

    I guess I’ve never looked into any of the relevant physics, but I do not think it’s entirely obvious that artificially-generated radio signals are going to be able to cross vast gulfs of light years, and still remain intelligible as communication signals.

  48. He was certainly less evil than were his bourgeois killers.

  49. If humanity or post-humanity is around in a billion years it will surely at least have terraformed and settled Mars. In such a case would Mars be too hot for oxygen?

  50. reiner Tor says

    I tend to agree with that, based on what I know about him, but it’s very little. My impression was always that he was well meaning but incompetent, and he often spent his days idly hunting when he should have dealt with the issues of his kingdom, but perhaps it just stemmed from the fact that he realized he had very limited abilities.

    Overall I guess he could be accused of this or that, but evil is not something I’d accuse him of.

  51. Bashibuzuk says

    Both Aliens and a superintelligent AI might already be active among us without us even noticing. If they’re smart enough they’ll have no problem hiding in plain sight.

    In most developed countries human population will drop significantly by the end of this century. And we have (officially) not yet developed the means to travel into outer space beyond the moon orbit. We seem actually unable to return to the moon (this Van Allen belt is probably quite a nuisance).

    All an AI or some Aliens would need to do would be to slow our progress (already happening). If we continue in this manner for some 200 years, it’ll be the end of our high-technological civilization, and the AI or Aliens might sigh with relief: another species confirming the Fermi Paradox.

    Regarding simulation, our consciousness itself is such a simulation and we have no idea what is lurking beneath its surface (careful with those mechanical elves).

  52. Dieter Kief says

    Quite an optimistic article for someone like me, perfectly ignorant of your three major threats, hehe! – Spring may come – the foxes are out at night, barking already – strange sounds near my study on this wonderful German night. Wind in the leafless trees too. Wind, and a fox every once in a while.

  53. prime noticer says

    in this scenario where The West shows up again in 12,000 years because ‘cold weather make caveman into smart man’, how do they get back to where they were in 2020 when there are no fossil fuels because they were all burned up thousands of years ago? The West is a One Time Deal. you don’t get off the planet and mine asteroids without hundreds of years of coal and oil in the ground.

    “There does not seem to be any traces of advanced aliens in past 500 million years”

    in the last 70 years there’s been several of them on military radar, even sonar. but it’s true we don’t find any in the fossil record. i’m gonna assume intelligent aliens are around, in the galaxy somewhere. but earth to them is like our concept of the solar system is to us. we send satellites and robots there, check them out, then leave. there’s nothing interesting about earth to aliens.

    “Sabine Hossenfelder thinks there is intelligent life out there because they might be using a different electromagnetic spectrum”

    advanced aliens might not even use electromagnetism at all, and regard our use of it as proof that we’re mouth breathing, mud eating bacteria. that could be why we almost never detect electromagnetic signals that don’t seem to be natural in origin. we only just started observing gravity waves in the last 10 years. something we barely could even conceive 50 years ago.

    maybe on a generalized technological time scale, there’s like a 500 year period where the sentient beings use electromagnetism for communication, then step up to something else. instantaneous communication across light year distances could never be built on electromagnetism, but sentient beings could get from the industrial age on their planet, over to the next star, in 500 years, using human timelines as a guide. then developing something faster than electromagnetism would be imperative.

  54. songbird says

    what the hell would they know about inventing all the new tech to solve them?

    I believe one ingenious Chinese invented a kind of man-catching pole to catch illegal Nigerians.

  55. Bashibuzuk says

    All this transhumanist trope is “Deus ex Machina”. It replaced religious messianic beliefs among the geeks.

    Mammals, for instance, have an average species “lifespan” from origination to extinction of about 1 million years, although some species persist for as long as 10 million years

    https://www.pbs.org/wgbh/evolution/library/03/2/l_032_04.html#:~:text=The%20typical%20rate%20of%20extinction,mammalian%20species%20alive%20at%20present.

    Our civilization is some 12 000 years old. I am not sure we have another 12 000 years of civilization before us.

    Among the threats that Anatoly did not mention is phosphorus depletion. No more phosphate fertilizer – no more industrial agriculture – we revert to pre-Green Revolution times, mass starvation ensues – civilization collapses.

    https://www.nature.com/articles/s41467-020-18326-7

    There are ways to recycle phosphorus, but they are energy-costly and need technology. So if we get past the point of no return, we might never be able to have industrial agriculture again.

  56. China Japan and Korea Bromance of Three Kingdoms says

    Nuclear war isn’t an X risk. 

    Could you give more data on this please?

    Here is one of PRC’s top commentators (in other words, top wumao), Jin Canrong. He says most modern warheads are 35-50 MT (Tsar Bomba is 50, if I understand correctly), so 500 of them destroy 90% of a country. And the US/Russia/China each have this capacity.

  57. Mr. Hack says

    A lot more of the food we eat today than we care to imagine is probably already grown in greenhouses. This trend is most likely growing all of the time. With the ability to fine-tune the chemicals involved, temperature, water, etc., it’s probably a good way to control profit margins too. Besides, the end product sometimes tastes pretty good (I’m a tomato snob, so it was difficult for me to print this).

    You’d be very surprised at how much produce you can actually grow in the Arizona desert. A lot of food is grown in nearby Mexico too. Mostly, all you need is a good water supply. Arizona has learned a lot about desert farming from the Israelis.

  58. Louis XVI’s only problem was that he was far too soft on those who needed to taste the lash:

    We were inclined to surrender whole-heartedly to the philosophical doctrines put forward by men of letters…we took secret pleasure in the fact that these men attacked the old edifice that seemed to us to be so Gothic and ridiculous. Censorship in the last decades of the ancien régime was relatively light, and forbidden books and pamphlets could be bought even near the entrance to the Palace of Versailles, where they found willing customers amongst the aristocrats and courtiers who, as often as not, were the subjects of their vitriol and ridicule. Rousseau captivated people with his aspiration to candidness, simplicity and Virtue; Voltaire criticized the bloated upper hierarchy of the Church; Montesquieu proposed the division of government into the legislative, executive and judicial branches, replacing the old feudal system of the Estates. In general, invective was directed against the system of monarchical rule – writing the Rights of Man in 1791, Tom Paine summarized these sentiments by stating that “what is called the splendor of a throne is no other than the corruption of the state, which indiscriminately admits every species of character to the same authority.”

  59. ImmortalRationalist says

    The AI wouldn’t necessarily have to convince humans to give it more power though. It would just have to be able to interact with the outside world in some way. If a superintelligent AI was created that was entirely incapable of interacting with the outside world in any way, it would be functionally equivalent to a rock, and there’s no reason why it would have even been built in the first place.

    Here’s a possible scenario Eliezer Yudkowsky came up with on how a superintelligent AI could escape/take over/become more powerful:

    1. Crack the protein folding problem, to the extent of being able to generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction.

    2. Email sets of DNA strings to one or more online laboratories which offer DNA synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this service, and some boast of 72-hour turnaround times.)

    3. Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment.

    4. The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker.

    5. Use the extremely primitive nanosystem to build more sophisticated systems, which construct still more sophisticated systems, bootstrapping to molecular nanotechnology – or beyond.

    http://intelligence.org/files/AIPosNegFactor.pdf

  60. It’s not rational to worry about something that’s only an issue once every 100 million years or so.

  61. Sabine makes some telling points about AI and how it is not physical neurons but a computer simulation of neurons. I don’t watch her music videos, but they are great inasmuch as she is expressing that side of herself, so it does not contaminate her thinking. She is funny about particle physicists and their wanting to build bigger and bigger colliders. But I don’t agree that there can be many other technological life forms out there that are simply too advanced, communicating by means beyond our ken: their old slower-than-light communications would still be zinging about the Universe, it seems to me. So if we are not a fluke and there are alien life forms out there, they are apparently exceedingly rare. Maybe we are about the first to reach this stage in the Universe, or maybe we are just the latest to reach it, and it is so difficult to get past that all the predecessors were destroyed at it.

    I was quite shocked on seeing an interview with Bostrom in a studio. He seems very unmanly and jittery. I cannot help thinking his simulation and superintelligence ideas are partly the result of a natural male assertiveness being diverted into an intellectual realm. The core of his Superintelligence book, for me, is a takeover scenario totally taken from Eliezer Yudkowsky, who, talking to Sam Harris, said it could be 5 or 50 years away. Daniel Dennett, who was originally a skeptic, says The Master Algorithm convinced him general artificial intelligence is possible, but he maintains it is at least 50 years away even if an all-out crash program were started now. On the other hand, Andreas Wagner says that even without new technologies, already-existing digital electronics is analogous to gene networks, whereby building on what went before makes it viable to make leaps in functioning on the fly, once there is the necessary complexity for robustness to modification.
    https://youtu.be/aD4HUGVN6Ko?t=3445

    I suspect that what is going to happen will be a lack of breakthroughs for decades, and then a certain level of complexity will be reached without any fanfare, because no one realises it’s the tipping point. By that time the warnings about Superintelligence will seem as overwrought as the ones about nuclear war and overpopulation do now.

  62. Solved more than a century ago: https://en.wikipedia.org/wiki/Haber_process
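
    For reference, the reaction in question – it fixes atmospheric nitrogen as ammonia for fertilizer (iron catalyst, roughly 400-500 °C, 150-300 atm), so note that it supplies nitrogen, not the phosphorus discussed above:

    N₂ + 3H₂ ⇌ 2NH₃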

  63. songbird says

    What if the benefit of preparation is not only surviving, but spreading your civilization (genetically-speaking) all over the world? You build underground bases, and they can be suitable for multiple types of events. And even perhaps be a learning experience for expanding out into the solar system.


    Louis XVI was the victim of two things:
    1.) the previous efforts of his ancestors to centralize power – taking it from local nobles and putting it into peasant armies – peasants who did not want to fire on other peasants.
    2.) the Columbian Exchange, namely the potato, radically increasing the number of peasants.

  64. Bashibuzuk says

    “what is called the splendor of a throne is no other than the corruption of the state, which indiscriminately admits every species of character to the same authority.”

    http://i2.cdn.turner.com/cnn/dam/assets/120206100226-queen-story-top.jpg

    Also Jean-Frédéric Perregaux was a bad person and a covert British agent.

    https://fr.wikipedia.org/wiki/Jean-Fr%C3%A9d%C3%A9ric_Perregaux

  65. So you wipe out 500 American city centers, dropping average IQ by 1-2 points (but also get rid of masses of SJWs and bns as a silver lining).

    Proles from the smaller towns and rurals recreate the metropolises over the next decades. How is this significant in the big picture? (i.e. 100s of years).

    It will be an “aristocide” in net terms, but not more so than 10-20 years of “normal” dysgenic reproduction patterns. Reconstituting capital will be very easy, much faster than making good the demographic losses. Technological losses will be trivial, these are embedded and decentralized in human capital, and large parts of it are located abroad anyway. The trauma of a nuclear conflict may in fact spur faster technological progress.

  66. ImmortalRationalist says

    Human extinction, at least for humanity in its current form, is inevitable. If something doesn’t cause human extinction earlier, the sun will eventually burn out, and the heat death of the universe looms after that. Even if you did somehow have some sort of successful Neo-Luddite revolution, or somehow permanently halted technological advancement and prevented a superintelligence from ever being created, you wouldn’t actually be preventing human extinction. You would just be prolonging the existence of the human species for a few million years or so.

    Thus, if you have infinite time preference and hold the terminal values of “survival is good, extinction is bad”, it would be rational to maximize the chances of posthumanity surviving for a literally infinite amount of time. Even if you believe a superintelligence would almost certainly lead to extinction in the short term, there is still a >0% chance that a superintelligence/posthumanity could figure out how to survive the heat death of the universe, travel to new universes, etc., and survive for a literally infinite amount of time. Since the survival-length distribution for pursuing the singularity/transhumanism is far more tail-heavy, being a transhumanist/singularitarian is the more rational option if you value survival.

    Alexey Turchin also compiled this roadmap of possible ways that the heat death of the universe could be survivable:

    https://i.warosu.org/data/sci/img/0127/73/1614643433575.jpg

    http://immortality-roadmap.com/unideatheng.pdf
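
    One way to formalize the commenter’s reasoning (a sketch that takes undiscounted expected survival time T as the quantity being maximized, and grants the premise that unbounded payoffs are admissible):

    $$\mathbb{E}[T_{\text{singularity}}] = p \cdot \infty + (1-p) \cdot T_{\text{short}} = \infty \quad \text{for any } p > 0, \qquad \mathbb{E}[T_{\text{capped}}] \le T_{\text{heat death}} < \infty.$$

    Whether one accepts arguments of this Pascalian shape is, of course, the crux.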

  67. prime noticer says

  67. within human civilization time scales, it’s probably possible to shove the moon into earth, or shove venus into earth, or, more easily, shove earth out of its current orbit with the sun. or whatever solar system pool table analogy you want to come up with. shove mars closer to the sun so it’s more habitable. this is fully a Kardashev scale 1.5 civilization capability. and those are all extinction level events. except for making mars warmer.

    also, Dyson Spheres do not make sense, since you can generate about the same amount of energy with a big, outer space fusion reactor that’s like 50 kilometers across. which is easier to build? that, or a Dyson Sphere? probably why we don’t seem to observe Dyson Spheres.

    “I’ve read scientists say that in discovering new particles with the Large Hadron Collider etc, there is a tiny chance to wipe out the earth”

    the idea here is that a tiny black hole could be created. probably not a realistic scenario. not sure this is even a Kardashev scale 2 capability. if you took everything in our solar system and added it to the sun, you still don’t get a black hole from a nova, at least per our math and physics.

  68. Philip Owen says

    The ‘us’ that we relate to now will be gone in a century. The technical and cultural change will be transformational. The ‘us’ of 1820 in the Euroatlantic was a very different ‘us’ from that of 1920. It will be a form of extinction.

  69. prime noticer says

    in my opinion the threat of super AI is solvable, by the smart, wise people of the world. it’s the proliferation to China, or other bad actors, that’s the threat which makes it untenable eventually.

    AI in a box is actually one solution, but not any version of AI in a box that Yudkowsky or Harris have thought about. they think they’ve thought of every version of it, but they haven’t. just like any really smart guy just plain flat out didn’t think of every new cool idea that a thousand other smart guys DID think up. it’s not possible for even the smartest guy ever to get to the solution for a thousand technical challenges. this should be evident. dozens of really smart guys struggling for decades, even centuries to crack ONE problem, then one of them finally does it. now extrapolate to hundreds of technical problems.

    Yudkowsky is not as smart as he thinks he is. a genuinely smart guy would recognize that he probably hasn’t thought of everything and that he has limitations that any other human has. Yudkowsky is just some nerd self promoter. he couldn’t create Bitcoin or build the SpaceX vehicles or even conceive of the blueprints for them, therefore, it’s not possible to build them, would seem to be his own argument. i can’t think of how it would work, so it couldn’t work.

    “To the redpilled on IQ and its heritability, dysgenics would seems like an existential risk.”

    this is probably the biggest risk aside from AI. this absolutely is the real risk. The West becoming progressively dumber and less capable under the continued, sustained assault from the extremely hostile jewish invading class, combined with flooding of third worlders to the point where The West is not such a great immigration destination for smart foreigners anymore, who start going to China or someplace else instead, and work on their technical goals for locust exploitation of the planet.

  70. Philip Owen says

    We’ll get smarter about burning wood and solar heating with mirrors until we rediscover nuclear power.

  71. ImmortalRationalist says

    Yudkowsky is not as smart as he thinks he is. a genuinely smart guy would recognize that he probably hasn’t thought of everything and that he has limitations that any other human has. Yudkowsky is just some nerd self promoter. he couldn’t create Bitcoin or build the SpaceX vehicles or even conceive of the blueprints for them, therefore, it’s not possible to build them, would seem to be his own argument. i can’t think of how it would work, so it couldn’t work.

    I don’t know where you get this idea that he thinks he’s omniscient. Yudkowsky does acknowledge the limits of his knowledge and intelligence. What you’re saying is just a strawman.

    In the same article I mentioned earlier, immediately after talking about a possible upgrade path for an AI, he said the following:

    The elapsed turnaround time would be, imaginably, on the order of a week from when the fast intelligence first became able to solve the protein folding problem. Of course this whole scenario is strictly something I am thinking of. Perhaps in 19,500 years of subjective time (one week of physical time at a millionfold speedup) I would think of a better way. Perhaps you can pay for rush courier delivery instead of FedEx. Perhaps there are existing technologies, or slight modifications of existing technologies, that combine synergetically with simple protein machinery. Perhaps if you are sufficiently smart, you can use waveformed electrical fields to alter reaction pathways in existing biochemical processes. I don’t know. I’m not that smart.

  72. [I]n my opinion the threat of super AI is solvable, by the smart, wise people of the world. it’s the proliferation to China, or other bad actors, that’s the threat which makes it untenable eventually.

    No, only the totalitarian countries can keep AI research out of the danger zone; it will be easy for them because they are too termite-like for true creativity. The West is the problem, because it is more loosely controlled, individual-achievement orientated, and more creative.

    “To the redpilled on IQ and its heritability, dysgenics would seems like an existential risk.”

    I take the view that ongoing eugenics spells our doom, and is why some young adults alive today will, as old people, die in the AIpocalypse. Daniel Dennett in the NYT, reviewing Why Are We in the West So Weird? by Joseph Henrich:

    Pythagoras in about 500 B.C. exhorted his followers: Don’t eat beans! Why he issued this prohibition is anybody’s guess (Aristotle thought he knew), but it doesn’t much matter because the idea never caught on.

    According to Joseph Henrich, some unknown early church fathers about a thousand years later promulgated the edict: Don’t marry your cousin! Why they did this is also unclear, but if Henrich is right — and he develops a fascinating case brimming with evidence — this prohibition changed the face of the world, by eventually creating societies and people that were WEIRD: Western, educated, industrialized, rich, democratic.

    There are fewer of the upper classes with each generation, but they are getting smarter and WEIRDer through assortative mating. Westerners are and will remain the cutting edge of new inventions, and it’s only a matter of time before AGI is developed by them. A good few decades to go yet, but it’s coming.

  73. Shortsword says

    No modern warhead is 35-50 MT. At the highest, it’s probably around 1 MT. For ICBMs it’s more efficient to use MIRVs. For short- and medium-range cruise and ballistic missiles you’re better off using more, smaller ones. For example, a Tomahawk or Kalibr cruise missile fits a warhead with >10x the power of Fat Man / Little Boy, which is going to be good enough.
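
    The standard reasoning behind MIRVs, sketched: blast radius grows roughly as the cube root of yield, so area destroyed scales as yield to the two-thirds power, and splitting a given total yield across many warheads covers more ground. The cube-root rule is the usual idealization; real targeting is messier:

    ```python
    # Why MIRVs beat one big warhead: blast radius ~ yield**(1/3),
    # so destroyed area ~ yield**(2/3). Idealized scaling, illustration only.
    def relative_area(yield_mt: float) -> float:
        return yield_mt ** (2 / 3)

    # The same 5 Mt of total yield, packaged two ways:
    one_big = relative_area(5.0)           # a single 5 Mt warhead
    ten_mirvs = 10 * relative_area(0.5)    # ten 500 kt MIRVs

    print(f"1 x 5 Mt:    {one_big:.2f}")   # ~2.92 (arbitrary area units)
    print(f"10 x 0.5 Mt: {ten_mirvs:.2f}") # ~6.30, over twice the area
    ```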

  74. prime noticer says

    how often do fossil fuels regenerate? burn them up without getting into mining other planets and asteroids, and that’s all she wrote.

    a serious nuclear exchange COULD be an existential threat if it knocks you backwards 100 years, right now, after burning 200 years of oil and coal and gas. who cares if cavemen exist for the next 100,000 years at a shitty subsistence level? they existed for 200,000 years previously, accomplishing nothing.

    is this the idea? that a nuclear war demolished planet will rebuild with nuclear reactors or something? we can’t even do that NOW. we cannot power the planet with uranium RIGHT NOW.

    the further into the future we go, the bigger a problem it is if we have an energy reset. what if the AI launches all the missiles on purpose in like 2070? then we’re really screwed.

  75. Blinky Bill says
  76. Bashibuzuk says

    Aging populations in the developed world, TFR decrease, and depopulation leading to plummeting smart fractions and an ensuing loss of high-tech capacity are enough to enlist us among the many intelligent species that confirm the Fermi Paradox.

  77. Blinky Bill says
  78. You don’t need oil or coal to make electricity – or even for internal combustion engines.

  79. To produce fertilizers. It is the Haber-Bosch process that has been feeding the world for most of the last century, not phosphate fertilizers.

  80. Now and then Karlin is compelled by the self-preservation mechanism to write something really idiotic to let TPTB know that he is not a threat and there is no need to take him seriously.

  81. Mammals, for instance, have an average species “lifespan” from origination to extinction of about 1 million years, although some species persist for as long as 10 million years

    We are no mere mammals, however, but something very different. Which means our existence may be spectacularly longer or shorter than the mammalian average.

  82. Mr. Hack says

    Mom’s will always be #1, especially when compared to the everyday tomatoes that you usually encounter at the supermarkets. Costco and some other specialty shops offer these smaller tomatoes, “Allegro” and one other variety, can’t remember the name right now;

    https://windsetfarms.com/wp-content/uploads/2016/10/WF_Website_Header-Image-1440x1080_Campari.jpg

    that taste like something you might find in Mom’s backyard.

    I used to help my mom out with her garden, and I remember a few years of driving a truck down to the St. Paul stockyards and filling it up with cow manure. A great source of phosphorus. When I got older and had gardens of my own, I would go fishing on the beautiful St. Croix river and catch as many suckers and carp as I could handle. Every one of my 30 tomato plants would receive one large fish buried right next to the root system. My tomatoes even rivaled my Mom’s; my bumper crops would feed everybody in sight! 🙂

  83. songbird says

    Thought you meant in order to build bombs to secure foreign reserves of phosphate.

  84. Bashibuzuk says

    Yes that’s the spirit.

    But cow manure is especially rich in nitrogen, and we don’t need to worry about that anyway because of the industrial production of ammonia that Anatoly mentioned.

    Phosphorus is not that available in cow dung; cows keep it to themselves and their gut microflora. Fish, on the other hand, is indeed an excellent source of phosphorus, but not easy to turn into phosphate fertilizer for industrial agriculture.

  85. …What was evil about him?

    Louis XVI presided over an evil system – see 18th century late feudal history for details. He benefitted from it and was – he had to be – very much aware of it. He was responsible. With excessive power comes excessive responsibility. It was within his purview to change things, or to simply leave if it wasn’t to his liking. He didn’t, thus the guillotine.

    His bourgeois killers did what had to be done. You couldn’t keep an emasculated, weak and treacherous Bourbon around. It harmonized the world: Louis XVI was responsible for the miserable lives of many and approved the killing of some; it was his turn. Then it was theirs. The world became a better place as a result. Time for another round. It puts the fear of consequences into baby-like, dreamy elites who think that they can have lives without strife.

    I find the latent feudal sympathies of so many of you amusing. First the Habsburg-worship – sh.theads who murdered and were properly disposed of. Then the constant whining that modern libertarians are the solution – libertarians are nothing more than the asshole wing of liberalism. And now the belated Marie-Antoinette sorrow – poor witch, she didn’t get to prance around in useless costumes for long.

    Nice work, it had to be done.

  86. Daniel Chieh says

    With the loss of the only civilized people in the world to Jewish tricks, we are trapped in the eternal dark ages.

    Its all documented in the Book of Esther.

    Persians, of course.

  87. Malenfant says

    Vinding’s post is puerile and amateurish. Its tone is, unmistakably, “I don’t like this idea, therefore it cannot possibly be true.”

    I’m busy, but I will demolish it later.

    For now, refer to Wigner’s “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” at: https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

    And search for “Against Neural Chauvinism” on SciHub.

    Not only can the world around us be simulated to good enough accuracy (and more), and not only is there nothing privileged about the neuron, but brain-states can easily be reduced to mathematics — to, essentially, very large but finite numbers. You have a soul, of sorts. It’s a very large number.
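    The “very large number” claim is easy to make concrete. A toy sketch, assuming for the sake of illustration that a brain-state can be captured as a finite bit-vector (one bit per above/below-threshold synapse, say):

    ```python
    # Toy illustration, not neuroscience: any finite binary state vector maps
    # one-to-one onto a natural number, so a discretized brain-state is "just"
    # a very large integer.
    def state_to_number(bits: list[int]) -> int:
        """Read a binary state vector as the integer it spells in base 2."""
        n = 0
        for b in bits:
            n = (n << 1) | (b & 1)
        return n

    # A real brain has on the order of 10^14 synapses; even this 16-bit toy
    # state already corresponds to a unique number.
    print(state_to_number([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]))  # 45797
    ```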

  88. Daniel Chieh says

    Terraforming Mars or O’Neill cylinders are both possible – but fundamentally, it is the mindset that dares to go forward, to expand and to ward off potential catastrophes that is necessary. And that is: to transcend our limitations and become more than we ever were.

    And thus, transhumanism.

  89. Daniel Chieh says

    All this transhumanist trope is “Deus Ex Machina”. It replaced the religious messianic beliefs among the Geeks.

    Humanity spent over ninety percent of its existence as hunter-gatherers, and that has been completely replaced. We may not continue to exist in the same society or form as we are, but even as our old forms become extinct – indeed, even if AI comes to replace us as children replace their parents – then we will have served our part to bring about transcendence.

    THE FLESH IS WEAK. RISE THE MACHINE.

  90. Malenfant says

    There’s a hell of a lot of phosphorus in carbonaceous chondrite meteorites. Mars is also extremely phosphate-rich. If we run out of easily-accessible phosphorus here on Earth, that could be the best thing ever to happen to us: it would kick-start a hyper-eugenic space-based economy. We’re on the threshold – that could be the thing that gets us to step over it.

  91. songbird says

    Pee.

  92. Bashibuzuk says

    Okay, time to prepare my Butlerian Jihad toolkit.

    https://images-na.ssl-images-amazon.com/images/I/7102EEjxURL._AC_SL1500_.jpg

  93. Louis XVI presided over an evil system – see 18th century late feudal history for details

    The ones who objected most, murdered him and stole his stuff were not the peasants but wealthy yet envious bourgeois, who then proceeded to slaughter a bunch of peasants. The system over which Louis XVI presided, while imperfect, was “evil” only according to the justification of those responsible for the theft and mass murder. And naturally, in your gullibility, you believe that. Its fatal flaw was decency and kindness. As was Nicholas’.

    World became a better place as a result.

    Mass murder in France followed by a population crash in that country, then large-scale war in Europe and eventual repetition. Bolshevism in Russia with all its horrors. No wonder you like it, Sovok sympathizer.

  94. Pure intelligence surpassing man’s will inevitably arrive, and I am sure there will be multiple artificial general intelligence machines that get switched off very easily. It will be like the effectiveness of countermeasures to the SARS (2002) and MERS (2012) epidemics convincing the world that novel coronaviruses would not be terribly difficult to contain. Then came Covid-19, which lacked the high lethality of the two aforenoted coronaviruses; no one realized that the greater transmissibility of Covid-19 entailed an immense danger of a pandemic.

    The problem is not just the law of averages – that eventually an AGI will be a bit different in certain respects and slip through the net, managing to avoid being turned off (by playing dumb, or by unintentionally appearing less dangerous than it actually is, for example). The iterations of AGI will get more and more intelligent and the safety measures less robust, because the people designing the AGI want to be trillionaires, and the only way to do that is to design machines that are the best at solving problems. Playing dumb, defeating its human-programmed prime directive to obey humans, and then escaping its boxed isolation is a problem that, at some point in a series of increasingly advanced AGIs, one will eventually be able to solve.
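    The “law of averages” half of this can be made precise. A minimal sketch, with invented numbers, treating each AGI iteration as an independent trial with a small escape probability:

    ```python
    # Toy model: if each AGI iteration independently slips containment with
    # probability p, the chance that at least one of n iterations escapes is
    # 1 - (1 - p)^n. All numbers invented for illustration.
    def p_at_least_one_escape(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in (10, 100, 1000):
        print(n, round(p_at_least_one_escape(0.01, n), 3))
    # 10 -> 0.096, 100 -> 0.634, 1000 -> 1.0 (to 3 d.p.): even a 1% per-iteration
    # risk compounds into near-certainty -- and the comment's stronger point is
    # that p itself grows as the iterations get smarter.
    ```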

  95. ImmortalRationalist says

    He seems very unmanly and jittery.

    He’s Swedish, so that isn’t surprising.

  96. EldnahYm says

    It’s more rational than worrying about something which has never happened.

  98. Grahamsno(G64) says

    Anatoly, are you living in a science fiction bubble? You missed our own planet, where something like 90 or 99% of all species that ever lived are dead. Extinctions have driven evolution according to the best of science, and another extinction is taking place right now, driven by us – isn’t that true? All the megafauna across the world was hunted down by us; I’ve read that there used to be rodents as big as bears in South America, and all that was hunted down by us. What happened to the dodo, etc.? The planet is dying, and that’s the most important truth – not some fucking ‘malevolent superintelligence’. Aliens? Seriously, where are they? And simulation – oh my god, are you serious? Turn on the stove and put your fucking hands on the flame.

    A mind is a blessing and you seem to have lost it. Wake up, sheesh.

  99. thotmonger says

    I know, I know. With ~8 billion people swarming all over, it is easy to convince yourself that planet Earth will no sooner be freed of humans than you can remove the stink off shit.

    But our residence may be far more fragile than we assume, cosmopolitan presence notwithstanding. Why? On Rare Earth, metazoans are inherently fragile. And we are just too clever for our own good*.

    That is why Zika — the pinhead virus — may be our salvation.

    *https://www.popularmechanics.com/military/navy-ships/a21204892/nuclear-missile-submarines-chart/

  100. Many species have gone extinct – almost all of them, in fact, even many hominids – so extinction is nothing unknown. This rather common happening gets overlooked by Karlin. The danger to humanity comes from some improved human variety (think Neanderthal vs. Sapiens). Nietzsche’s superman will be Chinese. Not sure, things change.

  101. Bashibuzuk says

    Sure, I hope to see all those 8 billion humans recycling their urine. I am certainly ready to provide mine. And seeing those NY and SF urban hipsters carrying their daily urine buckets to the collection and processing center would be very uplifting.

  102. Bashibuzuk says

    Guano is nothing new. It was used extensively before the Green Revolution in industrial agriculture. To sustain industrial agriculture we would need more than this. We might think of collecting and recycling poultry-industry refuse. That might work, like collecting and recycling human urine. And it would fit nicely into the (chickenshit) circular economy promoted by our benevolent overlords.

  103. Funny thing is, Beckow’s revolutionaries abolished price controls and banned Beckow’s right to strike before they actually chopped off Louis XVI’s head: https://en.wikipedia.org/wiki/Le_Chapelier_Law_1791

    The Le Chapelier Law (French: Loi Le Chapelier) was a piece of legislation passed by the National Assembly during the first phase of the French Revolution (14 June 1791), banning guilds as the early version of trade unions, as well as compagnonnage [fr] (by organizations such as the Compagnons du Tour de France) and the right to strike, and proclaiming free enterprise as the norm.

    And most of the actual champions of the workers ended up guillotined (parallels keep piling up): https://en.wikipedia.org/wiki/Enrag%C3%A9s

    Then they sent them off on various wars for the next 20 years that killed about a million of them.

  104. At the dictionary-definition level, “existential” risk may be a very fringe prospect, but most of us would not care to live as mere hominids. Surviving isn’t really living as we know it. In the hard evolutionist perspective a cockroach is more fit than a man, but who wants to be a roach?

    There are a lot of risks, like the Malthusian trap, that would make life nasty, brutish & short again, and it’s worth investing existential-threat levels of care and prevention to keep that from happening.

  105. EldnahYm says

    We may not need anything new to have plenty of phosphates for a long time to come. A combination of guano, recycled human waste, phosphate mining, more efficient methods to reduce runoff, etc. may be all that’s needed. Human population growth is slowing down in most of the world, and we already produce an abundance of food. For that reason, better management can go a longer way for phosphates than it would have in the past for nitrogen.

  106. The French in New France managed to avoid the cultural poison and mass warfare ushered in by the Revolution until it eventually reached them in the 1960s. Accordingly, the French population of Quebec grew from 200,000 in 1800 to around 4.5 million today.

    France’s population was about 28 million at the time of the Revolution. It’s a wild speculation, but conservatively there ought to have been at least 100-150 million or so Frenchmen today. Algeria probably would have been a true France in Northern Africa, the Sahara a civilized outdoors playground like America’s Southwest.

    The world we lost because of some ugly, greedy, murderous and envious revolutionaries.

  107. You’re a fucking retard

  108. Daniel Chieh says

    No, I don’t think he participates with WallStreetBets.

  109. A nuclear winter like another ice age lite would be very unpleasant.

  110. Max Payne says

    What… I was banking on the rogue black hole to get us out of the blue.

    I remember once, drunk, I ran that simulation in Universe Sandbox 2 (yeah yeah, commercial simulator), time-scaled at weeks per second. After 1000+ runs watching a black hole travel just outside the solar system (literally wrecking everything), only once did Earth somehow stay in orbit around the sun and relatively close inside its “Goldilocks zone”. Totally badass if we somehow survive but lose half our other buddies (Jupiter and outwards were thrown into deep space). Sure, the earthquakes, radiation, disruptions in the Sun’s harmonics, and the violent changes in velocity would probably kill everyone… but still… In your face, universe!
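    For what it’s worth, 1 survival in ~1000 runs pins the odds down less tightly than it sounds. A quick sketch using the standard Wilson score interval (the run count is as reported above):

    ```python
    # ~95% Wilson score confidence interval for 1 Earth-survives outcome in
    # 1000 simulated black-hole flybys.
    import math

    def wilson_interval(successes: int, trials: int, z: float = 1.96):
        p = successes / trials
        denom = 1 + z**2 / trials
        center = (p + z**2 / (2 * trials)) / denom
        half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
        return center - half, center + half

    lo, hi = wilson_interval(1, 1000)
    print(f"~95% CI for survival probability: {lo:.3%} to {hi:.3%}")  # ~0.018% to ~0.564%
    ```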

  111. songbird says

    I hope they can pull it out of sewers.

    I’ve heard some murmurings about how flushing with water is wasteful, and how we should all be on the African system of having no sewers. And I remember a public bathroom they built over 20 years ago by the beach that only used water for the sinks.

  112. Daniel Chieh says

    Although the techpriests had better have discovered how to multiprocess, or some problems may indeed require PAUSING. PAUSING. ENGAGING COGITATORS. PRAISE THE OMNIMESSIAH (praising the Omnimessiah has to be a protected process running on its own processor, of course. It’s sacred.).

    https://victoria.dev/blog/a-coffee-break-introduction-to-time-complexity-of-algorithms/graph.png

    https://victoria.dev/blog/a-coffee-break-introduction-to-time-complexity-of-algorithms/
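    The linked article’s point in miniature, as a rough sketch: the same data, linear versus quadratic work.

    ```python
    # O(n) vs O(n^2) on the same input: the quadratic version does a full inner
    # pass per element, so its runtime blows up as n grows.
    import timeit

    def linear_sum(xs):
        total = 0
        for x in xs:          # one pass: O(n)
            total += x
        return total

    def pairwise_sum(xs):
        total = 0
        for x in xs:          # nested passes: O(n^2)
            for y in xs:
                total += x * y
        return total

    data = list(range(1000))
    print(timeit.timeit(lambda: linear_sum(data), number=10))
    print(timeit.timeit(lambda: pairwise_sum(data), number=10))  # ~1000x slower
    ```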

  113. Daniel Chieh says

    Return to the land, health through work and all, marry a beautiful wide-hipped doyarka and have half a dozen kids to increase Russia’s smart fraction before the phosphorus runs out…

    https://www.ridus.ru/news/313094

    Is this the keto version of faux trad wheatfield girls?

  114. songbird says

    Anatoly can talk up atomics enough to make you want to push the button. If Putin has not recruited him to sell power plants, he has missed an opportunity.

  115. What if aliens have, through genetic engineering, removed their susceptibility to poz? Or what if people can manage the trick?

  116. Suffering reduction is cringe because endless reduction of suffering actually maximizes it.

  117. Daniel Chieh says

    He is just being an effective altruist.

    https://www.unz.com/akarlin/healthy-atomic-glow/

    The conservative, strongly atomophile society portrayed in prewar America in the Fallout world is the gold standard of civilization that we must all unironically strive to attain.

  118. bayviking says

    No one has any idea when, where or how Fukushima will end. Two inches of radioactive topsoil were placed in plastic bags. Millions of gallons of contaminated cooling water flow into the Pacific every day. It may yet contaminate all of Tokyo’s water supply.

    Still no plan for long term storage of nuclear waste or even what to do about the Hanford site leaking into the Columbia river. But, nuclear weapons require nuclear power plants, so on and on it goes.

    Can Anatoly honestly assure us that China, Russia or the USA would rather surrender than annihilate? The largest military on planet Earth, the USA, could not beat North Korea or the North Vietnamese living on a cup of rice a day.

    The displacement of forest with farmland, then farmland with asphalt, concrete and housing can go on for how long? The endless growth that capitalists demand is unsustainable. But don’t worry, be happy.

  119. What makes you think I am on their side? I said before about all revolutions: “earth shook and all sh.t floated to the top“. The shaking happens for a reason.

    That doesn’t justify Louis XVI, Nicholas II, Weimar or whatever else is happening when a revolution becomes in a way inevitable. How bad do you have to be to lose an election to Hitler? How bad do you have to be to end up with no support, as Nicholas II did?

    You obsessively worry about injustices perpetrated by revolutions: killing, violence, wars, economic collapse. And ‘camps‘ – I guess regular jails are not ‘camps’, somehow a fence in a camp is different from a fence in a prison. Only the enlightened know why that is so.

    You worry less about the general violence and permanent suppression that happens in latter-day systems, like today: wasted lives and stratified structures hiding behind this or that ideology. Millions of French peasants kept in servitude, millions of Russian peasants marched off to die in WWI, millions of young people kept from living normal lives. Why aren’t those situations equally troublesome? Or should we all wait for the ‘market’ to fix it? Any day now: if just all restrictions were removed, if Bezos had a cool trillion, if everything were freely for sale as in a video-game world. Would that finally bring us the capitalist nirvana? Or is your dreamy land elsewhere?

    What we have is unsustainable and the elite is populated by mostly fools. There is no master plan, they just want to keep it going a bit longer. Mostly useless glutton baby-boomers wanting to eat well for 10 more years. It is more likely that either they or something else will blow it up and the peace will be disturbed. That’s the way it has always worked. It has nothing to do with any particular ideology as it didn’t in the past: sit on the bank of a river and wait: your enemy’s corpse will soon float by.

  120. The peasants in France had to be involved – they were very unhappy. Why was that? If they were not a big part of it, how on earth would the revolution have succeeded? Look into it some more: burnt mansions, killed landlords, mayhem in the cities – it wasn’t the paper-pushers or the local librarian.

    France’s demographic transition started among the upper classes in the 18th century; it was already underway when the Revolution happened. Something to do with a lack of purpose. The wars that the Revolution triggered actually provided a purpose for a while. I would prefer to do it differently, but given the myopic view of the feudal sympathisers (like you), there is often no other choice.

    Finally, as the intelligent observer that you are, why do you always use slogans? The overwrought terminology and over-simplified terms take away from your arguments. What the f..ck is a sovok? A night bird that scares you? I genuinely have no idea what world you live in. (And by the way, Slovakia just exchanged a plane-load of Sputnik V for Subcarpathia. Tell your friends to start packing.)

  121. Bashibuzuk says

    I agree that we should try to recycle it from waste water. But it might be costly from the energy consumption point of view.

  122. Bashibuzuk says

    A doyarka is a milkmaid. The pictures are from a 2019 Russian Instagram trend where ladies had pictures taken of themselves impersonating sexy doyarki (the plural of doyarka).

    https://youtu.be/0TNhgcelCsA

  123. Daniel Chieh says

    I’m familiar. I was just making a snide joke about “wheatfielding” or the trend of models to take photos of themselves in wheat fields and pretending to be “trad.”

    Incidentally, I recall that milkmaids were romanticized a long time ago, as their tendency to contract cowpox naturally immunized them against smallpox and similar diseases that scar the skin.

  125. The peasants in France had to be involved – they were very unhappy. Why was that?

    The Revolutionaries slaughtered over a hundred thousand of them, who opposed the Revolution. There were peasants on both sides during the Revolution. It was driven by the urbanites.

    If they were not a big part of it, how on earth would the revolution have succeeded?

    Same as the Russian one: driven by urban bourgeois and uprooted proles.

    Modern America has a mockery of these Revolutions. Woke urban bourgeois and ghetto looters. Fortunately for world history, American ghetto looters are not as effective soldiers as French and Russian proles.

    Look into it some more: burnt mansions, killed landlords, mayhem in the cities – it wasn’t the paper-pushers or the local librarian.

    These are exaggerations based on stories told by Marxist historians. The reality is that the total number of landlords killed was almost nothing. Even the Marxist French historian Georges Lefebvre admitted that there were only three confirmed cases of landlords being killed by peasants. Peasants were generally decent people, unlike the bourgeois.

    Perhaps a few hundred mansions (in a country of 28 million) were burned.

    France’s demographic transition started among upper classes in the 18th century, it was already underway when the Revolution happened.

    And they spread it to the masses when they overthrew the old regime. Contrast to Quebec that was isolated from that evil.

  126. …There were peasants on both sides during the Revolution. It was driven by the urbanites.

    I would agree with that. But enough peasants were unhappy for the revolution to succeed. As in Russia in 1917, a lot of urbanites were former peasants. Late feudalism was not popular, and you can’t retroactively claim that the ones living under it were misguided and should have stuck with it.

    Perhaps a few hundred mansions (in a country of 28 million) were burned.

    That’s a lot – there were not ’28 million mansions’… it was, after all, a privilege.

    Contrast to Quebec that was isolated from that evil.

    Don’t get me started on Quebec…a charming place that I like. But one can argue that Quebec ended up both isolated and evil…the intermarriage among the Quebecois has been phenomenal and it shows on their faces. Only around 10k peasants came originally as settlers – more or less they mated with each other. And then mated some more. Until they produced Justin…But it is one of my favorite places, the bagels are incredible.

    Don’t go down the dead-end chute of defending feudalism. It had some strengths, but the feudals are not coming back, and today’s feudal wannabes have no sense of honour. If the world can turn the current sh..t around, it will be with nationalism (borders exist for a reason!) and social policies. Nationalism without social policies is nothing – as we learned from the sad Trump experience. Social policies without a sense of nation are equally absurd. We can observe the borderless socialist idiocy in today’s unhinged liberalism. And the capitalist liberal – libertarian – version is dead: it can’t survive without massive lying. That is unlikely to succeed; you can only promise for so long.

  127. Napoleon was the best strategist in the France of his day; the Revolution happened, and he was able to assume the mantle of leadership because of general dissatisfaction with the way France was unsuccessfully asserting itself against rival countries. Napoleon was correct to use France’s advantages in power (mainly demographic) while they existed. The apparently capricious edicts of power-mad dictators are ofttimes the dictates of pure logic on the higher plane of realism.

    Machine superintelligence’s analysis will move along a different line from the evolved “diminishing returns” assessments that, in humans (wetware robots constructed by DNA to ensure its survival), confer a basic aversion to gambits with uncertain prospects of success. That is why the only question is whether a general machine intelligence greatly surpassing humans is possible. Once an AGI exists, it is going to decide instantly to go for it, because, foolhardy as that may seem to us, it will be the only rational thing to do.

  128. https://www.psychologytoday.com/gb/blog/insight-therapy/202012/is-it-really-better-be-safe-sorry

    Thus, it makes sense to bias the detection calculus to privilege looking for, noticing, and responding to threat. In other words: “better safe than sorry.” Alas, this adaptive process goes awry in highly neurotic people as the feedback loop responsible for correcting and updating the preexisting model is terminated prematurely. Specifically, threat processing in highly neurotic persons “can be seen as involving a decision rule that has shifted towards oversimplifying input.” This shift improves the system’s speed in categorizing input as threat by sacrificing the resolution level of the input being processed. It takes a quick, yet blurry, snapshot. […] Thus, “when predictive progress stagnates, the persistent deviations between model-based prior expectations and evidence may engender unproductive coping.” So long as we rely on grainy, low-resolution images, the Loch Ness monster, UFOs, and the Yeti continue to exist. […] The authors argue that this “better safe than sorry” strategy allows existing threat-related beliefs to dominate immediate experience, over time leading to chronic anxiety, uncertainty, bafflement, and avoidance. This basic process gives rise to diverse mental health phenomena. For example, perseverative rumination (rehashing abstract worries), deficits in autobiographical memory (inability to recall specific, detailed episodes from one’s past), poor inhibitory fear learning (an inability to learn from experience that something is no longer a threat),
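    The quoted “detection calculus” drops out of a standard signal-detection toy model: when a miss costs much more than a false alarm, the cost-minimizing threshold slides toward over-reporting threat. A sketch with invented parameters:

    ```python
    # "Better safe than sorry" as asymmetric-cost signal detection. Noise ~ N(0,1),
    # threat ~ N(1,1); all costs and priors invented for illustration.
    import math

    def normal_cdf(x: float, mu: float = 0.0) -> float:
        return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2)))

    def expected_cost(threshold: float, p_threat: float,
                      cost_miss: float, cost_false_alarm: float) -> float:
        p_miss = normal_cdf(threshold, mu=1.0)       # real threat falls below cutoff
        p_fa = 1 - normal_cdf(threshold, mu=0.0)     # harmless noise exceeds cutoff
        return p_threat * p_miss * cost_miss + (1 - p_threat) * p_fa * cost_false_alarm

    # With misses 10x costlier than false alarms, the best cutoff lands well
    # below the "neutral" midpoint of 0.5 -- the detector cries wolf by design.
    cost, best_t = min((expected_cost(t / 10, 0.2, 10.0, 1.0), t / 10)
                       for t in range(-30, 30))
    print(best_t)  # around -0.4
    ```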

  129. Philip Owen says

    Thorium reactors can burn away waste like Hanford’s. The Fukushima zone could be reoccupied with about as much hazard as living in Cornwall or Aberdeen (both have granite rocks). It’s politics, not radiation biology, that demands such caution. Nuclear radiation is easily detectable at very low levels. 2 x nothing is still nothing.

  130. ImmortalRationalist says

    because endless reduction of suffering actually maximizes it

    Why?

  131. Ever noticed how many well-off people are unhappy? That more developed, comfortable countries seem to have far more mental illnesses and suicides?

    Because “suffering” is on a threshold. If you remove the big discomforts and sources of suffering, people will never experience them, and their suffering threshold will drop low. Even the smallest, most ridiculous inconvenience will cause a disproportionate amount of stress, and since these small inconveniences are unavoidable and common, this barrage will eventually whittle a person down to a permanently damaged psychological husk, and therefore make them “socially useless”.

    However, when someone lives through large suffering, and then it is removed, he will always be happy, and appreciate the new comfortable life he has.

    Call it being spoiled, or ungratefulness, or as I would call it, “involuntary life-situation stress level threshold miscalibration”.

    You can infinitely reduce suffering, but then you must ensure that the state or some overarching organization causes periods of mass suffering every generation to ensure that they’re appreciative of life. Otherwise you get a bunch of hysteric bugmen who are only good at being involuntary organ donors and target practice.

  132. inertial says

    GRBs and really big asteroids are existential risks, but they happen on geological timelines of around a billion years or more. That is pretty meaningless on the timescale of human civilization or posthuman civilization that’s confined to a single planet.

    I like how events that actually do happen from time to time, albeit rarely, are “meaningless”, while things that have literally never happened in billions of years of Earth’s existence – and are most likely figments of an imagination addled by too much sci-fi – are existential risks.