Neural Augs Are Hard

Silicon Valley’s tech oligarchs are becoming increasingly interested in brain-computer interfaces.

The WSJ is now reporting that Elon Musk is entering the game with a new company, Neuralink.

At the low end, they could improve function in patients suffering from diseases such as Parkinson’s, which is the modest goal that the first such companies, like Kernel, are pursuing. However, in the most “techno-utopian” visions, they could be used to raise general IQ in healthy people, integrate people directly into the Internet of Things, and perhaps even help bridge the gap between biological and potentially runaway machine intelligence (Elon Musk is known to be concerned about the dangers of unfriendly superintelligence).

Well, best of luck to them. Deus Ex is a cool universe, and in ours, it doesn’t even look like the buildup of glial tissue is going to be an issue.

So, no Neuropozyne addicts, at least. But there are other, more directly technical, reasons why implants are going to be really hard to get right, as summed up by Nick Bostrom in his book on Superintelligence.

This brings us to the second reason to doubt that superintelligence will be achieved through cyborgization, namely that enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators. Patients who are deaf or blind might benefit from artificial cochleae and retinas. Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.

What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet. Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.

Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis,” which is just another way of saying artificial general intelligence.
Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone.
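Bostrom’s bandwidth point can be made concrete with a rough back-of-envelope comparison. The ~10 Mbit/s retina figure comes from the passage above; the typing-speed and bits-per-character figures below are loose illustrative assumptions of mine, not measured values:

```python
# Back-of-envelope comparison of human I/O bandwidths.
# All figures are rough and illustrative only.

RETINA_BPS = 10_000_000   # visual input, ~10 million bits per second (per Bostrom)
WPM = 80                  # assumed: a fast typist, words per minute
CHARS_PER_WORD = 6        # assumed: average word length plus a space
BITS_PER_CHAR = 8         # assumed: generous upper bound per ASCII character

# Motor output via typing, in bits per second
typing_bps = WPM * CHARS_PER_WORD * BITS_PER_CHAR / 60

print(f"Visual input : ~{RETINA_BPS:,.0f} bit/s")
print(f"Typing output: ~{typing_bps:,.0f} bit/s")
print(f"Ratio        : ~{RETINA_BPS / typing_bps:,.0f}x")
```

On these assumptions, vision delivers on the order of 10⁵ times more bits per second than typing emits, which illustrates Bostrom’s point: raw input is already abundant, so the bottleneck is interpretation, not the pipe.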

Not only is there this seemingly insurmountable computing capacity problem, but there is also an equally daunting translation problem.

But what about the dream of bypassing words altogether and establishing a connection between two brains that enables concepts, thoughts, or entire areas of expertise to be “downloaded” from one mind to another? We can download large files to our computers, including libraries with millions of books and articles, and this can be done over the course of seconds: could something similar be done with our brains? The apparent plausibility of this idea probably derives from an incorrect view of how information is stored and represented in the brain. As noted, the rate-limiting step in human intelligence is not how fast raw data can be fed into the brain but rather how quickly the brain can extract meaning and make sense of the data.

Perhaps it will be suggested that we transmit meanings directly, rather than package them into sensory data that must be decoded by the recipient. There are two problems with this. The first is that brains, by contrast to the kinds of program we typically run on our computers, do not use standardized data storage and representation formats. Rather, each brain develops its own idiosyncratic representations of higher-level content. Which particular neuronal assemblies are recruited to represent a particular concept depends on the unique experiences of the brain in question (along with various genetic factors and stochastic physiological processes). Just as in artificial neural nets, meaning in biological neural networks is likely represented holistically in the structure and activity patterns of sizeable overlapping regions, not in discrete memory cells laid out in neat arrays. It would therefore not be possible to establish a simple mapping between the neurons in one brain and those in another in such a way that thoughts could automatically slide over from one to the other.
In order for the thoughts of one brain to be intelligible to another, the thoughts need to be decomposed and packaged into symbols according to some shared convention that allows the symbols to be correctly interpreted by the receiving brain. This is the job of language.

In principle, one could imagine offloading the cognitive work of articulation and interpretation to an interface that would somehow read out the neural states in the sender’s brain and somehow feed in a bespoke pattern of activation to the receiver’s brain. But this brings us to the second problem with the cyborg scenario. Even setting aside the (quite immense) technical challenge of how to reliably read and write simultaneously from perhaps billions of individually addressable neurons, creating the requisite interface is probably an AI-complete problem. The interface would need to include a component able (in real-time) to map firing patterns in one brain onto semantically equivalent firing patterns in the other brain. The detailed multilevel understanding of the neural computation needed to accomplish such a task would seem to directly enable neuromorphic AI.

As for learning a mapping using the brain’s native capacities… well, we sort of already do that, and through methods that have the advantage of not being evolutionarily novel.

One hope for the cyborg route is that the brain, if permanently implanted with a device connecting it to some external resource, would over time learn an effective mapping between its own internal cognitive states and the inputs it receives from, or the outputs accepted by, the device. Then the implant itself would not need to be intelligent; rather, the brain would intelligently adapt to the interface, much as the brain of an infant gradually learns to interpret the signals arriving from receptors in its eyes and ears. But here again one must question how much would really be gained. Suppose that the brain’s plasticity were such that it could learn to detect patterns in some new input stream arbitrarily projected onto some part of the cortex by means of a brain–computer interface: why not project the same information onto the retina instead, as a visual pattern, or onto the cochlea as sounds? The low-tech alternative avoids a thousand complications, and in either case the brain could deploy its pattern-recognition mechanisms and plasticity to learn to make sense of the information.

Unless and until Elon Musk clearly explains how his “neural lace” is going to get around these issues, we should treat it with the skepticism it warrants.

Contra /pol/, Musk’s achievements are indeed tall, but contra /r/Futurology, the hype around him is ten times taller.

Anatoly Karlin is a transhumanist interested in psychometrics, life extension, UBI, crypto/network states, X risks, and ushering in the Biosingularity.


Inventor of Idiot’s Limbo, the Katechon Hypothesis, and Elite Human Capital.


Apart from writing books, reviews, travel writing, and sundry blogging, I Tweet at @powerfultakes and run a Substack newsletter.

Comments

  1. Unless and until Elon Musk clearly explains how his “neural lace” is going to get around these issues, we should treat it with the skepticism it warrants.

    So, it is not true that you are there to exhume Czar Nicholas II and family in order to obtain the DNA necessary for the cyborg project?

  2. jim jones says

    I am playing Deus Ex: Mankind Divided at the moment, while the Augs are useful in combat they do not appear to be any use in everyday life.

  3. Not even CASIE?
    For many wearers of glasses, retinal prosthesis would be quite useful.

  4. Copying my comment here:
    Strongly agree with all your comments, and I think one could make additional substantial criticisms re: Musk’s ideas. That said, there might be some hope left.

    I. Further criticisms:

    A friend of mine (EvoBio, Developmental Neurosci) had some really insightful comments on how Musk’s vision doesn’t seem to be compatible with the brain’s information topology:


Elon … claims the thing he dislikes the most is how limited we are on output. He correctly points out that we use meat sticks to push buttons, and that does not give us nearly as much speed as our input systems like vision do, and certainly not as much as the output a computer can have, which can run to terabytes.

Then he proposes a solution: a “direct bandwidth interface.” In particular he is interested in a direct third layer that communicates directly and is morphologically symbiotic with the rest of us.

    This is a common dream among people who are excited about transhumanism, but it is also neurologically impossible.

    What makes the high layers of cortex have the function of operationalizing concepts we associate with “higher cognition” is not their proximity to a self that is accessible if only we had the right tool to insert into our brains, it is just their relative position in the middle of the stream of information that enters through the sensory system and leaves through the motor system. If you are halfway in between those two, you’ll be processing that high level information.

    If you just insert a new radio interface with the entire high level layers of cortex, this would only give you a lot of disjoint unreal high level intuitions in the best of cases.

    Evolution spent millions of years creating the input systems that enable our brain to receive the right amount of info in the right places and process it, but it did so by connecting the cables directly, without a middle man or a homunculus there in the middle to receive and pass on the information. So you can’t find a better tool to give to the homunculus in the middle of the room, because there is no homunculus.

So Musk’s dream is impossible, and it makes me sad to no end to be able to see the impossibility of a big dream of the world’s biggest living dreamer. It sincerely brings tears to my eyes.

    II. Mixed praise & criticism:

    Another friend (computational psychology / neuroscience) on how Musk’s nominal approach seems mistaken, but follows a pattern that has created value in the past:


    It seems like Musk consistently does really sensible things for reasons that are non-sensible pragmatically, but extremely sensible ultimately, but with the non-sensible pragmatic considerations being pragmatically useful for gathering publicity and support for his sensible concrete steps.

SpaceX: Trying to beat bloated government bureaucracies at getting into orbit is highly sensible for the sake of launching Internet and other satellites (and maybe asteroid mining), but doing so in order to create a near-term Mars colony doesn’t make a lot of sense (considering that we haven’t even colonized the oceans), while ultimately wanting to have an extra-earth home for humanity is extremely sensible for hedging against planetary existential threats.

Neuralink: Trying to advance the state-of-the-art for brain-computer interfaces is highly sensible for the sake of helping people with neurological conditions (especially considering the rapidly aging boomers), but doing so in order to enhance normal brains to keep us from being outstripped by advanced artificial intelligence doesn’t make a lot of sense (considering that representations are likely idiosyncratically organized in individual brains, that nanotechnology doesn’t appear to be anywhere close to being able to interface with neurons such that normal functioning could be enhanced, and that the risk of brain implants is non-trivial), while ultimately wanting to create such interfaces is extremely sensible since it would be great to be able to expand human intelligence.

    I deeply admire Musk. I just don’t think some of his plans make sense in the way that they’re being pursued. But I think they’re really good plans for other reasons, which he also recognizes, because he’s brilliant, but these other reasons don’t seem to be what motivates him most strongly.

    III. Maybe the glass is half-full?

    My intuition is that all these criticisms hit the mark, but overlook one thing: the future of BCI shouldn’t be judged on what it can’t do, but rather on what it can do. And it might only take a few wins to make brain-computer interfaces worth it. E.g.,

    • Perhaps there’s various kinds of valuable state information that is floating around in the cortex that doesn’t reliably make it into language or action, but could be measured by neural lace (emotional information?);
    • Perhaps there’s various kinds of metadata or offline-computable data that would greatly enhance performance on some tasks if directly ‘injected’ into certain parts of the brain;
    • Perhaps being able to get an objective ‘read out’ of what the cortex is doing could drastically improve diagnosing & troubleshooting various cognitive failures / mental blocks (e.g., imagine if your meditation teacher, or math professor, could get a real-time read-out of what your brain was doing).

    I think at least half the challenge with BCIs will be to figure out creative & clever ways to put them to use. All the theoretical hurdles in the world don’t matter if we can find One Important Thing that BCIs can do.

Open question/challenge to BCI people: how can qualia research help you guys? What sort of knowledge about consciousness & qualia dynamics would help you do what you want, or figure out what you should want to do?

  5. ussr andy says

I think this is all LARPy nonsense no one should waste their time on. Just Lefty elites telling people not to trust their puny (natural) brains and to believe their lying eyes as they continue to socially engineer the world into a “sponsored content,” perpetual-Brazil future. Any moment spent not thinking “what is to be done,” “who is to blame,” and “how to fix Russia” is a waste.

Too funny, this article, as it shows that the author is clueless about BCI and its history and evolution, and unaware of where we actually are today with BCI. We already have BCI implants in various patients and have been doing BCI implants since ’97 to help restore motor skills, improve vision, speech, memory repair, etc. In fact, we have BCI devices that you can buy commercially on Amazon and other online stores to wear and communicate with your tablets and laptops to develop apps, enable AR, brain stimulation, etc. And the prices range from $99 to $800. So, I advise authors of these articles to do their homework, because they look stupid/ignorant and really ill-informed.

  7. Thanks, this is a very good comment.

Nothing I can really disagree with there. This way of looking at Musk’s projects (sensible steps toward non-sensible goals that are ultimately sensible, and sensible from a PR view) is… sensible.