Prosthetic connectivity
Summary: adding artificial connections between distant areas of the brain might increase intelligence in two ways. The first is by simply increasing connectivity in areas that perform abstract thinking; since evolution was clearly bottlenecked on connectivity, added connectivity might be valuable to the brain. The second is by reprioritizing brainware according to our values in our current environment, rather than according to fitness in the evolutionary environment. Prosthetic connectivity seems bottlenecked on a bunch of nitty-gritty (bio)engineering work that's on the mainstream BCI pathway.
Thanks to Guy Wilson for a conversation about this.
- 1. Prosthetic connectivity
- 2. Safety
- 3. IA via prosthetically increased connectivity
- 4. IA via offloading and neurogenesis
- 5. IA via prosthetically reprioritized connectivity
- 6. IA via prosthetic networking
- 7. IA via intertemporal communication
- 8. Obstacles and questions
- 9. Contrast with computer-in-the-loop BCIs
[Image: wire implants on the surface of a human brain. Generated by Stable Diffusion via https://replicate.com/stability-ai/stable-diffusion, with parameter tweaking and then feeding output back as image prompt. Prompt: wire implants on the surface of a human brain. photorealistic, detailed, 4k. trending, aesthetic. by greg rutkowski, Donato Giancola]
1. Prosthetic connectivity
Prosthetic connectivity is artificially enhanced neuron-neuron connections between different sites in a brain. Brain-computer interfaces connect neurons via electrodes to computers; prosthetic connectivity connects the brain to itself. The basic idea is:
- Surgically install electrodes in the cortex (or elsewhere, with more difficulty) that can read electrical signals from neurons (recorder-electrodes).
- Also install electrodes that can electrically excite neurons (exciter-electrodes).
- Connect these sets of electrodes to each other, e.g. with wires or EM transmissions.
- When a neuron connected to a recorder-electrode fires, the recorder-electrode detects that firing, transmits it to some exciter-electrodes, the exciter-electrodes activate in some way, and then any neurons connected to the exciter-electrodes are excited.
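To make the loop concrete, here's a minimal sketch in Python. The electrode classes are hypothetical stand-ins, not any real device's API; a real system would have to deal with timing, thresholds, and hardware details.

```python
# A minimal sketch of the relay described above: spikes detected at
# recorder-electrodes are forwarded, unmodified, to paired exciter-electrodes.
class RecorderElectrode:
    def spiked(self):
        """Return True if the adjacent neuron(s) fired this tick (stub)."""
        return False

class ExciterElectrode:
    def stimulate(self):
        """Deliver a stimulation pulse to the adjacent neuron(s) (stub)."""

def relay_tick(pairs):
    """pairs: (recorder, exciter) tuples, each acting as a one-way channel."""
    for recorder, exciter in pairs:
        if recorder.spiked():
            exciter.stimulate()
```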
2. Safety
I don't recommend trying any of this stuff, at least without much more study; it's all pure speculation. One would want to do lots of animal trials to test safety. Also, to avoid mental damage, one would want to titrate new connectivity by activating only a few new connections at a time, giving the brain time to adapt (or reveal problems in a small way before a big way).
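As a sketch of what titration could look like (the batch size, observation period, and monitoring function are all made-up placeholders):

```python
def health_check(enabled_connections):
    """Stand-in for clinical / behavioral / electrophysiological monitoring."""
    return True

def wait_observation_period():
    """Stand-in for a days-to-weeks observation window."""

def titrate(connections, batch_size=5):
    """Enable a few new connections at a time; back off if anything looks wrong."""
    enabled = []
    for i in range(0, len(connections), batch_size):
        batch = connections[i:i + batch_size]
        for c in batch:
            c.enable()             # activate a small batch of new connections
        enabled.extend(batch)
        wait_observation_period()  # give the brain time to adapt
        if not health_check(enabled):
            for c in batch:
                c.disable()        # problems revealed in a small way; back off
            break
```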
3. IA via prosthetically increased connectivity
Evolution was bottlenecked on connectivity in the human brain: more than half of the cerebrum is white matter, myelinated long-range axons.
These axons are pretty fast: they take something like a few milliseconds to transmit a voltage change from one hemisphere to the other, which is comparable to the "clock speed" of neurons, which take something like a few milliseconds to return to their resting polarization. But, these axons are also thick. The voluminousness of fast long-range axons limits the extent to which distant areas of the brain are connected: there's not enough room for all those possible axons. Wires, or EM signals, are far more space-efficient [citation needed] for transmitting data long-range (say, further than a centimeter).
Since evolution was bottlenecked on connectivity, adding significantly more connectivity——letting more neurons talk to each other across long distances——might increase intelligence. Since wires are more space-efficient than axons, evolution's constraint is lifted, and more connectivity could be added without taking up much space.
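Some order-of-magnitude arithmetic behind those claims (the distance and speeds are illustrative round figures, not measurements):

```python
distance_m = 0.1    # ~10 cm, roughly hemisphere to hemisphere
axon_speed = 50.0   # m/s, a fast myelinated axon (order of magnitude)
wire_speed = 2e8    # m/s, signal propagation in a metal wire

print(f"axon delay: {1e3 * distance_m / axon_speed:.1f} ms")  # 2.0 ms
print(f"wire delay: {1e9 * distance_m / wire_speed:.2f} ns")  # 0.50 ns
```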
Obstacles
- The basic technology of BCIs is difficult; see below.
- There are already very many long-range connections between brain areas. It's claimed that there are over 200 million axons in the corpus callosum, and there are also many (how many?) intrahemispheric association fibers. So significantly increasing intelligence by adding new connections might require adding millions of connections. That would be a very difficult engineering problem even if it were feasible, and it might not be straightforwardly feasible at all: it could be that any way we add electrodes causes some non-trivial local damage, and multiplying that by millions would add up. (See the rough arithmetic after this list.)
- Maybe neurons are collectively already mostly near their maximum ability to compute, so they aren't bottlenecked on connectivity. In particular, e.g. the thalamus may be pretty much at maximum usage. (Is that right?)
- Maybe the brain already makes whatever connections it knows how to use. For example, it could be that the developing brain starts with more long-range connectivity than the adult brain actually uses. Then adult thinking, if it should need more connections, has disused ones it can pick up; so added connectivity wouldn't be useful.
- Maybe it's a question of how to think, and less a question of hardware limitations. Maybe the useful configurations of communicating neurons are already there, or could be implemented with existing hardware, but the person has to figure out how to think that way. (But this doesn't make sense of genetically heritable variation in intelligence; surely that's mediated by brainware.)
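Here's the rough arithmetic for the damage worry above, with made-up placeholder values:

```python
cortical_neurons = 16e9           # order-of-magnitude count of cortical neurons
new_connections = 2e6             # hypothetical number of added connections
electrodes_per_connection = 2     # one recorder plus one exciter
neurons_killed_per_electrode = 4  # pure guess at local insertion damage

killed = new_connections * electrodes_per_connection * neurons_killed_per_electrode
print(f"{killed:.1e} neurons destroyed "
      f"({100 * killed / cortical_neurons:.2f}% of cortex)")
# -> 1.6e+07 neurons destroyed (0.10% of cortex)
```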
Questions
Are those obstacles real? Can they be circumvented?
What were the microeconomic tradeoffs that evolution faced when evolving the brain? E.g., what can we say about the relative benefits of using space to grow more neurons vs. more long-range axons? In some sense the tradeoff should be balanced at the margin, but what does that imply about the benefit of adding connectivity or new neurons?
Would there be much benefit to the faster transmission of prosthetic connectivity? Would it matter if neurons on opposite sides of the brain could communicate near-instantly rather than at a delay of one or two clock-ticks? (And are those numbers about right?) Hypothetically this could matter a lot, since these differences in latency are large as a fraction of the time taken to transmit inter-hemispherically, and if there are situations where a lot of back-and-forth would help, that back-and-forth might go from taking infeasibly long to being bottlenecked only on the somas. Then again, sets of neurons like this might already have formed blobs within one hemisphere, and if more are needed, they'd just form.
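A toy version of that back-and-forth arithmetic (all the per-step costs are rough guesses):

```python
soma_ms = 3.0    # time for a neuron to fire and recover (one "clock tick")
axon_ms = 3.0    # inter-hemispheric conduction delay via axon (comparable)
wire_ms = 0.0    # near-zero delay via wire
exchanges = 100  # hypothetical sequential exchanges in one computation

print(f"axonal: {exchanges * (soma_ms + axon_ms):.0f} ms")  # 600 ms
print(f"wired:  {exchanges * (soma_ms + wire_ms):.0f} ms")  # 300 ms, soma-bound
```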
4. IA via offloading and neurogenesis
This is an extension of increased connectivity. The idea is that maybe the brain would learn to use the prosthetic connectivity in place of long axons. If the long axons then shrink because of disuse, that might open more space in the brain. Then one could induce neurogenesis (somehow), and there would be room for many new neurons to grow. Would this happen? Would disused long axons shrink? How much space might that open up? What would happen if that space were filled with new neurons? Do we know how to induce neurogenesis like that? Do we know how to induce the growth of other brain cells, such as glial cells?
5. IA via prosthetically reprioritized connectivity
Aside from lifting constraints on intelligence faced by evolution, we might improve intelligence by reallocating resources to improve cognitive functioning according to our evaluations, at the expense of cognitive functioning as evaluated by fitness in the environment of evolutionary adaptedness. This could be done by adding connectivity where previously there was little or no connectivity. Abstractly, this might have a large effect on thinking style because it takes some gross connectivity (aggregate connections between broad brain areas) from roughly zero to definitely non-zero; this might be the case even if the number of connections is small, say tens of thousands, which might be more technologically feasible.
As two speculative examples:
- We might repurpose perceptual brainware to perceive mental data rather than its native data, by piping mental data to it. For example, we might take readouts from the prefrontal cortex or association cortex and output those signals to somewhere in the visual cortex, putting the massive apparatus of visual perception to work decoding and tracking patterns in thinking. That might give the person "direct insight" into their thinking, so to speak. (See discussion below for why this is different from giving an augmented-reality visual readout of neural signals. Also, if there are bidirectional connections, that would plausibly allow co-adaptation between the perceived and the perceiver that wouldn't happen through the much more indirect sensory path.) A sketch of this kind of rewiring follows these examples.
- We might repurpose manipulative brainware to manipulate other brainware more directly than existing manipulative brainware can, or through an additional channel with different skills. For example, we might give part of the motor cortex or part of the cerebellum direct input access to prefrontal or association cortex. Maybe a person with that connectivity could learn to e.g. perform types of thinking that require more design activation energy——more parts combined in a way that isn't useful when only partially constituted——than normal thinking allows, because the brainware for learning sequenced hierarchical motor skills would be directed at the task of building combinations of thought-parts.
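Here's the first example's rewiring sketched as a static routing configuration. The region labels, channel count, and spike-event format are made up for illustration:

```python
# Hypothetical one-way routing: prefrontal recorder channels feed
# corresponding exciter channels placed in visual cortex.
routing = {
    ("prefrontal", ch): ("visual_cortex", ch)
    for ch in range(10_000)  # hypothetical count of implanted channel pairs
}

def route_spikes(spike_events):
    """Map (region, channel) spike events at recorders to exciter targets."""
    return [routing[src] for src in spike_events if src in routing]
```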
Obstacles
- Maybe there just aren't connections like this that would be helpful. Maybe the brain already has, at least qualitatively, all the connections that would be useful.
- Some of these, such as taking output from the cerebellum, would require deeper brain surgery, which is much riskier.
- Maybe the visual system wouldn't extract information that's more useful to abstract thinking than what abstract thinking already has access to (since abstract thinking can directly access the other neurons involved in abstract thinking).
6. IA via prosthetic networking
Maybe one could connect a brain to other external brainware:
- Connecting to another person. This might not be much more interesting than communicating by language, but since the bandwidth would be much higher (even with the small number of channels that's likely feasible), something interesting could emerge.
- Connecting to neurons kept alive in an artificial environment. This would, in theory, be like getting more brain. The external brain could, in theory, be connected within itself in much more exotic ways than would be feasible inside a person's skull. Can large numbers of neurons be kept functioning in vitro for long periods of time?
7. IA via intertemporal communication
Brains as they are have various ways of communicating with themselves across time, e.g. stored memories, learned neural coadaptations that make the same thought or skill more easily recovered, externally stored writing and other records, and information shared with other people and then echoed back later.
Why stop there? Maybe one could "save and reload" thoughts or other cognitive states by having a bunch of pairs of recorder- and exciter-electrodes in various places (in, say, association cortex?). The person manually hits a record button, which records the current state for a while; then later the person can hit the playback button, and the exciter-electrodes activate in the pattern corresponding to what they read. Maybe this saves much more of the implicit state of a thought than is saved by the brain by default, or saved through recorded language.
(See TAP, a novelette by Greg Egan.)
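A sketch of the record/playback loop, assuming paired recorder- and exciter-electrodes; the sampling scheme is made up, and real state capture would surely be messier:

```python
import numpy as np

class ThoughtRecorder:
    def __init__(self, window_ticks=500):
        self.window = window_ticks  # how many ticks of state to capture
        self.saved = None

    def record(self, read_frame):
        """read_frame() -> per-channel spike vector; capture one window."""
        self.saved = np.stack([read_frame() for _ in range(self.window)])

    def playback(self, write_frame):
        """Replay the saved pattern through the exciter-electrodes."""
        for frame in self.saved:
            write_frame(frame)  # stimulate the channels that were active

rec = ThoughtRecorder(window_ticks=3)
rec.record(lambda: np.random.rand(1000) < 0.01)  # stand-in spike readout
rec.playback(lambda frame: None)                 # stand-in exciter driver
```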
Obstacles
- Maybe the "meaning" of the recorded and played back neurons is unstable over time.
- Maybe randomly recording neurons doesn't capture enough about a thought to recover it.
- Maybe playing back neurons doesn't much improve over what already happens.
8. Obstacles and questions
- What are other ways that prosthetic connectivity might be used to significantly increase intelligence?
- A main obstacle is the basic technology: designing and manufacturing the electrodes to be non-immunogenic, non-inflammatory, reliable, small, not killing too many neurons, and so on. I don't know about this, and G.W. says it may be mostly bottlenecked on very substantial engineering work (that is, work that requires, not weird ideas, but concrete design and experimentation). Is this right? What research is and isn't happening by default in the mainstream BCI field?
- Another main obstacle is that these electrodes, even when very well-optimized, might still damage some neurons when installed. For IA methods that involve creating millions of connections, that level of damage might be too much (e.g. destroying .1% of cortical neurons). How small and non-destructive could these electrodes be made? What are the exact mechanisms of damage?
- Electrodes only record and send electrical signals. Does this imply that they cannot send inhibitory signals? What sorts of pathways are activated by normal synaptic neurotransmitters? What might go wrong if a neuron receives, over a long period of time, only electrical stimulation, without neurotransmitters? Are normal long-range axons, e.g. in the corpus callosum, less reliant on chemical signaling? (Is there any way to simulate, at scale, targeted chemical stimulation rather than just electrical stimulation?) Neuralink's monkey could play Pong using only electrical readouts, so in some cases this is ok.
- Electrodes only send blunt information: active or not. In particular, they don't send different information to different synapses at different times. Would this restrict the sorts of signals that can be sent?
- Electrodes don't have peri-synaptic responses. Some post-synaptic neurons have kickback responses——locally regulating pre-synaptic activity——and other dendritic circuits. This couldn't happen with electrodes. How common is this sort of circuit? What about in long-range synapses? What about in association, visual, or prefrontal cortex?
- Electrodes don't have upstream activation. Do the responses of post-synaptic neurons normally induce changes in the pre-synaptic neurons? How are those changes mediated? How important are they, e.g. for learning and homeostasis? Is there any way to implement at least the electrical backwards-signal, e.g. simply with another recorder-exciter pair in the opposite direction? (This wouldn't help with backwards regulation that varies synapse to synapse.) Do electrodes normally interface with multiple neurons (so that this backwards signal would create a lot of chatter)?
- Deeper brain structures such as the cerebellum and thalamus are hard to reach surgically. How bad is the situation; how dangerous is it for how much access?
- How do "commands" and "control" work at the neuronal level? What determines when an area takes input from another area as "controls", rather than as something "to use or ignore, for its own purposes"? E.g. the early visual cortex is mostly controlled by retinal signals, and a muscle is controlled by the motor cortex; but on the other hand thoughts can control each other non-hierarchically, and can push against each other (for example, singing a song in your head to ignore what someone is saying). What feedback / coadaptation does control rely on, and not rely on?
- Do the cerebellum's outputs pass through a small bottleneck? If so, that might be a spot where relatively few electrodes would give high leverage.
- Are there tradeoffs that are optimized differently for standard BCIs vs. prosthetic connectivity? Could those be used to get much larger numbers of electrodes? E.g. maybe the receptive fields for standard electrodes need to be bigger because they're for robust outputs, whereas prosthetic connectivity asks for connections between single neurons? Is there some way to bundle electrodes together, so that they mechanically behave like one electrode but can be synapsed to and fired separately?
- Are wires actually space-efficient compared to axons? Could wireless electromagnetic signals be used? Would that damage the brain?
- Could an exciter-electrode be made to detect whether there are neurons listening to the input they get from it? Then one could gradually reset exciter-electrodes that aren't being listened to, and instead have them send signals from a different source.
- What if you allow the brain to modulate which recorder-electrodes map to which exciter-electrodes? Does that open up more promising possibilities?
- What is the usual output size (number of post-synaptic neurons) of an axon? Are there potential gains from greatly increasing that size, by echoing the output to many exciter-electrodes?
9. Contrast with computer-in-the-loop BCIs
Some kinds of BCIs:
- Human-computer: a device that reads out neural signals into a computer. Removes burden of physical action to make outputs, and accesses hidden info about neural firings.
- Computer-human: a device that outputs signals sent from a computer to some neurons. Removes burden of sensory apparatus, and potentially makes more inputs processable.
- Human-computer-human: a device that does some computation to transform some neural signals and then outputs them to some neurons.
Prosthetic connectivity is a sort of trivial case of human-computer-human: reading from some neurons and then outputting those signals to other neurons without modifying them. Prosthetic connectivity, in contrast to nontrivially computer-in-the-loop BCIs, could potentially leverage neuron-neuron learning, i.e. upstream kick-back signals from the receiving neuron that modulate the sending neuron (as, I assume, happens at real neuron-neuron synapses).
To contrast prosthetic connectivity with brain-computer interfaces, we can ask: can computer-in-the-loop BCIs significantly increase intelligence?
Removing motor burden
This is the standard use case.
I mostly don't see a way to use outputs to greatly increase intelligence. But there might be interesting effects from having fast feedback loops with computers. For example, suppose you look at a computer screen while sending a bunch of commands to tweak parameters of a computation, such as displaying a probability distribution or displaying a vector field or inverting a matrix or something. Maybe that makes some interesting human-computer thinking qualitatively feasible that would just be too much work without the output BCI. (This could be combined with inputting the results of e.g. a matrix inversion to the brain, in a true human-computer-human interface.)
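For instance, a sketch of that kind of loop, where decoded commands tweak the parameters of a live display; the decoder is a hypothetical stub, and the displayed computation here is just a Gaussian density:

```python
import numpy as np

params = {"mean": 0.0, "std": 1.0}

def decode_command(neural_frame):
    """Hypothetical decoder: returns (param_name, delta) or None."""
    return None

def step(neural_frame, xs=np.linspace(-5, 5, 500)):
    cmd = decode_command(neural_frame)
    if cmd is not None:
        name, delta = cmd
        params[name] += delta   # the BCI output tweaks the computation
    # Recompute the probability density the person is looking at.
    z = (xs - params["mean"]) / params["std"]
    return np.exp(-z**2 / 2) / (params["std"] * np.sqrt(2 * np.pi))
```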
Removing sensory burden
Instead of using large sensory apparatuses (mainly the visual cortex), structured data could be more directly piped in. E.g. instead of reading, a sequence of words (as one-hot vectors or as sequences of letters) could be directly piped somewhere (and then the brain could be trained to decode those signals into words). This doesn't seem that great, and probably is basically the same as listening to an audiobook, unless it could be rigged to scan around in the text according to motor outputs (collected from another BCI).
Really expensive augmented reality
There's a whole class of things one could try with computer-to-human BCIs that could be glossed as "augmented reality": taking some information from a computer that's maybe useful, and then outputting it to some neurons. Then the person has access to that information without having to, say, look at a screen.
Many of these are interesting, but interesting as ideas for augmented reality. E.g., you could display a compass, or make an infrared vision overlay. They might work better as BCIs, e.g. because you could use them with your eyes closed and wouldn't need special glasses, but that difference would be nowhere near worth the cost of brain surgery.
Inputs that are hard to sensorily encode
Still, there could be some kinds of inputs that are hard to encode in a normal sensory space, and that would therefore be better to input via a BCI. Possible examples:
- Read whole paragraphs at a time? Audible speech is linear. Visual text has a bandwidth that's limited by the fovea, which is pretty small, making it also somewhat linear. But there's no need to visually decode an HD image from the fovea just to recover the few bits of information in a word. So one could pipe a fixed configuration of signals corresponding to an entire paragraph of text to somewhere (where?); see the sketch after this list. The paragraph might then be perceivable "all at once" in some sense. Plausibly this wouldn't do much of anything, e.g. it might be the same experience as having memorized the paragraph, and to get anything useful from it you'd have to attend to it in sequential segments as usual. Also, plausibly the visual cortex does useful stuff with written text input.
- Spatially arranged data that takes values in more than 2 or 3 dimensions. Vision takes values in 3-ish dimensions, i.e. it's a map $\mathbb{R}^2 \to \mathbb{R}^3$. We don't natively get sensory input that looks like $\mathbb{R}^2 \to \mathbb{R}^k$ for some larger $k$, but since it would still be naturally laid out in a 2-dimensional space, it might be readily and usefully decoded by the visual cortex. This isn't clearly very useful, but could be cool; for example, you could have vision through a wider portion of the light spectrum and have better color discrimination. (However, this example might be encodable visually through multiplexing. Also, see "Seventh Sight", in Instantiation by Greg Egan.)
- Many simultaneous audio input streams which, if they had to pass through the hair cells in your ears, would be mushed together. They could be inputted to your brain as physically separate channels, so you could parse them out from each other easily (by learning to attend to one neuron cluster and inhibit the others, which is the sort of thing brains can learn to do). It's not clear this would help with much; humans can already parse a lot of audio stuff out reasonably well.
- Rich data such as the latent activations in a bottleneck of a very capable neural network. This may be difficult to usefully parse visually, but might be learnably usable if piped directly in.
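Here's the sketch of the paragraphs-at-a-time idea from the first item: the whole paragraph becomes one fixed stimulation pattern rather than a time series. The vocabulary, slot count, and coding scheme are toy placeholders; real word codes would presumably have to be learned.

```python
import numpy as np

VOCAB = {w: i for i, w in enumerate(["the", "brain", "is", "plastic"])}
SLOTS = 64  # max words presented simultaneously

def encode_paragraph(words):
    """One-hot code per word slot, laid out across parallel channels: the
    paragraph is a single (SLOTS x vocab) pattern, presented all at once."""
    pattern = np.zeros((SLOTS, len(VOCAB)))
    for slot, word in enumerate(words[:SLOTS]):
        pattern[slot, VOCAB[word]] = 1.0
    return pattern

stim = encode_paragraph(["the", "brain", "is", "plastic"])
```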
What are other kinds of inputs that might be very useful? Could any of them increase intelligence by multiple standard deviations?
§ Why direct piping might help
Why would it matter if you pipe rich data directly to neurons in the brain, rather than just displaying them visually and letting vision process the input?
Because a directly piped input stream would stimulate a fixed set of neurons per channel. In contrast, one could have a visual display with thousands of little squares, but it would be costly to constantly physically look at, and cognitively attend to, the particular squares that are relevant at the moment. The visual cortex would have to learn to dynamically pipe the relevant subset of info to wherever it's being used, as the visual encoding of the useful input moves across the visual field. For high-bandwidth, non-spatially-structured data, a visual encoding would just look like static.
(Also, I don't know the details, but maybe the earlier in the visual stream you get, the more the brainware is hard-wiredly specialized to perform specifically visual computations. So it would be maybe more effective to pipe the non-spatially structured data to a deeper area, which you can't do with augmented reality.)
This also applies to the general idea of sensorily displaying data read from the brain (including non-electrically, e.g. Kernel), to show a person something about their own mental state: it might be cool on its own, but it would plausibly work much better if it was inputted neurally. Many such readouts might not be so useful because the brain could already have used them, but for example high-bandwidth readouts, or readouts that describe aggregate states (e.g. how much a whole area with millions of neurons is active) might be less easy for the brain to get natively.
True human-computer-human interfaces
What might be some true human-computer-human interfaces that significantly increase intelligence? Some possibilities:
§ ML prediction assist
Read from some neurons in a stream, and train a machine learning model to predict what that stream will say in .1, .5, 1, and 5 seconds, or something. Input those predictions into some neurons. One could make the neural net really really large, giving the human a compute assist, offloading things that are easy enough for the ML model to learn.
This might be very similar to increasing the power of the cerebellum.
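A minimal sketch of the prediction assist, assuming the recorded stream comes in as a (time x channels) array; the model choice and horizons are illustrative, and a real system would have to be online and causal:

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_predictors(stream, dt=0.01, horizons=(0.1, 0.5, 1.0, 5.0), window=50):
    """For each horizon, fit a map from the last `window` samples of the
    neural stream to the stream's value `horizon` seconds later."""
    models = {}
    for h in horizons:
        lag = int(h / dt)
        X = np.stack([stream[t - window:t].ravel()
                      for t in range(window, len(stream) - lag)])
        y = stream[window + lag:]
        models[h] = Ridge(alpha=1.0).fit(X, y)
    return models

# The models' predictions would then be written out to exciter-electrodes.
```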
§ Reservoir computing
Milan Cvitkovic told me this idea (which is usually in the context of machine learning: Wiki). It's similar to the previous idea, but instead of training a predictive model, you just run the neural signals through a wide variety of complex functions. If the ML prediction assist isn't helpful, maybe some of the other functions are helpful, and then the brain could learn to use those ones.
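A minimal echo-state-style sketch of that (sizes and the spectral-radius scaling are conventional choices, not tuned to anything):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 32, 500  # hypothetical recorder channels, reservoir units

W_in = 0.1 * rng.normal(size=(n_res, n_in))   # fixed random input weights
W = rng.normal(size=(n_res, n_res))           # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius < 1

def reservoir_states(stream):
    """stream: (time x n_in). Returns (time x n_res) fixed nonlinear features;
    the brain might learn to use whichever of these turn out to be helpful."""
    x = np.zeros(n_res)
    states = []
    for u in stream:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)
```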
Questions
- Would any of these ideas plausibly significantly increase intelligence? How much recorder and/or exciter bandwidth for how much intelligence?
- What are other ways that computer-in-the-loop BCIs might be used to significantly increase intelligence?
- Is there some way to use the idea of recorder- or exciter-electrodes that receive from or send to very many (say, millions of) neurons?