The Brain

Is it computable?

Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, nonlinear interactions among billions of cells, Nicolelis says.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

I’m personally an agnostic on the issue.

34 thoughts on “The Brain”

  1. “That’s because its most important features are the result of unpredictable, nonlinear interactions among billions of cells”

    I wonder. I suppose there are unpredictable nonlinear interactions among the [large number] of cells in the human pancreas, and at first glance they might appear to be pretty important for regulating pancreatic function, but the people working on artificial pancreases are not concerned. So, “is the human pancreas computable?” is one question, and “if not, would it stop you from achieving your goals?” is another.

    Look at the computer in front of you. We can’t model it with the kind of accuracy Nicolelis might be imagining for brains, down to the molecular level, but we can certainly model it well enough to run the same computer program on different computers.

    1. You don’t have to model a desktop computer at the molecular level because it is digital (actually, binary). The gate is either on or off and you don’t even need to model it any finer, that is, if it doesn’t fail in some way owing to a cosmic ray dislodging charge carriers or a latch getting into an indeterminate or free-oscillating state.

      Biological neurons indeed do have to be modelled at the molecular level, because not only is there electrical signaling through synapses; new synapses can form, and organic molecules can also carry information between neurons.

      1. I disagree. We know neurons talk to each other, but we don’t know whether a functional analysis of a neuron requires modeling at the molecular level.

        Again, consider an artificial pancreas – it can perform its pancreatic function without a hyper-accurate model of a biological pancreas. We know the language of the pancreas, we don’t know the language of the neuron.

        One more analogy: imagine you know absolutely nothing about computer science and computer engineering, but you’re a technically sophisticated guy otherwise. So, I give you a modern computer and an electron microscope and tell you to figure out how it works. You’re going to get hung up on modeling some aspects of the computer that truly don’t need to be modeled before (or unless) you re-invent computer science for yourself and start viewing the computer in functional terms.

        1. Sorry, I meant “language of the brain”, not “language of the neuron.” And really, I mean “the language of the mind, as instantiated on the brain’s hardware”.

        2. Neurons communicate both electrically (synapses, action potentials) and chemically (neurotransmitter hormones, which are not restricted to the synaptic junction).

          1. Paul, computer circuitry gives off heat, but the heat has no functional significance for computation. We can describe what neurons do, but we don’t yet understand the functional significance of what we’re looking at.

          2. One cannot perform logically irreversible operations without increasing entropy and dissipating energy into the environment, so there is a thermodynamic lower limit on how little heat computation can generate.

        3. There’s a hell of a lot of space between “we don’t know” and “you can’t”.

    2. It’s also ridiculous to believe that ‘unpredictable, nonlinear interactions’ cannot be simulated – unless you also believe that these unpredictable interactions are somehow governed by invisible rules or a divine force, so that they are actually not unpredictable; we just can’t observe the rules.

      Any simple Monte Carlo simulation deals with unpredictable nonlinear interactions all the time.
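      A minimal sketch of that point, using the textbook Monte Carlo example of estimating π from random samples (the specific example is mine, not the commenter’s):

```python
import random

random.seed(42)  # fix the seed so the run is reproducible

def estimate_pi(samples):
    """Estimate pi from the fraction of random points that land
    inside the unit quarter-circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

pi_est = estimate_pi(100_000)
print(pi_est)  # close to 3.14159; the error shrinks like 1/sqrt(samples)
```

      No individual sample is predictable, yet the aggregate converges; that is the sense in which Monte Carlo methods routinely tame unpredictability.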

  2. There was once a time when it was thought impossible to sail around the globe, to fly, and to travel in space. Just because it’s impossible now doesn’t mean it always will be. That said, it may not be possible in our lifetime, and we’re not exactly producing the kind of people that beat impossible barriers anymore.

  3. He’s probably right when it comes to “downloading” a human brain onto silicon.

    What I find interesting is the idea of a silicon brain developing its own consciousness. There are some studies done on FPGAs where circuits that were allowed to evolve exploited analog properties of the logic gates in mysterious ways to achieve goals with minimal connections. Some gates were found to be indispensable despite not being linked to their neighbors.

    However, I suspect that this would only happen in a brain that controlled its own body. You cannot have consciousness of self without a self to be conscious of.

    1. I remember that. The gates were not directly connected, but radio interference (which is normally not a factor) is theorized to have played a role. All those little wires inside the FPGA act as antennae under the right circumstances, and in that case the “disconnected” gates were transmitting to and receiving from each other.

  4. I’m agnostic on the issue.

    …and that’s ok, my computer feels the same way…
    Simple human bio-consciousness is over-rated.
    Those in the know don’t want to waste their time to prove it to us…
    🙂

  5. What is the difference between consciousness, and an automaton with programmed responses? Is there a difference? Innately, we think there is, but maybe we’re just programmed to do so.

    I can program a little robot bug to scurry away from light and hide in corners. I can make him kick up his legs in spasms associated with pain if I stimulate a sensor that transmits the message to do so to the CPU. Does it feel pain, though? Of course not. It does not feel. It only responds.

    What does that mean, to feel? I can program a humanoid robot to beg for its life not to be shut down and have its memory boards destroyed. But, does the robot feel terror at its prospective demise? No, it’s just acting in the manner it was programmed to.

    What is this bridge between automatic response and actual awareness? And, how is that tipping point reached?

    1. Have you looked at Daniel Dennett’s thoughts on these questions? He wrote a book, daringly titled “Consciousness Explained” which took a stab at an answer.

      1. Yes, I know a lot of people have written on this, going way back. If I had the time, I would research it, but there’s never time for anything it seems. Except writing quick little blog comments while waiting for a routine to run. But, I will mark it down for perusal when time permits. Thanks.

        1. Well, here’s something to think about while you’re waiting for the routine to run: philosophers refer to the felt experience of awareness as “qualia,” and it gets to the heart of what people mean when they say consciousness.

          So: put aside, just for a moment, the qualia of emotions and the qualia of using our five senses. Those are hard problems, and people who think about conscious robots get hung up on emotions and, particularly, sensations. Don’t ask whether your conscious robot feels terror, and don’t ask whether it feels pain. Instead, imagine an idealized Mr. Spock: no emotions. And imagine Spock floating in a sensory deprivation tank, pumped full of painkiller. Let’s say our sensory-deprived, emotionless Spock is conscious. He knows he is conscious. He has the feeling, the qualia, of being conscious.

          Now think about how we could design a robot to be like Spock. That feels more like a doable exercise in moving information around, having different computer processes monitor each other, and getting into feedback loops. As you said, many smart people (including Daniel Dennett) have proposed models of consciousness relying on a computational model of information transfer, and I think these models are more believable if you first imagine them operating without sensations and emotions. You can start to imagine some of the elements of such systems on your own, and again, it is easier to get started if you think about consciousness separately from sensations and emotions.

          1. I seem to recall that experiments in sensory deprivation generally lead to hallucinatory experiences, where the brain tries to fill in a reality for the one that’s missing. It seems that stimuli create a negative feedback which stabilizes our thought processes. Without them, we are open loop unstable, and our minds crash.

            Maybe, that’s a key to consciousness – a runaway quest for information, which maintains all circuits at a perpetually busy status, acquiring and processing all stimuli, and leaping past them with intuition when they become unavailable. Or, something…

  6. Sir Roger Penrose is an advocate of this view, expressed in his books The Emperor’s New Mind and its sequels. Neal Stephenson took up Penrose’s position, combined with a version of Everett’s many-worlds hypothesis, and used it to great effect in his (IMHO best) novel Anathem.

    1. TENM is a handwaving hypothesis that consciousness exists as some quantum mechanical gobbledegook which has never been observed. Meanwhile the obvious signal-transformation properties of neurons are brushed aside.

  7. I wonder how much impact there is, if any, of spontaneous mutation from things like cosmic rays etc. If there’s an impact, then you cannot predict a synapse connection or firing because you don’t know if, when, or where a mutation will occur.

  8. Penrose points out that there is no room for uncomputable processes within known physics. So if Nicolelis is right, it requires new physics to explain. Then again, modern physics is known to be incomplete (even inconsistent), so we need new physics anyway. Penrose, who also thinks human consciousness is not fully computable, believes quantum gravity will hold the answer.

    1. “Uncomputable” is perhaps a hand-waving explanation of something with a deeper truth.

      There are nonlinear differential equations whose solutions are fractal in scale, meaning that recreating them on a digital (rather, binary) computer becomes intractable without crude approximations. Lyapunov-exponent time constants, chaos, the butterfly effect, (cough, climate models, cough) and all of that.
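      The butterfly effect is easy to demonstrate with the logistic map x → 4x(1 − x), a standard toy chaotic system (my choice of illustration, not the commenter’s): two trajectories that start 10⁻¹⁰ apart decorrelate within a few dozen steps, so any fixed-precision simulation loses track of the true trajectory.

```python
def logistic(x):
    """One step of the chaotic logistic map at r = 4."""
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # two almost-identical initial conditions
gaps = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

# The gap grows roughly like 2**n (Lyapunov exponent ln 2) until it
# saturates at order 1, at which point the trajectories are unrelated.
```

      Adding bits of precision only delays the divergence by a fixed number of steps; it never eliminates it.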

  9. We can start with the fact that brains and thought exist and brains are physical. They are a network using electrical and chemical signals. They are not binary, but a binary system can approximate a non-binary system ever more closely by using more bits, which are cheap.

    Computers are programmed with algorithms and are deterministic. True randomness may not exist (is the universe completely deterministic?)

    Free will would seem to depend on true randomness with a choice made between multiple paths. This could certainly be simulated by a computer (but requires a device that is not pseudo random.)

    We have no idea how those choices would be made or if it even matters… but if it doesn’t matter then is there actually free will?

    At this point we can simulate thought (better in time), but we cannot predict if we will ever actually get there. Already, however, some have been fooled (which means in limited cases the Turing test has been satisfied).

    If you believe the Bible, it says that working together we can achieve some of the same things as God, who created thought in us (the story of Babel).
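    The earlier point that a binary system approaches a non-binary one with more bits can be made concrete: the worst-case error of a uniform n-bit quantizer halves with every added bit (a standard signal-processing fact; the sine-wave test signal is just my illustration).

```python
import math

def max_quantization_error(bits, samples=1000):
    """Worst-case error when a signal in [-1, 1] is rounded to
    the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    worst = 0.0
    for i in range(samples):
        x = math.sin(2 * math.pi * i / samples)  # "analog" value in [-1, 1]
        q = round((x + 1.0) / step) * step - 1.0  # quantized value
        worst = max(worst, abs(x - q))
    return worst

# Each jump from 4 -> 8 -> 16 bits shrinks the worst-case error dramatically.
errors = [max_quantization_error(b) for b in (4, 8, 16)]
```

    Whether this kind of convergence is good enough for a chaotic system is exactly what the climate-model reply below disputes.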

    1. So you believe in climate models? That if you simply added a few more bits of numeric precision to your calculations, (digital!) simulations of complicated, non-linear, fractal systems (i.e. “it’s turtles all the way down” in scale of turbulent flow) could be made accurate? That a climate model doesn’t need to be an accurate simulation of heat and fluid flows but merely needs to “get the gist” of them?

  10. The brain is massively parallel, fault tolerant to a degree (neurons get killed quite often), and built out of fairly crude components that are wildly interconnected. You don’t need to accurately simulate something that is itself inaccurate, you just need to make a rough equivalent model.

    Analog components with digital connection switching and memory storage might be a simpler way to recreate it.

    1. Which brings up an equally interesting corollary: not whether we can download a brain to a computer, but whether we can build a computer as efficient and reliable as the brain.

  11. Well, I believe it is a problem of algorithms. If we can map the thought process of a brain with an algorithm, or a series of algorithms, that doesn’t need gobs of processing power thrown at it to produce output in a reasonable amount of time, then it doesn’t matter whether the processor is made of meat or silicon.

    One area I think is at the forefront of such AI algorithms is research into GPS navigation. There is a BBC show on Netflix called “The Secret Rules of Modern Living: Algorithms,” where I found out that shipping companies are paying big bucks to companies like Google to discover a better algorithm that can calculate the quickest route between two points with as little processing power and time as possible. As good as a navigation program like ‘Waze’ is, anybody familiar with a certain stretch of road and its associated side streets can generally come up with a faster route most of the time. While we are only talking about saving maybe 5-10 minutes in most cases, this adds up when you are a shipping company with thousands of trucks driving about delivering items and time is of the essence. If AI could better approximate the innate human reasoning that lets us do something as simple as efficiently going from point A to B, it could open possibilities for other mundane tasks.
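    For what it’s worth, the classic algorithm underneath that routing problem is Dijkstra’s shortest path. A minimal sketch on a toy road network (the node names and travel times are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Return (distance, predecessor) tables for shortest paths from start."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

# Toy map: edge weights are travel times in minutes (made-up numbers).
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
}
dist, prev = dijkstra(roads, "A")

# Walk predecessors backward to recover the quickest A -> D route.
route, node = [], "D"
while node != "A":
    route.append(node)
    node = prev[node]
route.append("A")
route.reverse()
# route == ["A", "C", "B", "D"], total time dist["D"] == 8 minutes
```

    Real routing engines layer speedups such as A* heuristics and precomputation on top of this basic idea, which is presumably where the “big bucks” research goes.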

  12. Bleh. Rand, I share your agnosticism. I think there’s too much hand-waving on all sides, however dressed up with intricate functional arguments about functioning we don’t understand well enough.

    OTOH, maybe I just can’t follow those arguments well enough… 😉

  13. I think it’s possible, if you don’t mind doing without the “soul” as I understand it — but the hardware architecture would need to support having the entire memory base loaded into RAM, and having all elements of that memory base able to interact with all others, for as long as it took to recompile the personality, and then keep all new processing products also maintained in RAM indefinitely.

    …in exactly the same order that the original’s memories formed and interacted throughout his lifetime up to when the memory record was made.

    …and accounting for processing errors during those original interactions, including effects of health issues, medication, alcohol…

    Now, how introspective was the original? If he “lived the life of the mind” you may be waiting more than just one lifetime for all of this to compile.

  14. Nature built consciousness and human-level intelligence, on this planet, with ordinary substances and ordinary amounts of energy. It just took a long time.

    To me, that means absolutely that humans can also build consciousness and human-level intelligence. It is merely (and I use the term with the humblest appreciation for those doing the work) a matter of time and effort.

  15. “Scientists say the most powerful computer in the world is almost as powerful as a bee’s brain” has been Sunday-supplement fodder during slow news weeks going back at least thirty years. Every few years they reprint the same stories and only the names are changed. I can remember when the bee-brain comparison was being made to a Cray mainframe that an 80486-based PC of a decade later would put in the shade.

    When we figure out what consciousness *is* then perhaps we will be able to work on simulating it. In the meantime, even as our understanding of the physical hardware of our brains leaps chasm after chasm, even as our knowledge of cellular biochemistry increases exponentially year after year, still, the more closely our new tools allow us to look at the human brain, the more complexity we find.

    Maybe Moore’s Law will come to our aid in the end. Then again we are now building transistors out of lumps of matter only a few hundred atoms across. How much smaller and faster can the hardware get?
