
Tuesday, January 22, 2013

This is How Wrong Kurzweil Is

Yesterday I criticized Ray Kurzweil's prediction (made in a Discover article) of the arrival of sentient, fully conscious machine intelligences by 2029. I'd like to put more flesh on some of the ideas I talked about earlier.

Because of some of the criteria Kurzweil has set for sentient machines (e.g., that they have emotional systems indistinguishable from those of humans), I'll go ahead and assume that the kind of machine Kurzweil is talking about would have fears, inhibitions, hopes, dreams, beliefs, a sense of aesthetics, an understanding of (and opinions about) spiritual concepts, a subconscious "mind," and so on. Not just the ability to win at chess.

[Image caption: Microtubules appear to play a key role in long-term memory.]
I call such a machine Homo-complete, meaning that the machine has not only computational capabilities but all the things that make the human mind human. I argued yesterday that this requires a developmental growth process starting in "infancy." A Homo-complete machine would not be recognizably Homo sapiens-like if it lacked a childhood, in other words. It would also need to have an understanding of concepts like gender identity and social responsibility that are, at root, socially constructed and depend on a complex history of interactions with friends, parents, relatives, teachers, role models (from real life, from TV, from the movies), etc.

A successful Homo-complete machine would have the same cognitive characteristics and unrealized potentials that humans have. It would have to have the ability not just to ideate, calculate, and create, but to worry, feel anxiety, have self-esteem issues, "forget things," be moody, misinterpret things in a characteristically human way, feel guilt, understand what jealousy and hatred are, and so on.

On top of all that, a Homo-complete machine would need to have a subconscious mind and the ability to develop mental illnesses and acquire sociopathic thought processes. Even if the machine is deliberately created as a preeminently "normal," fully self-actualized intelligence (in the Maslow-complete sense), it would still have to have the potential to become depressed, have intrusive thoughts, develop compulsions, experience panic attacks, acquire addictions (to electronic poker, perhaps!), and so on. Most of the afflictions described in the Diagnostic and Statistical Manual of Mental Disorders are emergent in nature. In other words, you're not born with them. Neither would a Kurzweil machine be born with them; yet it could acquire them.

We're a long way from realizing any of this in silicon.

Kurzweil conveniently makes no mention of how the human brain would be modeled in a Homo-complete machine. One presumes that he views neurons as mini-electronic devices (like elements of an electrical circuit) with firing characteristics that, once adequately modeled mathematically, would account for all of the activities of a human brain under some kind of computer-science neural-network scheme. That's a peculiarly quaint outlook. Such a scheme would model the brain about as well as a blow-up doll models the human body.
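
To see what that outlook amounts to, here is a minimal sketch of the classic leaky integrate-and-fire model, the textbook version of a neuron as a circuit element (Python; the parameter values are illustrative, not fitted to any real cell):

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire: tau_m * dV/dt = -(V - v_rest) + r_m * I.

    Emits a spike whenever V crosses v_thresh, then resets V.
    Units are nominal (mV, ms, megohm, nA); values are illustrative.
    """
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(i_input):
        # Membrane leaks toward rest while integrating the input current.
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_thresh:                 # threshold crossing = "firing"
            spike_times.append(step * dt)
            v = v_reset                   # instantaneous post-spike reset
    return spike_times

# A constant 2 nA drive for 100 ms yields a regular spike train.
print(simulate_lif(np.full(1000, 2.0)))
```

A dozen lines capture the entire "firing characteristics" abstraction, and that is exactly the problem: everything in the list below is invisible at this level of description.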

Current mathematical models are impressive (see [3] below, for example), but they don't tell the whole story. It's also necessary to consider the following:

  • Neurotransmitter vesicle release is probabilistic and possibly non-computable. (A minimal simulation of the standard probabilistic picture appears after this list.)

  • Beck and Eccles [2] have suggested that quantum indeterminacy may be involved in consciousness.

  • It's likely that consciousness occurs primarily in dendritic-dendritic processing (about which little is known, except that it's vastly more complex than synapse-synapse processing) and that classical axonal neuron firing primarily supports more-or-less automatic, non-conscious activities [1][7].

  • Substantial recent work has shown the involvement of protein kinases in mediating memory. (See, for example, [9] below.) To model this realistically, it would be necessary to have an in-depth understanding of the underlying enzyme kinetics.

  • To model the brain accurately would require modeling the production, uptake, reuptake, and metabolic breakdown of serotonin, dopamine, norepinephrine, glutamate, and other synaptic substances in a fully dynamic way, accounting for all possible interactions of these substances, in all relevant biochemical contexts. It would also require modeling sodium, potassium, and calcium ion channel dynamics to a high degree of accuracy. (The Hodgkin-Huxley sketch after this list shows what even the classical baseline for that involves.) Add to that the effect of hormones on various parts of the brain. Also add intracellular phosphate metabolism. (Phosphates are key to the action of protein kinases, which, as mentioned before, are involved in memory.)

  • Recent work has established that microtubules not only maintain and regulate neuronal conformation; they also service ion channels and synaptic receptors, provide for neurotransmitter vesicle transport and release, participate in "second messenger" post-synaptic signaling, and are believed to affect post-synaptic receptor activation. According to Hameroff and Penrose [5], it's possible (even likely) that microtubules directly facilitate computation, both classically and by quantum coherent superposition. See this remarkable blog post for details.
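
Regarding the first bullet above: even the standard classical account of transmitter release, the del Castillo-Katz quantal model, is irreducibly probabilistic. Here is a minimal sketch of it (Python; the site count, release probability, and quantal size are illustrative placeholders, not measured values):

```python
import random

def quantal_release(n_sites=10, p_release=0.3, quantal_size=1.0):
    """Binomial (del Castillo-Katz) model of quantal transmitter release.

    Each of n_sites release sites independently releases one vesicle
    with probability p_release; the postsynaptic response is the number
    released times the quantal size. All parameter values illustrative.
    """
    released = sum(random.random() < p_release for _ in range(n_sites))
    return released * quantal_size

# The same presynaptic spike gives a different postsynaptic response
# on every trial; the variability is intrinsic, not measurement noise.
print([quantal_release() for _ in range(5)])
```

And that is the well-behaved classical part; whether the underlying release events are computable at all is precisely the open question.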

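Regarding the ion-channel bullet: the classical baseline there is the Hodgkin-Huxley formalism, a stiff system of coupled nonlinear differential equations for a single membrane patch with just two channel types. A condensed sketch (Python; the constants are the standard textbook squid-axon values):

```python
import numpy as np

# Hodgkin-Huxley squid-axon parameters (standard textbook values).
C_M = 1.0                            # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3    # max conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    v, n, m, h = -65.0, 0.317, 0.052, 0.596   # resting-state values
    vs = []
    for _ in range(int(t_max / dt)):
        # Ionic currents through Na+, K+, and leak channels.
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k  = G_K * n**4 * (v - E_K)
        i_l  = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        # Gating variables relax per their voltage-dependent rates.
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        vs.append(v)
    return vs

print(max(simulate()))  # peaks above 0 mV: an action potential fired
```

That is one idealized patch of membrane, with every stochastic, biochemical, hormonal, and cytoskeletal complication itemized above still missing.
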
Kurzweil is undoubtedly correct to imply that we'll know a great deal more about brain function in 2029 than we do now, and in all likelihood we will indeed begin to see, by then, machines that convincingly replicate certain individual aspects or modalities of human brain activity. But to say that we will see, by 2029, the development of computers with true consciousness, plus emotions and all the other things that make the human brain human, is nonsense. We'll be lucky to see such a thing in less than several hundred years—if ever.


References

1. Alkon, D.L. 1989. Memory storage and neural systems. Scientific American 261(1):42-50.

2. Beck, F. and Eccles, J.C. 1992. Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 89(23):11357-11361.

3. Buchholtz, F., et al. 1992. Mathematical model of an identified stomatogastric ganglion neuron. J. Neurophysiology 67(2).

4. Hameroff, S. 1996. Cytoplasmic gel states and ordered water: possible roles in biological quantum coherence. Proceedings of the Second Advanced Water Symposium, Dallas, Texas, October 4-6, 1996. http://www.u.arizona.edu/~hameroff/water2.html

5. Hameroff, S.R. and Penrose, R. 1996. Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness. In: Toward a Science of Consciousness: The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak, and A.C. Scott (eds.). MIT Press, Cambridge, MA. Also published in Mathematics and Computers in Simulation 40:453-480.

6. Hameroff, S., Kaszniak, A., and Scott, A. (eds.) 1998. Toward a Science of Consciousness II: The 1996 Tucson Discussions and Debates. MIT Press, Cambridge, MA.

7. Pribram, K.H. 1991. Brain and Perception. Lawrence Erlbaum, New Jersey.

8. Rovelli, C. and Smolin, L. 1995. Discreteness of area and volume in quantum gravity. Nuclear Physics B 442:593-619.

9. Shema, R., et al. 2007. Rapid erasure of long-term memory associations in the cortex by an inhibitor of PKMζ. Science 317(5840):951-953.

15 comments:

  1. None of your claims about quantum effects in brain function are supported by science. Hameroff and Penrose have zero credibility in this field.

    Furthermore, if you read Kurzweil's book, he states pretty unambiguously that he believes not all the details are required for brain simulations to have complex properties that surprise us.

  2. I think the real indeterminacy here is regarding whether this matters.

    As a functionalist, I believe that if a system can interact persistently and extensively with a biologically-seated mind in a way that is functionally indistinguishable from a biologically-seated mind, the system is a fully functional mind (the Turing test).

    Chomsky's recent comments on AI sent me off exploring Gallistel's claims (which Chomsky found to be worthy of favourable mention) that our neural network model of memory is utterly inadequate to explain the Scrub Jay caching phenomena.

    If our current synaptic model is wrong in this way, it would leave open Penrose-type indeterminacy possibilities, so I was interested, because Penrose's 'pessimism' (like Dreyfus's and Searle's) has never swayed me.

    Then I read Gualtiero's review of Gallistel and King's 'Memory and the Computational Brain':

    http://philosophyofbrains.com/2010/03/13/challenging-neuroscience-to-explain-cognition.aspx

    The home page of that community is worth checking out:

    http://philosophyofbrains.com/

  3. Anonymous, 11:43 AM

    You don't need a Homo-complete machine (as you defined it) to have a homo-superior machine - it's possible, but not required.
    Imagine an AI agent that really understands psychology, subconsciousness, and emotions - say, one that in a double-blind test performs better than the average psychotherapist, hostage negotiator, and used-car salesman. That does not require the AI agent itself to exhibit/implement those emotions.

    Quite a few of the anxieties and illnesses that you list as Homo-complete can be considered bugs in Homo sapiens. We are already trying to fix these bugs in ourselves using medicine, psychotherapy, drugs, and (in the future) genetic changes - so if we make a new intelligence, we'd avoid these bugs.

    An intelligence can be homo-sapiens-equal (or superior) while being completely different.

  4. Anonymous, 11:58 AM

    Many people mistakenly anthropomorphize AI. Sentience does not require emotion. Kurzweil did not make those claims in any of his books. The test you are alluding to is called the Turing test, and it comes from Alan Turing, not Ray Kurzweil. The Turing test merely requires that an AI with a text chat system be able to fool a human into thinking it is human.

  5. Well, if Homo-completeness of the kind you describe were required, Kurzweil's machines would have to start collecting their childhoods pretty soon. As a neuroscience-specialized psychologist and a full-time IT professional, I see a system with good capabilities to *imitate* and *replicate* features *resembling* human intelligence just around the corner. A better Siri. A better Android "assistant". But a *proactive* intelligence outside pre-programmed algorithms... I have yet to see an inkling of that coming to light. Mostly it seems that the IT side does not understand human intelligence, the psychologist/neuroscientist side does not understand algorithms, and both are sure they have the other side covered. It reeks of hype.

  6. Anonymous, 12:45 PM

    Perhaps this is a case of "Fake it 'til you make it." I suspect that the sophistication of AI required to meaningfully interact with a human as an assistant or accomplice is far short of the real McCoy.

    It's a crude example, but look at how far real-time 3D rendering has progressed despite our reliance on inferior rasterization techniques. Simulating the real process via ray tracing is slow but produces realistic results. The gap is closing to a point where few people care. Even if Kurzweil might not be tuned in to this point, I'm sure Google's upper echelon is.

  7. Anonymous, 1:45 PM

    There's no point in making a machine be exactly like a human. There are already 7 billion of them on the planet and there is a much easier mechanism (sex) that allows us to create more of them. As AI becomes more feasible, the AI systems may become conscious but there is no reason to add emotions to their programming. If humans only had the neocortex and not the limbic / animal brain then we would not be as emotional. AI will take the best of humans and the best of computers and robotics.

  8. If Google is willing to fund Ray Kurzweil's efforts, I don't see the point of all this Zeno-ish negativity. The human brain is unquestionably a machine, though an amazingly complex one that may employ probabilistic mechanisms. Whether the substrate supporting consciousness can be identified and understood in our lifetime is a valid question, but if Google and Kurzweil are willing to try, I say "go for it!"

  9. Anonymous, 5:49 PM

    "But a *proactive* intelligence outside pre-programmed algorithms...I have yet to see an inkling of that coming to light."
    Either you haven't seen the state of the field, or what you describe as "proactive" is not actually possible. There has been very good work (especially at Google) toward anticipating user actions, and while the user doesn't see a lot of it, Google Now is an excellent (although basic) example.

    "Mostly it seems that the IT side does not understand human intelligence - the psychologist/neuroscientist side do not understand algorithms, and both are sure they have the other side covered."
    This is why the field of Cognitive Science exists: to connect Computer Science and Psychology into a field that combines the parts of each that matter most for creating a brain/mind. The subfield I'd look at (especially if, like myself, you actually believe that a computer mind could be created in our lifetimes) is Cognitive Modeling. Depending on the model used, most of the low-end computations are brushed away, leaving a system that tends to be fairly accurate at predicting the actions people will make.

    Replies
    1. Anonymous, 11:29 AM

      I have studied next to Cognitive Scientists. I am not impressed. They were people who did not understand psychology or algorithms, but instead had a VERY superficial skim of both. I saw them as worse than IT people building AIs or cognitive psychologists modeling the brain as if it were a deterministic machine. They just sucked. Please give references to significant contributions to the field made by Cognitive Scientists.

    2. Anonymous, 11:31 AM

      Also, yes, I read patents by Meyer and other Googleists. And I see Google in action.

  10. "There have been very good work (especially at Google) toward anticipation of user action, and while the user doesn't see a lot of this, Google Now is an excellent (although basic) example." Hmm. Perhaps we should define what we mean by sentient, fully conscious machine intelligence (as per topic of the op), but for me, combining a users geographical location based on a person's mobile device's GPS info with calculations on numbers acquired through e.g. web-services does not a sentient machine nor "anticipation" make for me. "Anticipating user action"...I didn't see Google Now doing "anticipation" just yet for me, I am happy if it does that for you. Could you post some references to your findings on this, or literature on the same?

  11. Your critique is based on the fact that the Kurzweil machine will not exhibit all properties of all human beings. This is flawed: no human being exhibits *all* properties of *all* human beings. Each human being is unique, with unique features. What makes us human is the intersection of all properties of all human beings (leaving aside some properties / some human beings?)

    So how do we decide if the machine has human-like intelligence? The Turing test was suggested for that long ago, and that is what we will use. If you cannot tell whether the machine is human or not, it is human.

    Your argument that all human beings need experience to become human is also flawed: children are human, and have little experience. Besides, experience can be gathered in very different ways. The fact that all past human beings have used the only method available to them (interaction with the physical world) does not mean that it is the only possible method - in the same sense that legs are not the only useful method of locomotion; the wheel, which nature never produced, works very well too.

    New sensorial capabilities (highly parallelized), lots of available information, and raw processing power can go a long way toward shortening the learning process. We could even apply arbitrary "natural selection" pressure, reduce mutation periods, increase the reproduction rate, and so on. Running this in a virtual world, with increasing computing power, offers lots of possibilities. The time scale could be much shorter than the path we followed in the real world to reach our current state. (A toy version of this loop is sketched below.)
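
    A toy version of that selection/mutation/reproduction loop might look like the following (Python; the bit-string "genome" and count-the-ones fitness function are placeholders for whatever a real virtual world would actually reward):

```python
import random

GENOME_LEN = 32        # placeholder "genome": a fixed-length bit string
POP_SIZE = 50
MUTATION_RATE = 0.02   # knob corresponding to "reducing mutation periods"

def fitness(genome):
    # Placeholder selection pressure: count of 1-bits. A real virtual
    # world would substitute some behavioral measure here.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def evolve(generations=200):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Selection: keep the fitter half, then refill the population
        # with mutated copies (raising the survivors' reproduction rate).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(pop, key=fitness)

print(fitness(evolve()))  # approaches GENOME_LEN as selection proceeds
```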

    As a secondary critique of your pessimistic tone: building an intelligent machine does not mean building an artificial human. The universe is (probably) full of intelligent beings very different from us. Even if Kurzweil intends to build an artificial human, that is of little interest. Building an artificial intelligence is far more useful (except for purely empathic tasks).

  12. It's quite interesting to read your summary of various neural phenomena.

    However, I don't see where you are addressing anything that Kurzweil claims. Half of your article says that exact brain emulation has unexpected implications. The other half says that it is complicated. I suspect Kurzweil would be the first to agree with both of these claims.

  13. Anonymous, 12:05 PM

    Anyone who makes assertions about the future 50 years from now is a fraud. Kurzweil cannot predict exactly what words/thoughts will pop into his mind twenty minutes into the future, let alone predict what the entire world will be doing in 50-100 years. He can fantasize about what he would LIKE to happen - as an atheist, it makes sense that he would want to be immortal - but that does not mean it WILL happen. Of course, what does not make sense is that he is an atheist with the opinion that a singularity could exist, where man becomes GOD (or indistinguishable from a GOD), and yet he will not accept that this technological singularity could have already happened with an alien race (thus making them GOD) and contradicting his atheistic beliefs.

