Below is the reply I sent to Technology Review; it appeared in the September 2007 issue.
What a thrill - just like fireworks on the Fourth of July. There, simultaneously in my mailbox, were two brilliant articles, each in its own way contributing to the solution of one of the most important problems in science: the problem of consciousness and its relation to the brain and to current methods of artificial intelligence (AI).
Those articles were
1) Professor David Gelernter's persuasive essay on consciousness and the problems created by the lack thereof in AI systems, "Artificial Intelligence Is Lost in the Woods," Technology Review, July/August 2007, and
2) Professor Joe Tsien's tour de force of experimental evidence, "The Memory Code," Scientific American, July 2007.
This fortuitous juxtaposition provides the grist for this letter.
First, I must side, along with David Gelernter, with the anti-cognitivists: AI software running on von Neumann machines, as presently constituted, will never be conscious, and without consciousness there can be no experience, human or otherwise. Believing that somehow consciousness will arise like a deus ex machina on your Pentium is strictly an article of religious faith.
To illustrate the importance of conscious experience, consider the knotty little problem of free will.
Free will, reduced to its essentials, means I can choose to do what I want. Examining how we humans do it illuminates what may be required of a machine.
Tonight I carefully considered writing this letter rather than going ballroom dancing, which I do most Friday nights. True to my heritage as a former MIT undergrad, I decided to stay home and write. You may question my judgment, but here's how I decided and thereby exercised my free will.
1. I thought about going dancing. I have detailed memories of this, since I do it frequently. As these memories are evoked, there are also associated emotions: E(dancing), following Gelernter's notation. I would replace his single emotional bar code with a large set of associated multi-sensory evocations, each with its cathected emotion: dancing with the ladies, listening to beautiful music, and chatting with friends, to name a few subsets.
2. I thought about writing this letter. Similarly, this evoked a set of associated multi-sensory images with accompanying emotional states: the satisfaction of preparing a detailed letter; the thrill of contributing to this dialog (a subject that I studied with almost religious zeal during one term in 1967, when I literally lived night and day in the MIT undergrad library); the memories of spirited public debates, similar to the Kurzweil-Gelernter debate, among MIT Professors Jerry Lettvin, Marvin Minsky, Seymour Papert, and their arch nemesis, philosopher Hubert Dreyfus of the University of California, Berkeley, who, like his colleague John Searle, argued the anti-cognitivist position.
3. I weighed my potential satisfaction from each of these two possibilities and - lo and behold - alternative 2 triumphed.
As Gelernter states, primary experience is essential to the operation of the conscious mind. That is what it means to be conscious: to have experiences. Those experiences include not only emotions but the primary multi-sensory experiences themselves with their qualia. The entire experience is synonymous with neural activation. It bears emphasizing that we have no direct access whatsoever to the outside world. All we ever have access to is the world as represented by neural activation. The inside of the cranium is quite dark. When the excitability of those neurons is altered by drugs or anesthesia, the qualia are concurrently and precisely altered. Anesthesiologists verify this daily.
The cognitive scientists and neuroradiologists who look at functional MRIs tell us that the reactivation of these memories corresponds to activation of the same populations of neurons in the primary sensory areas of the posterior cortex that were active during the original experience. Hence the prevailing belief that our stored memories reuse the same filtered models of reality represented by the neuronal populations that were originally evoked.
It is clearly illogical to assume that anything similar is happening in the Pentium chip that is processing this letter. In fact, we know precisely what is happening there with its register transfers and comparisons. No images there - not even filtered images. Consciousness in digital computers? Not happening. Never gonna happen.
However, Professor Joe Tsien's Scientific American article (and a huge related body of neurophysiological evidence) provides the solution. Primary experience is initially recorded, then replayed later using the large populations of neurons in the primary sensory cortices, which then converge in the hippocampus. The re-evocation of this population is the objectively verifiable manifestation of the subjective re-experience of a particular memory.
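The record-then-replay idea can be illustrated with a toy associative memory. The sketch below is a classic Hopfield-style network, offered purely as a stand-in for the principle that a memory is stored in, and replayed by, the same population of units - it is not Tsien's actual model, and the patterns and cue are invented for the example.

```python
import numpy as np

# Toy Hopfield-style associative memory: the same unit population that
# stored a pattern reactivates (replays) it from a degraded cue.
# Illustrative only; patterns and sizes are arbitrary choices.

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])

# Hebbian storage: units that fired together are wired together.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

cue = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # pattern 0 with one unit flipped
state = cue.copy()
for _ in range(5):  # synchronous "replay" updates
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, patterns[0]))  # -> True: the stored memory is recovered
```

The point of the toy is only this: retrieval is re-evocation of the original population's activity pattern, which is the objectively observable face of the subjective re-experience.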
While David Gelernter rightfully relegates software running on digital computers to the realm of simulated unconscious intelligence, I believe that networks of artificial neurons have considerably more promise. Consider (and look at the websites of) the machines being built by Kwabena Boahen's group at Stanford ( stanford.edu/group/brainsinsilicon/ ) or, earlier, by Carver Mead's student Misha Mahowald at Caltech ("The Silicon Retina," Scientific American, May 1991). There are also hybrids in which the detailed properties of real neural circuits are emulated in VLSI: the work of Paul Rhodes's group at Evolved Machines in Palo Alto (evolvedmachines.com), or the work of Theodore Berger's group at the University of Southern California on an artificial hippocampus (IEEE Engineering in Medicine and Biology, Sept/Oct 2005). These are machines that exhibit, in real time, the detailed functional properties of neural networks. Jeff (Palm Computing) Hawkins, in his book On Intelligence and his work at Numenta.com, also acknowledges the singular importance of neural networks.
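To give a flavor of the dynamics these silicon systems emulate directly in analog circuitry, here is a minimal leaky integrate-and-fire neuron in discrete time. All parameters (time constant, threshold, drive) are arbitrary values chosen for the example, not taken from any of the cited systems.

```python
# Minimal leaky integrate-and-fire neuron, simulated in discrete time.
# Neuromorphic chips implement dynamics like these in analog hardware;
# the parameters below are illustrative, not from any cited project.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_thresh:     # threshold crossing -> spike
            spikes.append(t)
            v = v_reset       # reset after the spike
    return spikes

# Constant supra-threshold drive produces a regular spike train.
spikes = simulate_lif([1.5] * 100)
print(spikes[:3])  # -> [21, 43, 65]
```

The contrast with register transfers is the point: the state here is a continuously evolving membrane potential, not a symbol being shuttled between registers.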
Digital computers are so second millennium. As my MIT classmate Ray (Singularity is Near) Kurzweil might say, plug that silicon retina into your optic nerve, and you won't know the difference. One of my dance partners, a superb ballerina with bilateral cochlear implants, certainly seems to be hearing something quite well.
Returning to free will - ok, let's say we've got a detailed model of experience in the form of a stored, filtered sequence of detected edges (or whatever) of a lady dancing. Who is the I that evaluates them to make the judgment to dance or to write a letter tonight?
The I is just a similar model, only it is of my body, including a lifetime of corporeal sensations and personal experiences with their associated emotions. Since early childhood each of us has learned the difference between self and not-self (the external world) and between self and other. That is our model of our bodies, our capabilities, our accomplishments, and so on. That model is the I, or ego. The ego model also includes motoric experiences. As a child, I observe that I can wiggle my fingers, I can walk, I can get my parents' attention; later, I can feed the dog, I can read a book; and still later, I can compare two experiences on the blackboard of consciousness, just as I can compare an apple and an orange in my hands. There is no reason that any of this could not be precisely modeled in silicon neurons.
As for emotions, surely these also correspond to patterns of neural activation. After all, there are lots of neurons in the limbic system; they must be doing something. Joseph LeDoux would agree (The Emotional Brain, 2004). When my stomach churns, that information, initially transmitted via the vagus nerve, eventually modifies the patterns in the model of me discussed above. Similarly, I attribute my ability to activate plans in my premotor cortex (leading, via pyramidal cells, to voluntary motor action) to the I model.
The illusion of free will might be explained as follows, as it occurs in Gelernter's high-focus (serial, analytic) mode. The I model is seen to choose between the dancing and writing simulations, selecting whichever corresponds to the greater pleasure or satisfaction: a) load E(simulation of dancing) into register A, b) load E(simulation of writing) into register B, c) compare A and B, d) execute the dancing plan if A > B, else go to the writing plan.
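The register-level steps above can be written out as a toy program. This is a deliberately cartoonish sketch: the emotional scores and the choose function are invented for the illustration, standing in for the E(·) evaluations described in the letter.

```python
# Toy sketch of the "free will as comparison" steps in the letter.
# The emotion scores below are invented values for illustration only.

def emotional_value(simulation):
    """Return a scalar E(simulation): the net emotional weight evoked
    by replaying a simulated course of action."""
    scores = {
        "dancing": 0.7,  # the ladies, the music, the friends
        "writing": 0.9,  # the satisfaction of joining the debate
    }
    return scores[simulation]

def choose(option_a, option_b):
    a = emotional_value(option_a)   # a) load E(option_a) into register A
    b = emotional_value(option_b)   # b) load E(option_b) into register B
    # c) compare A and B; d) execute whichever plan scored higher
    return option_a if a > b else option_b

print(choose("dancing", "writing"))  # -> writing
```

Of course, the letter's whole argument is that running these four steps on a Pentium yields a choice without an experience; the comparison becomes "free will" only when carried out over cathected neural simulations by an ego model.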
In summary: 1) I completely agree with Professor Gelernter's insistence on the centrality of qualia and with his dismissal of digital computers as a potential platform for consciousness. 2) On the other hand, it seems quite conceivable that work being done on real-time, parallel neural networks in silicon might at last give us artifacts capable of embodying qualia. 3) Networks so constituted might subserve models not only of experiences in the primary visual, auditory, and somesthetic modes but also models of the ego. 4) Such a machine might also, genuinely, believe that it can do what it wishes.
Robert L. Blum, MD, PhD (SB, MIT 1969)