On Intelligence; the Chinese Room Argument

I'm a fan of the cognitive sciences. I enjoy reading books about neuroscience, AI, neural networks, and psychology. I read these books for the pure pleasure of expanding my understanding; stretching one's mind (especially about minds) is very entertaining!

My latest excursion on this front was Jeff Hawkins's new book, On Intelligence. Hawkins is best known as the founder of Palm Computing and Handspring, and as the inventor of the Graffiti handwriting alphabet. But he's also a brain freak! Apparently, he's been reading the neuroscience literature for most of his adult life, and he runs a research facility called the Redwood Neuroscience Institute.

On Intelligence has a very ambitious aim: to provide the long-missing framework that explains intelligence in terms of neural organization and functioning.

Hawkins's theory is really a theory of how the neocortex works; the author largely dismisses the contributions of the 'primitive' brain to intelligence. The gist of the theory is that the neocortex implements a hierarchical memory-prediction machine: it tries to predict the future based on the patterns it has stored (memory). As sensory input flows up into the various layers of the cortex, the cortex tries to match the input to previously seen patterns. Matched patterns at lower levels create abstractions to be used by higher levels. The flow of information proceeds not just up but also down the hierarchy: when a pattern matches, lower levels are biased about what to expect next. Hawkins makes heavy use of the anatomical fact that there are more 'down' pathways than 'up' pathways to support this argument. Information is processed into higher and higher abstractions as signals propagate upward, while predictive information trickles downward as a result.
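
To make the mechanics concrete, here is a toy sketch in Python -- entirely my own illustration, and vastly simpler than anything in the book. Each level just memorizes which pattern followed which, passes recognized patterns upward as abstractions, and returns its expectation of what comes next, which can bias the level below:

    # Toy memory-prediction hierarchy (illustrative only, not Hawkins's model).
    class Level:
        def __init__(self):
            self.transitions = {}  # memory: pattern -> pattern that followed it
            self.last = None

        def observe(self, pattern):
            """Match input against memory, learn the sequence, return a prediction."""
            if self.last is not None:
                self.transitions[self.last] = pattern  # memorize what we just saw
            self.last = pattern
            return self.transitions.get(pattern)       # what we expect to come next

    letters = Level()  # lower level: sees raw characters
    words = Level()    # higher level: sees whole words (abstractions from below)

    for word in "the cat sat on the mat the cat".split():
        for ch in word:
            letters.observe(ch)           # raw input flows up, letter by letter
        prediction = words.observe(word)  # a recognized word is passed upward
        if prediction:
            # Prediction flows down: the word level biases the letter level
            # about which character to expect first.
            print(f"after {word!r}: expecting {prediction!r} "
                  f"(letter level primed for {prediction[0]!r})")

Feed it enough text and the upper level starts anticipating the lower level's input before it arrives -- which is the whole point of the hierarchy.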

The definition of intelligence, according to Hawkins, is the ability to predict the future based on the world-model you have built in your neocortex. This contrasts with Turing's behavior-based definition of intelligence, as embodied in the Turing Test. Hawkins notes, correctly I think, that intelligence can exist without behavior (as during navel-gazing sessions).

Though the theory is oversimplified (by the author's own admission), it is the most tractable theory of cognition I have ever read. I wholeheartedly recommend the book for its mind-stretching description of our minds.

Now, on to a small criticism. At one point in his book, Hawkins gives an approving nod to John Searle for his famous 'Chinese Room' criticism of AI. For background on this argument, see the Wikipedia entry on the Chinese room. I disagree with Searle's argument, and I was very surprised that Hawkins concurred, especially given the arguments he makes throughout his own book. The "Systems Reply", in my opinion, is the best counterargument. It says that the person in the room doesn't understand Chinese, but that the combination of the book, the person, and the reams of scratch paper, taken as a whole, does understand Chinese.

Consider what it really takes to answer questions about a story: the ability to recognize not just words but their meaning. And not just meaning in a dictionary-lookup sense, but the ability to compare the narrative to your internal model of the world. The simple linear stream of words builds in the reader an enormous, predictive model of the world in the story, one based on the reader's own world model.

I suggest that the book needs to contain an entire memory-prediction hierarchy, an entire world model of some sophistication, in order to answer questions about the story. There can be no intelligent commentary on the story without the predictive richness of a mind. Searle misleads us into thinking that the book, the program, is a slim volume that merely implements a translation algorithm. The book needs to be enormous, essentially capturing the representational richness of the neocortex. Perhaps the book would have to be a detailed map of the relevant neuronal structures -- like in "A Conversation with Einstein's Brain", Hofstadter's dialogue in "The Mind's I".

If the Chinese Room, as a whole, implements a memory-prediction hierarchy (to use Hawkins's term), then it is capable of really understanding Chinese. Case closed.

Comments on "On Intelligence; the Chinese Room Argument":

Comment by jeremydt@gmail.com

Posted at Mar 21st 2005.
I don't agree with your interpretation of the Chinese room problem. True, the book would be enormous if it were to answer all the questions about a story, but it could simply be a large book, written by an intelligent human who does understand Chinese, that attempts to predict every possible question. In this case, perhaps it might not answer EVERY possible question, but maybe it would answer ALL questions posed to it by 99.9% of the people who ask it -- so this 99.9% would think the 'room' was intelligent. And even though the book was written by an intelligent being (just as a computer is programmed by one), the book, room, and hapless human inside do not understand Chinese.

Comment by mk

Posted at Mar 22nd 2005.
Jeremy, you seem to be suggesting the book would be composed of an enormous look-up table of Chinese-story to English-story translations. If that's the case, I agree: the system wouldn't really understand Chinese. But a book written in that manner would be so impossibly big that it could never be written. The number of "possible stories in Chinese" is astronomical ;-) Real understanding comes from relating inputs from the outside world to patterns stored in your brain -- and letting those patterns, and their never-ending interactions, make predictions about future inputs. The book, as I suggest it must be written, can't really be written or programmed as such. It has to be raised -- like a child! It will develop its own world model as it seizes on patterns in the input. Like I suggested in my post, the book might even have to be a mapping of neural structures. I do feel pretty bad for the human being tediously tracing and updating action potentials across a paper brain, however. ;-)
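
To put a rough number on "impossibly big" (the figures below are invented purely for scale):

    # Back-of-the-envelope: an exhaustive lookup table needs one entry per
    # possible story. Take a toy 1,000-word vocabulary and 100-word stories.
    vocabulary = 1000
    story_length = 100
    entries = vocabulary ** story_length
    print(f"~10^{len(str(entries)) - 1} entries")  # ~10^300

That's around 10^300 entries; the observable universe holds only about 10^80 atoms. No author, however intelligent and patient, enumerates that table.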