Monday, January 31, 2011

Chinese Room

Reference
Title: Minds, Brains, and Programs
Author: John R. Searle
Published in: Behavioral and Brain Sciences, 3(3), 1980

Summary
This article is the origin of the Chinese Room concept and argument. In essence, the argument states that if an English-speaking person can follow a set of rules to produce a written reply to a query posed by a Chinese speaker, well enough that the Chinese speaker would falsely believe them to understand Chinese, the English speaker still does not understand the Chinese language. Searle then extends this claim to AI: a program is not truly intelligent simply because it can produce a human-like reply to a query posed in our language. To summarize the argument in just a few words, Searle argues that Strong AI manipulates syntax with no comprehension of semantics.
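To make the rule-following step concrete, here is a deliberately trivial Python sketch of a purely syntactic responder. The phrases and the rule table are invented for illustration; Searle's thought experiment involves a person and a paper manual rather than a program, but the point carries over: replies come from matching symbol strings, with nothing anywhere representing what the symbols mean.

```python
# A minimal sketch of the Chinese Room's rule-following step: the
# "rule book" is just a syntactic mapping from input symbols to output
# symbols. All rules and phrases below are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
}

def room_reply(query: str) -> str:
    """Return a reply by comparing character strings only.

    Nothing in this function models what the symbols are about;
    it is pure symbol shuffling.
    """
    return RULE_BOOK.get(query, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(room_reply("你好吗？"))  # fluent-looking output, zero semantics
```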

Discussion
This argument is one of the main counterarguments to the Turing Test. Between the two, however, I would tend to side with the Turing Test. At the most basic level, we must all believe that other people are cognitive, self-aware, and real; without this belief we fall into solipsism. By Searle's own reasoning, we have no more proof that other individuals are self-aware than we have that a machine which passes the Turing Test is self-aware, and if we cannot tell the difference, it only follows that we should extend the same courtesy to both. The same point applies to the occupant of Searle's own Chinese Room.

In addition, and as a supplement to my previous argument, I feel that if an individual can successfully follow an instruction manual to compose a statement in a language they do not speak, that individual can still be said to understand the statement in some sense. This requires a somewhat abstract perspective: consider that the individual and the instruction manual together form a separate entity, one that can understand and reply to Chinese queries. The key point is that the understanding is abstracted to a new entity of which both the manual and the individual are only parts. I feel this is a fair point to make, because Searle's argument encompasses the entire system in its premise, yet its conclusion focuses on only part of that system. That conclusion is then paralleled to a different system and used to conclude that the second system as a whole is equivalent to only part of the first. A more appropriate parallel would be that if a computer were performing the same task, the CPU would have no understanding of the meaning of what it was translating; this, however, says nothing about the comprehension of the computer as a whole. A sketch of this framing follows.
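Here is a rough Python sketch of that "new entity" framing, with hypothetical class names of my own choosing: the manual is inert data, the operator is a general rule-follower, and only their composite produces Chinese replies.

```python
# Hypothetical illustration of the systems view: the capacity to reply
# belongs to the composite, not to either part alone.

from dataclasses import dataclass, field

@dataclass
class Manual:
    """The rule book: inert lookup data with no behavior of its own."""
    rules: dict = field(default_factory=dict)

class Operator:
    """The person: follows any manual mechanically, holds no rules itself."""
    def apply(self, manual: Manual, query: str) -> str:
        # Pure symbol matching; the operator never interprets the symbols.
        return manual.rules.get(query, "？")

class Room:
    """Operator plus manual: the only level at which replies exist."""
    def __init__(self, operator: Operator, manual: Manual):
        self.operator = operator
        self.manual = manual

    def reply(self, query: str) -> str:
        return self.operator.apply(self.manual, query)

room = Room(Operator(), Manual(rules={"你好吗？": "我很好。"}))
print(room.reply("你好吗？"))  # the Room answers; neither part can alone
```

On this framing, asking whether the Operator understands Chinese is like asking whether a CPU understands the program it executes; the question only makes sense at the level of the Room.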


[Image: courtesy of Jim Carnicelli's AI blog, via Google image search.]
