The Chinese room argument is a refutation of strong artificial intelligence. "Strong AI" is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind. "Weak AI" is defined as the view that the computer plays the same role in studying cognition as it does in any other discipline: it is a useful device for simulating and thereby studying mental processes, but programming the computer does not by itself guarantee the presence of mental states in it. Weak AI is not criticized by the Chinese room argument.
The argument proceeds by the following thought experiment. Imagine a native English speaker, let's say a man, who knows no Chinese, locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols that are correct answers to the questions (the output). The program enables the person in the room to pass the Turing test for understanding Chinese, but he does not understand a word of Chinese.
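The purely formal character of the rule following can be made concrete in a few lines of code. The sketch below is a toy illustration, not anything from Searle's text: the rulebook entries and the default reply are invented, and a real question-answering program would be vastly larger, but the principle is the same. The program pairs input symbol strings with output symbol strings by matching their shapes alone; nothing in it represents what any symbol means.

```python
# A toy "Chinese room": input symbols are matched against a rulebook and
# the paired output symbols are passed back out. The entries are invented
# for illustration; the point is that the lookup is purely syntactic,
# operating on the shapes of the symbols alone.

RULEBOOK = {
    "你好吗？": "我很好。",        # "How are you?"          -> "I am fine."
    "你会说中文吗？": "当然会。",  # "Do you speak Chinese?" -> "Of course."
}

def answer(question: str) -> str:
    """Return whatever symbol string the rulebook pairs with the input.

    The match is on symbol shapes only; no meaning is ever consulted,
    so producing the correct output never requires understanding it.
    """
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(answer("你好吗？"))  # prints 我很好。 with no understanding anywhere
```

A rulebook this small is trivially exposed by further questioning; the thought experiment asks you to imagine the program made good enough to pass the Turing test while remaining, like this one, nothing but symbol manipulation.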
The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.
The larger structure of the argument can be stated as a derivation from three premises.
1. Implemented programs are syntactical processes.
2. Minds have semantic contents.
3. Syntax by itself is neither sufficient for nor constitutive of semantics.
Conclusion: Implemented programs are not constitutive of minds. Strong AI is false.
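For readers who want to check the derivation's validity, here is a schematic rendering in Lean. The predicate names are glosses introduced here, and premises 1 and 3 are compressed into a single existential premise (the Chinese room itself: an implemented program with no semantic content); this is one validity-preserving reading, not Searle's own formalization.

```lean
-- Schematic reading of the derivation. All names are glosses, not
-- Searle's text; premises 1 and 3 are compressed into `room`.
variable {System : Type}
variable (Program Semantic Mind : System → Prop)

-- p2   : premise 2, minds have semantic contents.
-- room : the Chinese room, an implemented program with no semantics.
-- Conclusion: implementing a program is not sufficient for having a mind.
theorem program_not_sufficient_for_mind
    (p2 : ∀ s, Mind s → Semantic s)
    (room : ∃ s, Program s ∧ ¬ Semantic s) :
    ¬ ∀ s, Program s → Mind s :=
  fun h =>
    match room with
    | ⟨s, hp, hns⟩ => hns (p2 s (h s hp))
```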
Why does the man in the Chinese room not understand Chinese even though he can pass the Turing test for understanding Chinese? The answer is that he has only the formal syntax of the program and not the actual mental content or semantic content that is associated with the words of a language when a speaker understands that language. You can see this by contrasting the man in the Chinese room with the same man answering questions put to him in his native English. In both cases he passes the Turing test, but from his point of view there is a big difference. He understands the English and not the Chinese. In the Chinese case he is acting as a digital computer. In the English case he is acting as a normal competent speaker of English. This shows that the Turing test fails to distinguish real mental capacities from simulations of those capacities. Simulation is not duplication, but the Turing test cannot detect the difference.
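The point that the test is purely behavioral can itself be put in code. In this hedged sketch (the probe questions, expected answers, and the rote system are invented, and behavioral_test is a toy stand-in for the Turing test, not a real protocol), the test inspects only input/output pairs, so any system that reproduces the right outputs passes, however it works inside.

```python
# A purely behavioral test: it sees only the answers, never the mechanism.
PROBES = ["2+2?", "capital of France?"]
EXPECTED = ["4", "Paris"]

def behavioral_test(system) -> bool:
    """Pass iff the system's outputs match the expected ones on every probe."""
    return all(system(q) == a for q, a in zip(PROBES, EXPECTED))

def rote_table(question: str) -> str:
    # A simulation: the answers are looked up, nothing is grasped.
    return {"2+2?": "4", "capital of France?": "Paris"}[question]

print(behavioral_test(rote_table))  # True: the test cannot distinguish
                                    # rote lookup from genuine competence
```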
There have been a number of attempts to answer this argument, all of them, in the view of this author, unsuccessful. Perhaps the most common is the systems reply: "While the man in the Chinese room does not understand Chinese, he is not the whole system. He is but the central processing unit, a simple cog in the large mechanism that includes room, books, etc. It is the whole room, the whole system, that understands Chinese, not the man."
The answer to the systems reply is that the man has no way to get from the SYNTAX to the SEMANTICS, but neither does the whole room. The whole room also has no way of attaching any thought content or mental content to the formal symbols. You can see this by imagining that the man internalizes the whole room. He memorizes the rulebook and the database, he does all the calculations in his head, and he works outdoors. All the same, neither the man nor any subsystem in him has any way of attaching any meaning to the formal symbols.
The Chinese room has been widely misunderstood as attempting to show things it does not show. It is not an argument that machines cannot think: the brain is a machine, and brains think. Nor is it an argument that only brains can think: an artificial machine that duplicated the causal powers of the brain could think. What the argument shows is that implementing a program is not by itself sufficient for thinking.
See also COMPUTATIONAL THEORY OF MIND; FUNCTIONALISM; INTENTIONALITY; MENTAL REPRESENTATION
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, vol. 3 (together with 27 peer commentaries and the author's reply).