- Chinese box
- (or Chinese room) A thought experiment introduced by the American philosopher J. R. Searle in ‘Minds, Brains, and Programs’, published in the journal Behavioral and Brain Sciences (1980). We suppose that I am locked in a room and given a large batch of Chinese writing, although I know no Chinese. I am also given, in English, instructions for returning particular batches of Chinese (‘answers’) in response to other batches (‘questions’). I do this by manipulating uninterpreted batches of Chinese script, recognized purely by their shapes. Searle's claim is that ‘as far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements’. The point of the thought experiment is that in this example I have everything that artificial intelligence can put into me by way of a program, yet it is obvious that I do not understand Chinese. The conclusion is intended to undermine the thesis of ‘strong AI’: that appropriately programmed computers have understanding, or cognitive states, or that their programs can help explain human cognition. The thought experiment has been heavily criticized, most forcefully on the grounds that it is the overall system (not just I myself, in the middle, shifting paper) that is appropriately compared to a programmed computer, but also on the grounds that the strong AI research programme is entitled to develop ways of bringing symbols into further interaction both with the environment and with the behaviour of the machine, and that these together generate a better model of the cognitive subject. Searle's own response insists that anything properly characterized as a thinker must have appropriate causal powers, but it also suggests, surprisingly, that such powers essentially require biology or ‘wetware’ rather than hardware.
Philosophy dictionary. Academic. 2011.