

attacks on artificial intelligence, and as a major philosophical problem for everybody. The methodological criticisms of the project have already been adequately aired, but none of these really attacks the logicist position as a descriptive project, merely as a causal account of common sense reasoning [8, 33].

This paper will present a position orthogonal to that of Hayes. Hayes' initiative clarified many of the issues associated with common sense, and other developments in comparative and developmental psychology have further highlighted the apparently fundamental nature of naive physics [6, 40], but they have also revealed a deeper and bigger problem than that of naive physics: naive psychology.

Naive psychology [7, 24] can be thought of as the natural human ability to infer and reason about other people's mental states: in short, to see one another as minds rather than bodies. This is an issue that artificial intelligence must also address. Although people see one another as minds, not simply bodies, they do not see computers as minds in the same way. If artificial intelligence is to overcome this barrier, we humans must be able to see minds in artifacts, to ascribe mental states to artificial intelligences in the same way that we do to people.

An example from Haugeland [22] will make this clearer. Imagine that somebody presents you with an object and tells you that it can play chess. How can they convince you that it really does? You can look at the behaviour of the system, but that behaviour might be open to different interpretations depending on how the system communicates its moves. In practice, you decide by interpreting its behaviour in a consistent way and checking whether the moves that it communicates are legal and sensible with respect to the rules of chess as you know them. Haugeland's point is that the meaning of the behaviour isn't an intrinsic property of the behaviour, but is ``attributed in an empirically justified interpretation'' [22]. But this ability to make the interpretation is itself in part a property of you, the judge.

Now expand the problem to the whole of human interaction, which is similarly game-like. The `moves' can again be interpreted in many different ways, but if the moves are to tell a coherent story, they must be compatible with the laws and heuristics of other people's interpretation. Human interaction, like chess, isn't usually learned from books of rules, but by people learning from one another; it depends more on human agreement than on adherence to sets of explicitly specified rules. Just as learning chess from watching only one player's moves would be almost impossible, the same is true of human social interaction. And any good chess program will depend on a player's interpretation of the moves, in terms of (often social) constructs such as `attacking' and `supporting.'

Following on from this game metaphor for human interaction, there is evidence that a lot of human intelligence is `Machiavellian' [4] in the way people use it to outwit each other and to recognise and manipulate one another's mental states; our social environments are considerably more complex than our physical ones. Survival in these social environments requires us to become ``natural psychologists'' [27], capable of recognising and reasoning about other people's mental states from their behaviour. Humphrey [27] even suggests that naive physics may, in part, be itself derived from a leaky naive psychology; he suggests that the transactional