The mind-body problem is of relatively recent vintage in Western philosophy, but it has become important of late because of the role it plays in the "strong A.I." debate. Although the field of artificial intelligence is nowhere near creating thinking machines, let alone conscious ones, the debate over whether the latter are even possible has been raging for several decades. Philosophy sometimes raises and debates issues that have no immediate practical significance, and the possibility of "strong A.I." is currently one of these. But developments in A.I. may yet make these discussions less vacuous than they currently are, and so it may be helpful to analyze some of the arguments, in the hope that they can shed light on the nature of intelligence and assist those interested in building an artificial mind.
The author considers his book a blending of three ideas, namely functionalism, intensionality, and mental representation. He introduces these via a consideration of the arguments against Cartesian dualism that were being formulated in the early 1960s, labeling "logical behaviorism" and "central state identity theory" as two of the strategies for refuting it. In logical behaviorism, mental processes are semantically equivalent to behavioral dispositions, and their definitions are reduced to stimulus and response parameters, with these parameters left essentially undefined. The author gives counterexamples to show that logical behaviorism falls short of being a theory of mental causation that allows nontrivial psychological theories to be constructed. Throughout the book, the author requires that a science of mind define mental properties in a way that makes them natural from the standpoint of psychological theory construction. He makes the point, interestingly, that information processing systems can provide a natural domain for this kind of theory construction, and thus admits the possibility that such systems could share our psychology without sharing our physical make-up. He sums this up by saying that "philosophical theories about the nature of mental properties carry empirical commitments about the appropriate domains for psychological generalizations". Physicalism, he states, does not meet these requirements.
The author thus asserts the need for a "relational" treatment of mental properties, and so turns his attention to "functionalism". Along with stimuli and responses, this theory also allows reference to other mental states. But functionalism is not a reductionist philosophy like behaviorism, for it admits mentalistic concepts, and these are relationally defined and causal. It thus allows psychological theory construction of the kind a psychologist requires. However, the author is careful to note that functionalism must deal with two problems: developing a vocabulary that specifies the allowed kinds of descriptions for causes and effects, and guaranteeing that functional individuation takes place only where there is a mechanism that can carry out the function and some idea of what such a mechanism is. One wants, in functionalism, to avoid "pseudo-explanations" like those arising in physicalism.
This is where the author brings in (Turing) machines, via "machine functionalism", which he claims solves the above problems. Functional definitions of psychological kinds are identified with the definitions used to specify the program states of a computer. The author then elaborates in detail on just how machine functionalism copes with the problems discussed. The Turing machine provides a sufficient condition for the mechanical realizability of a functional theory: mental processes correspond to Turing machine processes, and for each Turing machine process there exists a mechanical realization.
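The idea that psychological kinds are individuated by machine states can be made concrete with a toy example in the spirit of Putnam's well-known vending-machine illustration. The following sketch is mine, not the author's: the particular machine table, the state labels "S0" and "S1", and the coin inputs are all invented for illustration. The point it captures is that the states are defined entirely by their causal roles in the table, so any physical system realizing the same table counts as the same functional system, regardless of its physical make-up.

```python
# Illustrative machine functionalism (a hypothetical example, not from the book):
# each internal state is individuated solely by its causal role, i.e., by what
# the table says it does with each input and which state it transitions to.
MACHINE_TABLE = {
    # (state, input): (output, next_state)
    ("S0", "nickel"): ("wait", "S1"),
    ("S0", "dime"):   ("dispense", "S0"),
    ("S1", "nickel"): ("dispense", "S0"),
    ("S1", "dime"):   ("dispense_and_refund", "S0"),
}

def run(inputs, state="S0"):
    """Run the machine over a sequence of inputs, returning its outputs."""
    outputs = []
    for symbol in inputs:
        output, state = MACHINE_TABLE[(state, symbol)]
        outputs.append(output)
    return outputs

# Two nickels buy a ten-cent item:
print(run(["nickel", "nickel"]))  # -> ['wait', 'dispense']
```

Nothing in the table says what "S1" is made of; a silicon circuit, a brass mechanism, or a brain realizing the same transitions would all be in "S1" in the functionalist's sense. That multiple realizability is exactly what the author exploits when he allows information processing systems to share our psychology without sharing our physics.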
He is careful, though, not to let this theory do more than it should (or can), for example by allowing circular arguments that postulate processes for which no mechanical realization can exist. He then addresses the degree to which functionalism could be said to be a successful theory. Could one really accept that it is relational properties that distinguish a pain from an itch? His argument involves the difference between "qualia inversion" and "propositional attitude inversion": the former is possible, the latter is not. He argues that it is not conceptually possible for one person's belief to differ from another's when their inferential roles are identical. He does, however, give references for possible ways of avoiding this conclusion.
The author is firmly committed to having both a philosophical and a psychological theory of propositional attitudes. His attitude here is an interesting one, for I think it is a sign of things to come at the intersection of science and philosophy. He states that the goal of cognitive psychology is to systematize and explain how the propositional attitudes of an organism are affected by its experience, its genetic endowment, and the other propositional attitudes it has. The success of such a psychological theory puts constraints on the construction of the philosophical theory. This, again, is a most interesting move, for it is an example of a new way of doing philosophy, namely that of constructing philosophical theories that must respect scientific results. For the author, the distinction between a philosophical and a psychological theory is heuristic: it is a quick way of indicating which kinds of constraints are motivating a given strategy of theory construction. This book is an example of that strategy, and as a whole it is a fascinating one, particularly in the context of current research in artificial intelligence. When philosophers see the rise of thinking machines in the near future, their philosophical theories will have to adapt to the abilities of these machines. And the machines themselves will have their own (unique) theories about their abilities.