Mind as Machine: A History of Cognitive Science Two-Volume Set Hardcover – 5 Sep 2000
Mind as Machine is instructive, thought-provoking, and interdisciplinary in its approach...One cannot deny the fact that Professor Margaret Boden is an excellent writer (Yves Laberge, Journal of Chemical Neuroanatomy)
Anyone will profit from the clarity in context that Boden provides. Her impressive learning is evident at every turn, everything is deeply understood and thought about, and almost everything important seems to have been read and incorporated, down to very recent and still forthcoming literature - this is not a history of things past but an overall account of the discipline as it stands now. (Vincent C. Muller, Minds and Machines)
the writing is clear and engaging throughout, so much so that it often sounds less like scientific prose than literature. (Roy Behrens, Leonardo OnLine)
a triumphant literary event among histories of cognitive science (Igor Aleksander, Journal of Consciousness Studies)
a monumental new history of cognitive science ... scholarly, readable, and even entertaining ... an invaluable resource ... At present it has no rival, and it is hard to imagine any other work that could so completely document the intellectual ferment of the past fifty years (Michael C. Corballis, Times Literary Supplement)
These volumes are thought provoking and open a doorway towards improved understanding of the patterns of science in the second half of the twentieth century. (Stephen T. Casper, Medical History)
Most Helpful Customer Reviews on Amazon.com (beta)
With few exceptions, the second volume of "Mind As Machine" matches the high quality of the first, and can be read and understood by anyone with a general background in artificial intelligence, neuroscience, or psychology. At times the author seems intimidated by some of the mathematical developments that have taken place in cognitive science, such as the use of the theory of dynamical systems, but in general she confronts issues with confidence and keen insight.
The second volume begins with a very provocative question, one that has plagued the field of artificial intelligence since its beginnings in the 1950s. The author asks, "When is a program not a program?", and the typical answer to this question is responsible for much of the lack of confidence in progress in artificial intelligence. The general prejudice in the cognitive science community is that a thought process or reasoning pattern cannot be viewed as a "program", because the latter is taken to be merely a collection of "instructions" that is run on a computer and gives the same answer whenever it acts on the same information. The problem was then to distinguish between a "program" and an "intelligent" program, even though the designation "intelligent" was (and to a great extent still is) extremely vague. The lack of a precise definition of intelligence naturally led to controversy over the identity of the first "intelligent" program, and the author discusses some of this controversy. The `Logic Theorist' program and the `Selfridge-Dinneen' program are discussed as early candidates for the first "AI programs", but the author points to some of the early skepticism about their status as intelligent. One of these objections persists to this day, namely that an "intelligent" program must be highly complex, or "sexy" to use the author's terminology. But complexity, viewed from the standpoint of the history of artificial intelligence, is in the eye of the beholder, and programs once deemed intelligent, like those behind checkers and chess, are now viewed as mere "programs." Such trivialization of intelligent "programs" could be avoided if they were designated, in full recognition of their status in cognitive science, as "reasoning patterns" or "thought processes" rather than as programs or "algorithms", the latter names being more appropriate in a computer science context.
Many more of these peculiarities in the history of AI/cognitive science are discussed in Volume 2, such as the deliberate "underselling" of an intelligent technology so as not to instill fear in prospective customers who feel threatened by intelligent machines. As someone who has worked in the trenches of "technological AI" (as the author calls it in the book), this reviewer can report many such stories of vendors who did not want to "frighten" potential customers away by designating their product as "intelligent". Instead they usually played it down, just as the author reports the salesmen of the IBM 704 did back in the 1950s. Even highly educated customers, very familiar with modern technology, can view intelligent machines as disquieting, or even threatening, and are frequently hesitant to deploy them in a production environment. The Hollywood Skynet meme has diffused quickly and effectively throughout the world, stymieing the practical application of artificial intelligence. And the bar keeps rising for judging whether a machine is indeed intelligent. Chess used to be the holy grail, but now chess "programs" highly competitive with human players can be purchased cheaply in stores found in most neighborhoods throughout the country. Even the developers and researchers themselves, including the author, have dismissed progress as either "trivial" or part of "technological AI", and predict that "real" intelligent machines are a few hundred years away.
The author also discusses in some detail the petty squabbles between research groups in cognitive science and artificial intelligence. Some readers may feel that this kind of discussion should be left out of the book. To omit it, though, would be a mistake: since the book is a historical account, readers should understand the degree to which even individuals deemed highly intelligent can engage in conduct that borders on the trivial or the blatantly irrational. Such dialog and behavior are sometimes intermixed with brilliant developments, proving that good work can be done even in a contentious, degrading atmosphere.
A particularly valuable part of Volume 2 is the author's discussion of research into systems for representing semantic information. The goal of such systems, now often called `semantic networks', is to represent semantic content for many different domains or subject areas; a semantic network would allow a machine to reason across these domains without external intervention or tuning. The author discusses the work of M.R. Quillian in the early 1960s on semantic representations, which used what was then called `localist connectionism'. Work on the `semantic Web' is a good example of current research into semantic representations. The ability of a machine to deal with many knowledge domains is also of great interest to researchers currently attempting to design machines with artificial general intelligence (AGI). A modest (but impressive) hint of how this could be done is given by the HACKER "program", which the author discusses in this volume. HACKER could engage in "self-criticism", and this ability is taken to be a sign of what the author has labeled `Piagetian error-led constructive learning'. Enthusiasts for AGI have pointed to the need for this type of learning. Similar enthusiasm for general intelligence existed in the 1960s, as the author reports in this volume, but under pressure from "experts" the approach was abandoned in the 1970s.
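The flavor of a localist semantic representation of the Quillian sort can be sketched in a few lines: each concept is a distinct node, and a property query climbs "is-a" links until the property is found. The class, names, and example concepts below are this reviewer's own illustration, not code from the book or from Quillian's work.

```python
# A minimal localist semantic network: one node per concept,
# property lookup inherits along "is-a" links.

class Node:
    def __init__(self, name, isa=None, **properties):
        self.name = name
        self.isa = isa              # parent concept, or None at the top
        self.properties = properties

    def lookup(self, key):
        """Return a property value, inheriting from ancestors if needed."""
        node = self
        while node is not None:
            if key in node.properties:
                return node.properties[key]
            node = node.isa         # climb one is-a link
        return None                 # unknown property

animal = Node("animal", can_move=True)
bird   = Node("bird", isa=animal, can_fly=True)
canary = Node("canary", isa=bird, colour="yellow")

print(canary.lookup("colour"))    # stored locally -> yellow
print(canary.lookup("can_fly"))   # inherited from bird -> True
print(canary.lookup("can_move"))  # inherited from animal -> True
```

Even this toy version shows why such networks interested AGI-minded researchers: the same lookup mechanism works unchanged whatever the subject domain encoded in the nodes.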
There are some annoying parts of this volume, but they are few. One is the author's continued reference to the "general public", apparently to distinguish them from members of academia or research labs, the latter two groups seemingly being the only ones qualified to assess progress in cognitive science. Another is the inclusion of philosophical debate on artificial intelligence, with an entire chapter devoted to it. Such an inclusion is certainly appropriate in a historical overview, but such debate has only slowed the progress of artificial intelligence. It is the opinion of this reviewer that all who are involved in this kind of research should declare a moratorium on philosophical debate and get on with the design and construction of intelligent machines; the philosophers should be left alone to construct the gigantic rhetorical conceptual spaces they usually get lost in. And lastly, the author seems to restrain whatever enthusiasm she has for the subject, apparently believing that such enthusiasm comes only from those who want to advertise themselves or who lack the background to understand the subject. Certainly the press has exaggerated some of the claims of progress in artificial intelligence, but real progress has been made, and great enthusiasm should be expressed for it. Sadly, many academics seem too guarded and self-restrained to participate in such joyous emotions, judging it "unprofessional" to do so. There are exceptions in the book, though, such as the author's refreshingly unbridled enthusiasm for the SHRDLU "program" of T. Winograd.
As the author details in this volume, and as can be gathered from conversations with specialists, AI has been subjected to harsh criticism, some of it justified but most of it not. One such criticism was leveled by Drew McDermott and is outlined by the author in this volume. McDermott's criticism goes to the heart of many of the problems in accepting machines as exhibiting intelligence: the issue is the words used to describe processes occurring in machines. McDermott charges AI researchers with "self-deception" when they use "wishful mnemonics" to describe what he says are just "procedures" or "data structures." The author gives a few examples, one being a procedure called GOAL instead of something like G0034, the former name leading one to believe that a `real' goal has been achieved. McDermott does not want to think outside the computer science paradigm, and as long as AI researchers heed his admonitions and stay within that paradigm, they will never accept machines as intelligent, no matter what the capabilities of those machines. Every process occurring in such machines will always be viewed as a procedure, and every knowledge or semantic representation as a data structure. Machines will be thought of as entities that run programs, with these programs mere manipulations of data structures, even if a machine can beat every human at chess or backgammon, produce and prove original theorems in pure mathematics, or self-navigate on Mars and evaluate its surroundings with scientific curiosity.
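McDermott's GOAL-versus-G0034 point can be made concrete in miniature: the two functions below have identical bodies, so nothing about the "wishful" name changes what is computed. The function bodies and the toy fact list are invented for illustration and are not taken from McDermott or from the book.

```python
# "Wishful mnemonics" in miniature: identical code, different names.
# Renaming G0034 to UNDERSTAND does not change the computation at all.

def G0034(state, facts):
    # return the first fact whose condition matches the current state
    return next((f for f in facts if f[0] == state), None)

def UNDERSTAND(state, facts):
    # exactly the same body, only the name is "wishful"
    return next((f for f in facts if f[0] == state), None)

facts = [("hungry", "eat"), ("tired", "sleep")]
assert G0034("tired", facts) == UNDERSTAND("tired", facts) == ("tired", "sleep")
```

Whether one sides with McDermott or with the reviewer's counter-argument below, the example shows what is actually at stake: the label carries the intentional reading, while the mechanism is the same either way.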
But McDermott's views are narrow and myopic, and can easily be stood on their head. One could, for example, speak of "accurate mnemonics" to describe what is going on in machines when they engage in learning or discovery. There is no reason why AI researchers should not call a "program" intelligent if it indeed is. The issue is what kinds of processes in a machine we should label as intelligent, and when we decide to do so it should be based on an understanding of learning and intelligence, not on a rigid and unproductive adherence to the computer science paradigm, as McDermott insists upon. It would be proper, for example, to call a procedure or algorithm a `reasoning pattern', hardware a `cognitive structure', or even memory (volatile or not) a `knowledge base.' When this is done, one can indeed distinguish intelligent machines from non-intelligent ones, and it becomes natural to refer to "machines that can think." No doubt some machines throughout the history of AI have been deemed "intelligent" when they were not. But more have been viewed as non-intelligent when they were (and are), if viewed from a reasonable framework. Many more will make their appearance in years to come, illustrating with precision the strict equality between mind and machine.
Review of Volume 1:
A detailed book on the history of a subject in science or technology is always helpful, since it provides insight that is usually not obtainable from formal papers and monographs. The latter are written for experts, so no attempt is made to explain the subject matter in a way that is transparent to a reader outside the field. In addition, these works are usually guarded, meaning that the authors are careful not to explain themselves too clearly, lest they make a potential critic's job easier. The author of this massive two-volume set has given the reader a history of the subject from the standpoint of an insider and recognized contributor to the field. For experts in cognitive science, much of it will be familiar, and no doubt controversial, as they may feel some of their ideas have been misrepresented. For non-experts (such as this reviewer), there is no way to tell whether the volumes really respect the details of what happened, and such readers must therefore view the contents as more tentative than usual. The author, though, is careful to note very early on that the volumes represent her point of view on cognitive science "as a whole", and this serves to put the skeptical reader more at ease.
The "man as machine" paradigm is traced back to the ancient Greeks in Volume 1, but the author cautions that their attitudes were much different from (and, it's fair to say, much less ambitious than) those held today. Since World War II, the belief has been not only that it is proper to view humans as machines, but that it is possible, however challenging, to construct non-human entities or "machines" that have minds. The ancient Greeks' repugnance toward practical work would have discouraged any attempt to build such machines. The author outlines various other attempts to build "automata" after the time of the Greeks, one of them a rudimentary android constructed in the twelfth century, another a "talking head" that was, interestingly, destroyed by none other than Thomas Aquinas, who apparently believed it to be "devilish" in origin. René Descartes is of course the most vigorous of the agents of the "man as machine" meme, and the author naturally devotes much space in this volume to his contributions in this regard.
The Cartesian view has had a large following, sometimes with horrifying consequences. The author includes an example: the belief that animals cannot feel pain, which led to animals being dissected while still alive. And Descartes's view of consciousness, briefly discussed in this volume, is finally being scrutinized scientifically using brain-scanning techniques. Only very recently has the scientific community viewed consciousness as a subject worthy of investigation, and the study of consciousness, particularly from the standpoint of cognitive neuroscience, will certainly shed light on Descartes's dualism, with its arbitrary division of mind from matter. The artificial intelligence community has also been stymied by a difficult problem that is brought out in the book by the inclusion of a passage from Descartes. This problem revolves around the construction of a machine that can think and reason in any domain, rather than being merely a collection of modules each designed for a specific domain. Descartes thought this impossible, as the included passage clearly indicates: he held reason to be a "universal tool" that can work in "all kinds of circumstances", whereas man-made implements or machines "need a special arrangement for each special action." Artificial general intelligence, or AGI as it is now called, has as its goal the construction of just such a "universal tool."
The author's view of Charles Babbage's role in the mind-as-machine paradigm goes against the widely held view: she believes his role was irrelevant. She devotes many pages to supporting this view, and her arguments are largely convincing. The most interesting part of her discussion, though, is her comparison of the hype associated with Babbage to the "techno-hype" she imputes to the artificial intelligence community in the 1970s and late 1980s. She believes this over-selling of artificial intelligence was counterproductive for the research, but there is another possible interpretation of this behavior, namely that it was a way for researchers to build the confidence needed to tackle the challenging problems of AI. The sheer magnitude of these problems requires individuals of extreme confidence, without, of course, engaging in confabulation. Related to this discussion, and giving much more insight into attitudes about AI, both from the "public" and from AI researchers themselves, is the passage due to Lady Lovelace on Babbage's 'Analytical Engine'. In this passage, Lovelace warns against exaggerating the powers of the Analytical Engine, remarking that when a subject is novel there is a temptation first to overrate what seems interesting, and later, once its actual possibilities become known, to undervalue its true state. The history of AI has been plagued by this rollercoaster of confidence and undervaluation, as pointed out by other AI researchers/historians such as Donald Michie and Pamela McCorduck. This pattern of initial enthusiasm and hype surrounding an advance in AI, followed eventually by its understanding and then its rejection as anything significant, could be called the 'Lovelace-Michie-McCorduck effect' in recognition of the three individuals who wrote of it.
It is also very interesting to compare what is known and can be accomplished now with what one reads in this volume as done or accomplished in the last four hundred years. One example is the Vaucanson automatic musicians of the eighteenth century as compared to the machine musicians of today, which can not only compose original music but also serve as neuroscientific models of musical appreciation. Another concerns the early skepticism regarding the possibility of constructing robots that simulate various human bodily movements, which should be compared to the artificial-muscle technology of today. Still another is the work of Ramón y Cajal on "neurones" as compared with what is done in computational neuroscience today.
In discussions and debates on the mind-as-machine paradigm and artificial intelligence, one usually encounters statements about the power of "intuition" over computational or logical reasoning patterns, or at least strongly expressed beliefs that the human mind can process information in ways that cannot be viewed as computational. It would be surprising if a book on the history of cognitive science and the mind-as-machine paradigm did not discuss these debates. The author includes such a discussion, with emphasis on the work of Alan Turing, and includes an interesting passage indicating that Turing himself believed intuition cannot be entirely avoided. Interestingly, though, Turing pointed to the need for "non-constructive" systems of logic that allow one to differentiate between steps in a proof that result from intuition and those that are purely formal. And Turing's 'O-machines' are entities with mathematical "powers" not based on Turing computation. The belief that intuition is not only necessary but more powerful than computational processes remains deeply entrenched both inside and outside the scientific community. It would seem that the majority of humans cannot believe that mental processes are solely mechanical/computational, yet no explicit tests illustrating the "power" of intuition over computation have been conducted to date. Intuition thus remains a concept with no scientific foundation as yet.
Despite the huge size, coverage is limited to North America and the UK; cognitive science elsewhere in Europe and in Asia is neglected.
In addition, the huge size makes for a user-unfriendly document. There are many cross-references to other sections of the text that may well be in the other volume, and all of the references are collected at the end of the second volume. The cross-references are typically imprecise, pointing only to a large section, and it is not always obvious what is meant.
I regret having spent the money to buy this work. The best I can say for it is that it should be in university libraries as a research resource for those who might write better, more useful histories of cognitive science. But those authors had better do their own fact checking.
Look for similar items by category
- Books > Computing & Internet > Computer Science > Artificial Intelligence
- Books > Computing & Internet > Programming > Algorithms
- Books > Health, Family & Lifestyle > Psychology & Psychiatry > Cognition & Cognitive Psychology
- Books > Health, Family & Lifestyle > Psychology & Psychiatry > Specific Topics
- Books > History > Other Historical Subjects > History of Science
- Books > Reference > Language
- Books > Science & Nature
- Books > Society, Politics & Philosophy > Philosophy
- Books > Society, Politics & Philosophy > Social Sciences > Linguistics > Reference