
What is Thought? (Bradford Books) Hardcover – 20 Jan 2004


Product Description


"A book that is admirable as much for its candor as its ambition... If What is Thought? can inspire a new generation of computer scientists to inquire anew about the nature of thought, it will be a valuable contribution indeed." -- Gary Marcus, Science

"...[Should] engage general readers who wish to enjoy a clear, understandable description of many advanced principles of computer science." -- Igor Aleksander, Nature

About the Author

Eric B. Baum has held positions at the University of California at Berkeley, Caltech, MIT, Princeton, and the NEC Research Institute. He is currently developing algorithms based on Machine Learning and Bayesian Reasoning to found a hedge fund.


Customer Reviews

There are no customer reviews yet on Amazon.co.uk.

Most Helpful Customer Reviews on Amazon.com (beta)

Amazon.com: 22 reviews
15 of 15 people found the following review helpful
On the nature of thought 3 April 2005
By Alwyn Scott - Published on Amazon.com
Format: Hardcover
In the introduction to this handsomely bound book, the author suggests that it is an appropriate time for an explanation of how the dynamics of a human brain can be accounted for by computer science. His title echoes Erwin Schrödinger's enormously influential "What Is Life?", which helped launch the field of molecular biology by inducing both Francis Crick and James Watson to seek the molecular basis of heredity, but the analogy is strained for several reasons.

Schrödinger's book is less than 100 pages in a current edition, while Baum's is about five times as long. In the context of Schrödinger's lifelong interest in biological problems and based on a series of three public lectures that he presented to the Irish intelligentsia in 1943 (as one of his statutory duties as the founding director of the Dublin Institute of Advanced Studies), "What is Life?" is a classic example of his exceptional expository skill---in a second language, no less---whereas Baum's book would have profited from another round of copy-editing. But the most striking difference between these two titles lies in the cogency of their respective contents.

Although Max Delbrück and his colleagues had used measurements of mutation rates of fruit flies under X-radiation to show that their genes were necessarily of molecular dimensions in the mid-1930s, the implications of these data were unnoticed by the literate world of the mid-1940s. Thus Schrödinger's public lectures were newsworthy, being favorably noted by Time magazine in the spring of 1943, and his subsequent book---after some difficulties with an Irish publisher and the Roman Catholic Church over the religious implications of his ideas---went on to sell over 100,000 copies for Cambridge University Press, with translations into seven languages. Is there a similar communications gap in our current understanding of the nature of thought?

Noting his background in computer science, one might classify Eric Baum among those who believe that "our souls are software", but this is not quite fair. Although he states that "the obvious inability of present-day computer science to account for [the brain's behavior] is no reason at all for doubting that they can be accounted for by computer science," the intellectual perspectives of "What is Thought?" are broader than this assertion seems to suggest. The book begins with several interesting chapters on the nature of computation (I particularly liked the presentation of the traveling-salesman problem), which include discussions of the importance of making decisions at the level of semantics, the Turing test, the properties of neural nets, and hill climbing in a fitness landscape, among several other relevant topics. These discussions lead into the author's central thesis that the mind, like all efficient computer programs, is necessarily modular. In other words, each aspect of the brain's dynamics comprises several subroutines, which presumably can be further broken down into hierarchical structures of nested activities, and he discusses several permutations of this important concept.

Curiously, Baum's otherwise comprehensive list of references does not include Donald Hebb's seminal and classic work, in which the notion of "cell assemblies" (dynamically self-sufficient modules of neurons) was first suggested over half a century ago. As a psychologist, Hebb aimed to "bridge the long gap between the facts of psychology and those of neurology," and, coming at about the same time as the development of the digital computer, his formulation has provided the basis for many numerical studies, starting in the 1950s and continuing to the present day, that are in accord with a growing body of electrophysiological data.
Setting this quibble aside, Baum offers compelling psychological evidence for the modular structure of mind and provides his readers with an interesting and informative account of how the structure of our thinking may have developed over the course of biological evolution, with particular attention paid to computational constraints on the development of learning mechanisms. Importantly, his perspectives are broader than those of many of his colleagues, as he asserts that the "whole program" of a brain's dynamics includes the "complex society" in which it is embedded. Indeed, the author's evident humility in the face of the awesome intricacy of mental activity is, to me, one of the more appealing aspects of "What is Thought?"

The often suggested possibilities for quantum computation are discussed in some detail, along with an analysis of the widely noted example of "Schrödinger's cat", which was originally proposed to emphasize the difficulties of applying ideas developed for atomic dynamics to complex macroscopic systems. Considering that a quantum computer---if it is at all possible to construct one---must be carefully isolated from structural irregularities and operated near absolute zero of temperature, Baum joins the majority of physical scientists in concluding that it is "highly unlikely that quantum computation is relevant to the mind."

Eric Baum has a dog, and---like most of us dog owners---he is convinced that his pet is conscious, but he goes on to assert that "we do not need to posit new qualitative modes of thinking to explain human advance over animals. To my mind, the difference between human intelligence and animal intelligence is straightforwardly explainable by cumulative progress once there is the ability to communicate programs." Here, again, Baum could profit from reading Hebb's book, which contains but a single mathematical expression, namely A/S. This parameter represents the ratio of the associative area (A) of a mammalian neocortex to its sensory area (S), and it becomes greater as one progresses from rats through dogs to humans. A related physiological parameter---with profound significance for the ease and rate at which modules (or cell assemblies) can switch on and off---is the percentage of inhibitory intercortical neurons, varying as follows: rabbit (31%), cat (35%), monkey (45%), human (75%) [6]. Of course, these relative differences may be examples of the "cumulative progress" to which Baum refers.

In a penultimate section, Baum discusses the question of free will, noting that "our decisions look, from any reasonable perspective short of knowing the exact state of our brains and simulating them in detail, like they are introducing genuinely new information." In reaching this conclusion, he may be confused by the continuing tendency of many scientists to overlook a phenomenon called "sensitive dependence on initial conditions", first studied by the eminent French mathematician Henri Poincaré and widely observed nowadays by those who study nonlinear dynamic phenomena (chaos theory). As Poincaré famously put it over a century ago:

"If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment, but even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, and we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon."

For an author who bases many of his conclusions on close mathematical reasoning and offers a theory that purports to be "capable of explaining everything," the implications of these "fortuitous phenomena" should be carefully digested.
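The phenomenon Poincaré describes is easy to reproduce numerically. The sketch below (my illustration, not from the book or the review) iterates the logistic map in its chaotic regime and shows a perturbation in the tenth decimal place of the starting point growing until the two trajectories are unrelated:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturb the tenth decimal place

# The tiny initial error roughly doubles each step, so after a few
# dozen iterations the two orbits bear no resemblance to each other.
for t in (0, 10, 25, 50):
    print(t, abs(a[t] - b[t]))
```

Both orbits are fully deterministic, yet predicting one from an approximate measurement of the other fails after a few dozen steps, which is exactly the gap between "governed by laws" and "predictable" that the quotation describes.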

Alwyn Scott

23 of 25 people found the following review helpful
Interesting but replete with hasty argumentation 11 Oct 2005
By John Harpur - Published on Amazon.com
Format: Hardcover Verified Purchase
The main thesis of this book, asserted repetitively, is that the mind is a computer program. Once this is borne in mind, pardon the pun, most of the book is reduced to an argument in its favour, rather than an investigation into its credibility. The book often reaches for blunt assertions to support its positions and only afterwards begins a slight retracing of steps. For example, we are told that inductive bias and learning algorithms are coded into the genome. It is apparently obvious: a bit of speculation on DNA, evolution and algorithms, and out comes the result!

In his observance of Occam's Razor, the author confuses the appeal of the simplest explanatory hypothesis with the belief that he has found it. The discussion of neural networks leaves aside recurrent networks, which are probably more biologically plausible than their competitors.

Likewise the idea that the brain essentially 'runs' compressed programs due to evolutionary endowments is unconvincing and philosophically leaky.

I don't want to be over critical of the book as it has brought together many interesting strands of work, but it just has not woven them into anything interesting. There is little new here, whether from modularity or evolutionary programming constraints on neural activity. A lot of it is speculative and several of the key themes are discordant due to under analysis of their assumptions.

Several of the elaborations verge on the frivolous. For example, there is a particularly woolly argument linking the learning of Scheme to "what goes on in constructing our understanding of the world" (p. 222). Likewise, in discussing awareness and consciousness, the author relies on the use of 'main' in C to explain metaphorically how information might come together in the brain (pp. 413-415). All kinds of reification fallacies come to mind, leaving aside the thinness of the argument.

The bottom line is that the book pursues a strong cognitivist program (the brain is a computer) without convincingly examining various sides of the argument. I was certainly none the wiser at the end of it.
33 of 38 people found the following review helpful
Is Evolution The Secret To Intelligence? 25 April 2004
By strings - Published on Amazon.com
Format: Hardcover
Why can humans rapidly carry out tasks, such as learning to talk or recognizing an object, that seem intractable for computers? According to Eric Baum, the human brain is much like a computer, but it runs programs that are different from the ones usually written by human computer programmers. The programs run by the brain are insightful or "compressed"; they have built in a good deal of knowledge or "understanding" about the nature of the world. Human programmers have difficulty generating such efficient or compressed programs (except for limited special purposes), because to do so requires vast computing resources, far beyond what one can accomplish with pencil and paper or even with presently available computer assistance.

The key to understanding intelligence, according to Baum, is the theory of evolution; in the process that brought humans into being, evolution cycled through many billions of generations of organisms, in the course of which, in effect, vast computational resources were brought to bear on the problem of generating useful algorithms. The real secret of thought is thus stored in our DNA, which preprograms us with algorithms that are more efficient and powerful than the ones usually available to computer scientists.

With this starting point, Baum proposes answers to many old riddles. Our sense of "self" reflects our origin in an evolutionary struggle for survival toward which all components of our biology are directed. "Free will" is a useful approximation because of the great complexity of our brains (and our limited knowledge about them) and the concomitant difficulty of predicting a person's behavior. Baum illustrates his arguments with numerous examples drawn from biology, psychology, and computer science; the material is generally quite interesting, though at times perhaps too detailed for a casual reader. His arguments are surprisingly persuasive, and, while I am certainly no expert, I suspect that Baum is closer to the mark than most of the old and new classic writers on these problems.
16 of 18 people found the following review helpful
A deep and brilliant book 2 Mar 2004
By "dlwaltz" - Published on Amazon.com
Format: Hardcover
Baum's book aims -- and in my estimation succeeds brilliantly -- at illuminating what we know and don't know about computation and the modeling of mind: memory, learning, perception, reasoning, etc. Baum summarizes the main perspectives of various schools of thought on the topic, notably including both proponents of the artificial intelligence enterprise as well as critics, plus neural, sociobiological, psychological and philosophical points of view. He summarizes the main results of computer science and shows their relevance to mind. Best of all, the book is very well-written, and despite the fact that it includes considerable technical depth, it does not presuppose prior knowledge of the subject and should therefore be accessible to a broad audience.
15 of 17 people found the following review helpful
Review of "What is Thought" 21 April 2004
By Franklin R. Amthor - Published on Amazon.com
Format: Hardcover
Eric Baum's recent book "What is Thought?" is a must-read for anyone interested in artificial intelligence or cognitive science and neuroscience. In the highly saturated area of "consciousness books" this one stands out as one likely to be remembered and referenced much longer than the others. One reason for this is the absolute clarity with which he argues the hard AI position: that the mind is the result of a computer program that is not merely run by the brain but built into the brain's very architecture, produced by several billion years of evolution, the original and ultimate genetic program. The major thesis of the book is that "meaning" should be considered identical to a compact description of the data, such as the sensory input from the external world. One example he gives is the compact description of a set of data as falling on a line. This is, of course, a completely operational definition of semantics, but I think a useful one. This leads to the conclusion that meaning is intrinsically determined by the interaction of the world with the architecture of our 100-billion-neuron brain, generated by the action of a mere 30,000 genes. He does not ignore learning and culture, of course, but the point is that, at least at this point in our evolution, most of the compaction is already in the structure. Baum's credentials for many of these speculations come from his solutions to several classical AI problems, such as "Blocks World", using genetic programming techniques. The most successful of these are embodied in an artificial economy model called "Hayek" that solves the credit assignment problem well enough to have considerably advanced solutions to such complex problems. The description of the Hayek system is worth reading in its own right for those interested in various AI approaches to these classical problems, although I found these sections somewhat sparse in details for trying to implement the code.
What Baum is very clear about is the formidable challenge of producing, in any current computer system, an equivalent compact description of data similar to that for which humans have evolved. Thus, from first principles, we cannot expect any current AI system to display anything like the ability to generate common sense meaning from the world that has been produced by the great genetic program that is the evolution of the human brain on earth, because the number of equivalent learning cycles (on the order of 4 billion years times the number of example animals) is so many orders of magnitude greater for biological brains than artificial ones. But there is hope in the future from Moore's law of the continued increase in computer power. If you accept these arguments about the vast computational power embodied in our brain's structure, then our inability to comprehend issues such as "qualia" and the feeling of having free will are to be attributed to simple ignorance, a quantitative difference, rather than to more mystical qualitative boundaries. This is consonant with arguments previously eloquently made by the philosopher Dennett, among others. Whether you are for or against such a hard AI position, this book makes its case more honestly, eloquently, and in more detail than any other I have read. Besides the lack of detail for implementation in the discussion of the Hayek system for solving classic AI problems such as Blocks World, one other complaint I have is the lack of reference to some previous work. For example, although Baum does not borrow in any direct way from the CopyCat work of Hofstadter and Mitchell, in spirit, at least, the set of autonomous agents in Hayek sound a lot like codelets and other elements in the CopyCat system, and I don't see why Baum could not have referenced that. I also believe that the reduction of data to a compact description as being equivalent to meaning is slightly incomplete. 
I think such a compact description is equivalent to an instinct, or an intuition. The embodiment of the compact description that can be manipulated within a system of such descriptions is what actually generates meaning, and the equivalent of thought.
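The "data falling on a line" example of meaning as compression can be made concrete with a toy description-length comparison. The sketch below is my own illustration under assumed conventions (cost counted naively as the number of stored values; the x-grid treated as shared context), not an implementation of anything in the book:

```python
# Toy illustration of "meaning as a compact description": 1000 data
# points that fall on a line can be regenerated from two parameters
# (slope, intercept), instead of storing every y-value separately.
def describe_raw(ys):
    """Cost of the naive description: one stored number per data point."""
    return len(ys)

def describe_as_line(xs, ys, slope, intercept, tol=1e-9):
    """Cost of the compact description, valid only if the line fits.

    Returns 2 (the two parameters) when every point lies on the line
    within `tol`, or None when the hypothesis fails to explain the data.
    """
    if all(abs(y - (slope * x + intercept)) <= tol for x, y in zip(xs, ys)):
        return 2
    return None

xs = list(range(1000))
ys = [3 * x + 7 for x in xs]

print(describe_raw(ys))                # 1000 stored numbers
print(describe_as_line(xs, ys, 3, 7))  # 2 stored numbers
```

In this operational sense the hypothesis "the data fall on a line" is the meaning of the data: it compresses a thousand observations to two numbers, and a wrong hypothesis buys no compression at all.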