
What is Thought? (Bradford Books) Paperback – 3 Feb 2006

4 out of 5 stars (24 reviews from Amazon.com): 5 star: 14 · 4 star: 3 · 3 star: 2 · 2 star: 3 · 1 star: 2

Paperback: £31.95 (new from £29.14; used from £12.00)



Product Description

Review

..."[Should] engage general readers who wish to enjoy a clear, understandable description of many advanced principles of computer science." -- Igor Aleksander, Nature

..."ÝShould¨ engage general readers who wish to enjoy a clear, understandable description of many advanced principles of computer science."-- Igor Aleksander, Nature

--Gilbert Harman, Department of Philosophy, Princeton University

--David Waltz, Director, Center for Computational Learning Systems, Columbia University

--Nathan Myhrvold, Managing Director, Intellectual Ventures, and former Chief Technology Officer, Microsoft

--Philip W. Anderson, Joseph Henry Professor of Physics, Princeton University, 1977 Nobel Laureate in Physics

" ...[Should] engage general readers who wish to enjoy a clear, understandable description of many advanced principles of computer science." -- Igor Aleksander, Nature

" A book that is admirable as much for its candor as its ambition.... If "What is Thought?" can inspire a new generation of computer scientists to inquire anew about the nature of thought, it will be a valuable contribution indeed." -- Gary Marcus, Science

" What's great about this book is the detailed way in which Baum shows the explanatory power of a few ideas, such as compression of information, the mind and DNA as computer programs, and various concepts in computer science and learning theory such as simplicity, recursion, and position evaluation. "What Is Thought?" is a terrific book, and I hope it gets the wide readership it deserves." --Gilbert Harman, Department of Philosophy, Princeton University

" There is no problem more important, or more daunting, than discovering the structure and processes behind human thought. "What Is Thought?" is an important step towards finding the answer. A concise summary of the progress and pitfalls to date gives the reader the context necessary to appreciate Baum's important insights into the nature of cognition." --Nathan Myhrvold, Managing Director, Intellectual Ventures, and former Chief Technology Officer, Microsoft

" Eric Baum's book is a remarkable achievement. He presents a novel thesis -- that the mind is a program whose components are semantically meaningful modules -- and explores it with a rich array of evidence drawn from a variety of fields. Baum's argument depends on much of the intellectual core of computer science, and as a result the book can also serve as a short course in computer science for non-specialists. To top it off, "What Is Thought?" is beautifully written and will be at least as clear and accessible to the intelligent lay public as "Scientific American,"" --David Waltz, Director, Center for Computational Learning Systems, Columbia University

" This book is the deepest, and at the same time the most commonsensical, approach to the problem of mind and thought that I have read. The approach is from the point of view of computer science, yet Baum has no illusions about the progress which has been made within that field. He presents the many technical advances which have been made -- the book will be enormously useful for this aspect alone -- but refuses to play down their glaring inadequacies. He also presents a road map for getting further and makes the case that many of the apparently 'deep' philosophical problems such as free will may simply evaporate when one gets closer to real understanding." --Philip W. Anderson, Joseph Henry Professor of Physics, Princeton University, 1977 Nobel Laureate in Physics


About the Author

Eric B. Baum has held positions at the University of California at Berkeley, Caltech, MIT, Princeton, and the NEC Research Institute. He is currently developing algorithms based on Machine Learning and Bayesian Reasoning to found a hedge fund.

Customer Reviews

There are no customer reviews yet on Amazon.co.uk.

Most Helpful Customer Reviews on Amazon.com (beta)

Amazon.com: 4.0 out of 5 stars 24 reviews
5.0 out of 5 stars Wordy but Excellent 3 Jun. 2012
By Scott Roberts - Published on Amazon.com
Format: Paperback Verified Purchase
There are very many, very important, and non-trivial ideas in this wordy tome that are an excellent complement to the ideas in the excellent, popular, and shorter "On Intelligence" by Jeff Hawkins, founder of Palm, Numenta etc. They are completely different books and ideas, but they are the only ones addressing my (and many others') questions and interests very directly. Of the two, I might say "On Intelligence" is "better" but only because it is written so well and the ideas are easier and applicable to what laymen and even professionals are seeking in brain books. "On Intelligence" is to "brain philosophers" (be they programmers or neurosurgeons) what "A Brief History of Time" is to armchair physicists and cosmologists. The thoughts in this book are just as important in understanding intelligence but in a more GENERAL way than "merely" trying to understand the brain. It is truly about "thought" in human-made machines, cellular biology, and brains. It is much more challenging because the ideas are more complex and it provides MUCH more detail. So in one sense it is not a fair comparison, but my useful take-aways from each book are similar in value. "On Intelligence" was very brief and simplistic, but important.

"What Is Thought?" takes on conceptually harder and more general questions for which it has answers and suggestions, although lacking in brevity, clarity, and precision. I view it as describing boundary conditions and therefore giving several litmus tests when trying to determine if other ideas concerning "brain machines" are going to be fruitful. I say "brain machines" instead of "artificial intelligence" because my definition of "A.I." is literal and therefore it applies to all programs running on all machines, achieving tasks with astounding efficiency just as artificial muscles of various types have been doing for the past 100+ years.

It appears the philosophically-minded reviewers think "What Is Thought?" is trying to be a proof of various philosophical and scientific views, and therefore think it has partially failed.

He explains numerous programming tools well enough for the book's purpose. Some of the time it seemed like more detail than most will be interested in, causing me to lose sight of what over-arching lesson all this detail was meant to exemplify. For example, there is a long chapter mostly on the game of Go with seemingly similar ideas repeated many times in different ways in different sections. His scattered summaries used different words and sentences for what appeared to be the same idea, making it unclear if the words and sentences were meant to convey subtle but potentially important differences in meaning. These summaries were mixed in with details of the programming challenges and methods used, making it difficult for me to decide if he was not writing clearly and succinctly enough or if I just wasn't smart enough to catch most of the over-arching ideas he was presenting. All other chapters went much better.

These complaints are relatively minor compared to the value of the book. If a new PhD began a teaching career, he could spend that career rewriting these ideas (and a few others) into a 2-year textbook, which is probably what most of us wish this book was: better organized, not repetitive, more precise and clear, and chunked into more chapters with more specific take-aways that could meld together in a structure like other science textbooks. If the same ideas could have been expressed in 1/3 as many pages, I would say it was one of the best important science books ever.

He covers an important and often missing middle ground between the known details of how the brain works, philosophical ponderings, and A.I. research papers. We know these areas have been trying to merge and that understanding one will help the others. Where have the writers been the past 40 years who should have been able to give a rough road map of what the middle ground should look like? Only after about the year 2000 have we lay people been getting exciting pieces of it. I know of two books so far. Where are the other books to expound, add to, or contradict these ideas? (If you have an answer, please post it in a comment below so that I can make a purchase!)

Some takeaways for me are:

1) compression = intelligence (it's not stated this simply, but the strong connection is clear)
2) DNA = compressed algorithmic source code
3) brain = executable

These last two have an interesting and powerful truth, and are based on 1), which is a deep truth (see my post in the comments section), but I disagree with his thesis that we need to read the source code (DNA) instead of the executable (the brain) in order to understand what the brain is doing, mainly because he gives no plausible way of making sense of the source code (DNA), which has very complex dependencies between genes (very high compression not meant for readability) and with the environment and non-DNA-derived compounds. It would be like trying to read a compressed version of someone's source code when we don't know what compression algorithm was used.

4) Perl-style regexes are Turing complete (a nice surprise for me, as a lover of Perl regexes. Now I know why I love them)

5) An Earth-wide system of chemical reactions (calculations) leading to and advancing evolution is a fantastic amount of computing power that led to the restricted type of Turing machines we call brains (rather like the Earth computational machine in the Hitchhiker's Guide to the Galaxy, and the claim that Arthur's brain would hold the question to the ultimate answer of "42") and ...

6) As with "On Intelligence", this book explains that brains are in some sense a mirror of the reality that they are trying to model, except that the two books take opposite views: Hawkins thinks the brain models the real world and that some really smart people have gone down the rabbit hole that Baum is in. Baum believes the reality we see is no more than the models our brains have constructed. He explicitly says aliens do not need to have models that are compatible with any electron-like idea to possibly be just as knowledgeable about the world, which I believe means having equal power to predict.

7) Unlike other reviewers, I am not going to say that Baum said the brain is running modules of algorithms. The physical brain *IS* the algorithm, or rather, the executable. There is no hidden software like code on a hard drive that is creating a separation between body and mind like other reviewers imply. The "AND, OR and NOT" gates of the dendrites and axons are all "fixed" (for the most part) over the course of a single thought. (Synapses can change surprisingly fast, so I'm fudging.)

8) No one seems to mention what Baum said was his main thesis and reason for the book:
"All thought is about semantics. Semantics is equivalent to capturing and exploiting the complex structure of the world." He did not define "semantics", so I am left with rewriting his quote: "All thought is about capturing and exploiting the complex structure of the world." Notice that this implies reacting to ("exploiting") the world, which "On Intelligence" also says is key (Hawkins calls it generation mode). I view this "motor control" requirement as simply being able to decompress high-level ideas in our head into action, which therefore means the compression must be reversible. To me this does not seem to be a great idea, but rather obvious: if you can't decompress it, did you really compress it correctly in the first place? Granted, decompression to muscle output uses different pathways than compression from sensory input, and they may interact and be key to training each other, but I see nothing interesting here in relation to my more general desire to know about intelligent thinking on a machine that does not necessarily have motor or even any output.

9) "Memorization and understanding are at opposite extremes." What a powerful idea! But what exactly does he mean? How do I distinguish between rote memorization and understanding? I think he means a system that merely memorizes the patterns seen during training will have no understanding of how the data was generated and will not be able to recognize and predict slightly different patterns. Another analogy would be being able to write down physics equations but not knowing what they mean or how to use them. I can imagine someone being full of facts but not being able to tie them into a bigger picture. But by the same token, a great memory can be achieved if it fits into an existing theoretical framework, which means new data fits into an already-discovered compression scheme. So the net effect is a great memory that results from great understanding, not an over-sized memory that blocks or simply lacks "understanding", whatever that means. This is a great example where the reader has to use a fair amount of judgement in interpreting and understanding what he is writing and not carrying his comments too far. This item presents a challenge to my support of the compression = intelligence viewpoint. Is he saying a lossless compression scheme that merely memorizes exact patterns (that occurred more than once in the data stream) has no understanding of the data or the data source? Does it need to be an compression scheme that discovers a model of what generated the data, and not merely compress the data with blind "tricks", or are the "tricks" really understanding in a different (mathematically transformed) way that is just as valid as long as it does its job? When we look at reality are we seeing a reflection of our brain structure that is modelling some directly-unknowable world, or does our brain structure really reflect and give us a true understanding of the structure of reality? 
In other words, are we compressing reality in a brain-convenient way without having a clue as to how it was really generated behind the "veil of our senses"? This last type of Hume philosophical thinking concerning the veil of the senses is what helped Eistein discover (or invent!) relativity, as stated by Einstein.

10) Occam's razor is extremely important in intelligence for the same reason it is important in physics. A simpler "equation" (aka compression scheme) means there are fewer constraints on what the equation thinks is possible and can therefore predict or generate. The underlying reality of the world appears to be based on simpler and simpler equations, the more we learn about the underlying data generator. For example, all physics equations except for a few can be derived from quantum electrodynamics.
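Takeaways 1) and 9) above can be made concrete with a toy sketch (an editorial illustration in Python, not from the book; the `memorized`/`compressed` functions and the y = 2x data are invented for the example):

```python
# Toy contrast between rote memorization and a compressed model,
# both fit to data secretly generated by y = 2 * x.
train = {1: 2, 2: 4, 3: 6}

def memorized(x):
    """Lookup table: perfect recall on training data, silent elsewhere."""
    return train.get(x)  # None for any unseen input

def compressed(x):
    """The one short rule consistent with the same data."""
    return 2 * x

# Both agree on everything seen during training...
assert all(memorized(x) == compressed(x) for x in train)

# ...but only the compressed rule generalizes to new inputs.
print(memorized(10))   # None: memorization cannot extrapolate
print(compressed(10))  # 20
```

In Baum's terms, the lookup table stores the data without capturing its structure, while the short rule compresses the data and, precisely because it is so constrained, keeps predicting correctly off the training set.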

p39 Even Turing-complete machines can't compute all real-world laws. The brain may use statistical methods that are not finite and thus can do more than a finite Turing machine.

p161 evolution has used a reinforcement "algorithm" that increasingly improves the design of the brain through replicating, reusing, and mutating code, and the brain in turn is a similar system that uses rewards to evolve better and better algorithms. This is not exactly what he says, but what he says is too messy to be translated, even as he says it's key to his thesis.

p161 a key to building a brain that understands is that it should efficiently earn rewards
p162 "Reuse of code is compression."
p164 "Computer science consists largely of a collection of techniques for exploiting certain kinds of structure found in various problems."

p166 The program of mind evolved not only to sense and react, but to have a compact representation of the myriad states of the world and rapidly compute what actions to take by exploiting structure in the representation. The result is an extremely compressed code that can be trained on vast amounts of data and the compressed code is so enormously constrained that it can generalize to new circumstances. [ I prefer to say the small size UNCONSTRAINS what the compressed code can do. For example, F=ma allows more possibilities than F=ma+c, whatever constant or variable "c" might be. ]

p170 "by finding a compressed program consistent with a mass amount of data, evolution has learned to exploit the structure of the world and has produced a program [encoded into DNA] with syntactic [capable of being relatable to each other in space, time, and/or other "dimensions"] "OBJECTS" corresponding to real objects in the world."

p171 to recognize a variety of "chairs" a program should be able to test if it is something that can be sat upon.

p173 the picture of the world as composed of objects, counting, exploiting causality, and rotation, are examples of DNA hard coded programs of the mind

p177 people use and learn algorithms that can assemble other algorithms and can be built from smaller algorithms.

p205 programs that understand will only be created by intense computation that learns and optimizes on its own rather than by direct programming. "A hallmark of human analysis is that it is causal....No one knows how to reproduce this in a computer program."

p206 the problem with hand-coded AI programs is that they are not the result of intensive optimization algorithms. When the problem space is small, hand-coded AI can do much better than humans with little programming. When the problem space gets a little larger, like chess or the traveling salesman problem, it takes a lot of programming effort to surpass humans. Human thought is not able to directly access the methods by which it operates and output that compression technology into computer code without studying the brain itself and copying it or doing something like it, such as an economic system that learns the best compression techniques on its own.

p208 "I have argued that when code results from intense optimization done the right way, correspondence to semantics occurs through occam's razor"

p208 the brain is so powerful, we are restricted to small problem spaces, but (p210) although this reductionist method has been great for science, it does not seem to be effective for creating intelligence. Dividing up problems throws away structure, which makes it more difficult to solve the problem space, or even the pieces of the problem space that result. p212 concepts in our minds reflect structure in the world, much of which is coded into our DNA. Aliens may have a different compression scheme such that the concept of an electron is not needed to make them at least as powerful as us.

p209 since so much evolutionary power has been used to create the brain, we can't hope to create an artificial evolutionary environment that will produce something as effective (he does not here suggest the possibility of simply copying the brain) but p211 he has attempted to create evolutionary environments that can achieve understanding by NOT dividing up the problem space.

He goes into detail about the development of intelligence as being aided by economic principles, which is my primary interest, but I see some problems in his approach that I think are crucial. I'll add this to the review or comments later.

I'll edit this review as I review the first half of the book and finish the second half in 2012.

I'll post a long "supplemental" to this review in the comments.
1.0 out of 5 stars A failed Hard-core-Science attempt to explain Thought 9 Nov. 2015
By Justin Time - Published on Amazon.com
Format: Paperback Verified Purchase
With an academic background in Physics, I had high hopes for this book by a hard-core scientist: a scientific sense of how Thought may be thought of in scientific terms. Yet it was a disappointment: the writer goes round and round trying to figure out what he wants to say, throws a lot of circular ideas at you, and mixes up information from various fields to explain Thought.
I came out exhausted and wanting much more about the subject. The book was no doubt a lot of work to write, but it missed the mark by a long shot in my opinion.
5.0 out of 5 stars A truly amazing book 22 May 2011
By Christopher Vitale - Published on Amazon.com
Format: Paperback Verified Purchase
As a professor of media theory/philosophy, I've got to say, this book has had a huge impact on my own work and thought. It's really incredible, underrated, and there's nothing to equal it in print right now. Get it now! This book is encyclopedic, thorough, well written, and simply one of the best books on the subject ever written.

It should be made clear this isn't a philosophy book, in the sense that it doesn't attempt to address theories on the question of 'What is Thought?' from figures like Plato or Lao-Tzu to the present. Rather, it's an attempt to answer the question scientifically. From that perspective, there's no equal to this book. It teaches you about systems biology, modern cognitive neuroscience, genetics, computer science, artificial neural networks, fuzzy and genetic algorithms, and basically any other scientific approach to the guiding question of this book, and then brings it all together in a grand synthesis. And to Baum, this means that there are many types of thought, and human thought is only one type thereof. The world thinks, in a sense, our bodies think, evolution thinks, and our minds think.

I read this very long book cover to cover, and I'm much smarter for it. While I think the title is a little misleading, because there's no traditional philosophy or psychology or any attempt to engage with these approaches at all, for what it aims to do, this book is truly astounding. Just a scan through the table of contents should show you what you're going to learn, and it's a HELL of a lot. And it's well written. Not an easy read, but one that with some patience will teach you things with a great deal of rigor. I might recommend skipping some of the computer science sections (unless P vs. NP problems are really interesting for you), but the book's ability to handle both details and the big picture is really unique. Can't say enough good about this text!
8 of 9 people found the following review helpful
4.0 out of 5 stars fascinating but wrong 23 Nov. 2006
By Paul R. Adams - Published on Amazon.com
Format: Paperback Verified Purchase
Baum's book is always stimulating and in some ways admirable, especially in its insistence that there is nothing magical in the brain. But he's wrong in several crucial ways, the same ways that Pinker gets it wrong (for example, in "The Slate's Last Stand").
1. Despite his neural network background, Baum fatally underestimates the power of unsupervised learning. While he's right that complex networks cannot be explicitly trained without astronomically numerous examples, it's now clear that unsupervised learning (where the number of examples is quite literally astronomical) combined with the rather regular (albeit complex) structure of the world, can do most of the heavy lifting, with supervision filling in details. Explaining unsupervised learning to a lay audience is not easy (I know of no successful attempts) but cannot be shirked.
2. Because of his background, Baum fatally overestimates the power of Darwinian evolution. For example, he completely omits the Eigen error threshold problem, he does not take seriously the gap between the information content of genomes and brains, and he seems to think that adding one bit per generation (which is all evolution can do) is a powerful learning procedure.
3. He's hopelessly starry-eyed about the ability of Darwinian evolution to find "compressed descriptions" (though he's spot on in his emphasis on compression). Both evolution and learning are algorithms for adapting, and Baum completely overlooks the possibility that brains can implement the Darwinian algorithm in a different physical medium (synapses instead of nucleotides). To validly draw the conclusions he jumps to, he would have to prove either that the Darwinian algorithm cannot be implemented neurally, or that it would be far too slow (while the evidence suggests that the basic update can be done a billion times faster neurally than genetically). As Dawkins has emphasised, Darwinism is the only way to get intelligence, but this does NOT mean that only DNA can do it.
In sum, a book for the beach, not for eternity.
25 of 27 people found the following review helpful
2.0 out of 5 stars Interesting but replete with hasty argumentation 11 Oct. 2005
By John Harpur - Published on Amazon.com
Format: Hardcover Verified Purchase
The main thesis of this book, asserted repetitively, is that the mind is a computer program. Once this is borne in mind, pardon the pun, most of the book is reduced to an argument in its favour, rather than an investigation into its credibility. The book often reaches for blunt assertions to support its positions and only afterwards begins a slight retracing of steps. For example, we are told that inductive bias and learning algorithms are coded into the genome. It is "obvious": a bit of speculation on DNA, evolution, and algorithms, and out comes the result!

In his application of Occam's Razor, the author confuses the appeal of the simplest explanatory hypothesis with the belief that he has found it. The discussion of neural networks leaves aside recurrent networks, which are probably more biologically plausible than their competitors.

Likewise the idea that the brain essentially 'runs' compressed programs due to evolutionary endowments is unconvincing and philosophically leaky.

I don't want to be over-critical of the book, as it has brought together many interesting strands of work, but it just has not woven them into anything interesting. There is little new here, whether from modularity or from evolutionary programming constraints on neural activity. A lot of it is speculative, and several of the key themes are discordant due to under-analysis of their assumptions.

Several of the elaborations verge on the frivolous. For example, there is a particularly woolly argument linking the learning of Scheme to "what goes on in constructing our understanding of the world" (p. 222). Likewise, in discussing awareness and consciousness, the author relies on the use of 'main' in C to metaphorically explain how information might come together in the brain (pp. 413-415). All kinds of reification fallacies come to mind, leaving aside the thinness of the argument.

The bottom line is that the book pursues a strong cognitivist program (the brain is a computer) without convincingly examining various sides of the argument. I was certainly no wiser at the end of it.