5 of 5 people found the following review helpful
on 20 September 2013
This book was given the title it has to sell more copies. Kurzweil doesn't reveal any secrets and doesn't describe any methods that haven't been around for a long time in academia and industry already.
As a software engineer working on pattern recognition systems, I bought this book as soon as it was available, and it gave me a lot of ideas - I'm very happy I bought it. The central thesis seems to be the same as Jeff Hawkins' On Intelligence - obviously a big influence on Kurzweil - but with a focus on developments since On Intelligence was published.
Kurzweil was hired by Google shortly after publishing this book, so that he could lead a team to create the mind he describes. He has said in interviews that he left some details out of the book because he didn't want to give too much away.
Overall a good read that will provoke a lot of constructive thought, but don't expect anyone to actually build a mind based on this book alone.
4 of 4 people found the following review helpful
Ever since I read "The Singularity is Near" I've been fascinated by Ray Kurzweil - his writings, ideas, and predictions. He's not been afraid to go out on a limb and make some brave and seemingly outlandish forecasts about upcoming technological advances and their outsized impact on people and society. One of the main reasons I always found his predictions credible is that they can, in a nutshell, be reduced to just a couple of seemingly simple observations: 1. Information-technological advances happen exponentially, and 2. Information technology in particular is driving all the other technological and societal changes. The rest, to put it rather crudely, is details.
In "How to Create a Mind" Kurzweil zeroes in on just one scientific/technological project - creating a functioning replica of the human mind. He uses insights from information technology and neuroscience to propose his own idea of what the human mind (and by extension human intelligence) is all about, and to propose how to go about emulating it "in silico." Here too Kurzweil reduces a seemingly intractable problem that humanity has grappled with for millennia to just a couple of overarching insights. In his view the essence of virtually all cognitive processes can be reduced to the scientific paradigm of "pattern recognition" - the ability of a computational agent to identify and classify patterns. And the information-theoretical and engineering tool for emulating the kind of pattern recognition that goes on in a mind is the mathematical technique called the "hierarchical hidden Markov model" (HHMM). What gives Kurzweil confidence in this insight and this kind of approach are the successes he has had in starting and marketing companies that used HHMMs for speech and character recognition. Many of these technologies and their derivatives have in recent years made it into a wide-ranging set of consumer products (Apple's Siri is just one example), so it's not surprising that Kurzweil feels exceptionally confident about his insights. However, the history of computation and artificial intelligence is filled with examples of paradigms that seemed promising at one level of "thinking" complexity only to prove ineffective at tackling more sophisticated problems. Furthermore, even though I am not an expert in neuroscience, Kurzweil's descriptions of what goes on in an actual biological brain come across as not especially sophisticated.
He is obviously well informed on many neurobiological topics, far beyond what even a well-educated reader may know, but from what I know about biology the intricacies of the brain are still too complex to be reduced to a simple (simplistic?) model. Kurzweil may still turn out to be right about what he is proposing in this book (and if I had to bet, I would be loath to bet against him), but the evidence he presents leaves many potential gaps and pitfalls, and bridging them completely would require far more convincing evidence.
This is definitely a very well written book with a lot of interesting and thought-provoking insights and predictions. Anyone interested in scientific and technological progress in the upcoming years and decades would greatly benefit from reading it, especially since it's such an enjoyable book. I highly recommend it.
6 of 7 people found the following review helpful
on 16 November 2012
*A full executive summary of this book is available at newbooksinbrief dot com.
When IBM's Deep Blue defeated humanity's greatest chess player Garry Kasparov in 1997 it marked a major turning point in the progress of artificial intelligence (AI). A still more impressive turning point in AI was achieved in 2011 when another creation of IBM named Watson defeated Jeopardy! phenoms Ken Jennings and Brad Rutter at their own game. As time marches on and technology advances we can easily envision still more impressive feats coming out of AI. And yet when it comes to the prospect of a computer ever actually matching human intelligence in all of its complexity and intricacy, we may find ourselves skeptical that this could ever be fully achieved. There seems to be a fundamental difference between the way a human mind works and the way even the most sophisticated machine works--a qualitative difference that could never be breached. Famous inventor and futurist Ray Kurzweil begs to differ.
To begin with--despite the richness and complexity of human thought--Kurzweil argues that the underlying principles and neural networks that are responsible for higher-order thinking are actually relatively simple, and in fact fully replicable. Indeed, for Kurzweil, our most sophisticated AI machines are already beginning to employ the same principles and are mimicking the same neural structures that are present in the human brain.
Beginning with the brain, Kurzweil argues that recent advances in neuroscience indicate that the neocortex (whence our higher-level thinking comes) operates according to a sophisticated (though relatively straightforward) pattern recognition scheme. This pattern recognition scheme is hierarchical in nature, such that lower-level patterns representing discrete bits of input (coming in from the surrounding environment) combine to trigger higher-level patterns that represent more general categories that are more abstract in nature. The hierarchical structure is innate, but the specific categories and meta-categories are filled in by way of learning. Also, the direction of information travel is not only from the bottom up, but also from the top down, such that the activation of higher-order patterns can trigger lower-order ones, and there is feedback between the varying levels. (The theory that sees the brain operating in this way is referred to as the Pattern Recognition Theory of the Mind or PRTM).
As Kurzweil points out, this pattern recognition scheme is actually remarkably similar to the technology that our most sophisticated AI machines are already using. Indeed, not only are these machines designed to process information in a hierarchical way (just as our brain is), but machines such as Watson (and even Siri, the voice recognition software available on the iPhone) are structured in such a way that they are capable of learning from the environment. For example, Watson was able to modify its software based on the information it gathered from reading all of Wikipedia. (The technology that these machines are using is known as the hierarchical hidden Markov model or HHMM, and Kurzweil was himself a part of developing this technology in the 1980s and 1990s.)
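The hierarchical scheme described above - lower-level recognizers feeding higher-level ones, with top-down predictions priming what comes next - can be sketched in toy form. The following is only an illustrative simplification under assumed names (`Recognizer`, `WordRecognizer`), not Kurzweil's actual HHMM implementation: a word-level recognizer lowers the firing threshold of the letter recognizer it expects next, in the spirit of the book's "APPLE" predicting "E" example.

```python
# Toy sketch of hierarchical pattern recognition with top-down priming.
# Illustrative only; the class names and parameters here are assumptions,
# not Kurzweil's actual model.

class Recognizer:
    """A low-level pattern recognizer with an adjustable threshold."""

    def __init__(self, pattern, threshold=1.0):
        self.pattern = pattern      # e.g. the letter "E"
        self.threshold = threshold  # evidence required to fire
        self.bias = 0.0             # top-down priming lowers the bar

    def prime(self, amount=0.5):
        """Top-down signal: expect this pattern soon."""
        self.bias = amount

    def fires(self, evidence):
        """Bottom-up signal: does this much evidence trigger recognition?"""
        return evidence >= self.threshold - self.bias


class WordRecognizer:
    """A higher-level recognizer that predicts and primes the next letter."""

    def __init__(self, word, letter_recognizers):
        self.word = word
        self.letters = letter_recognizers  # dict: char -> Recognizer
        self.seen = ""

    def feed(self, char, evidence=1.0):
        if self.letters[char].fires(evidence):
            self.seen += char
        # Predict the next letter and prime its recognizer (top-down).
        if self.word.startswith(self.seen) and len(self.seen) < len(self.word):
            nxt = self.word[len(self.seen)]
            self.letters[nxt].prime()
        return self.seen == self.word


letters = {c: Recognizer(c) for c in "APLE"}
apple = WordRecognizer("APPLE", letters)

# Weak evidence (0.8 < threshold 1.0) does not fire an unprimed recognizer...
assert not letters["E"].fires(0.8)

for c in "APPL":
    apple.feed(c)  # strong bottom-up evidence at each step

# ...but after A-P-P-L, the "E" recognizer has been primed, so the same
# weak evidence now suffices to complete the word.
assert letters["E"].fires(0.8)
assert apple.feed("E", evidence=0.8)
```

The design point is the feedback loop: recognition flows bottom-up, while expectation flows top-down and changes what the lower level will accept - a real HHMM would express the same idea probabilistically rather than with a hard threshold.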
Given that our AI machines are now running according to the same principles as our brains, and given the exponential rate at which all information-based technologies advance, Kurzweil predicts a time when computers will in fact be capable of matching human thought--right down to having such features as consciousness, identity and free will (Kurzweil's specific prediction here is that this will occur by the year 2029).
What's more, because computer technology does not have some of the limitations inherent in biological systems, Kurzweil predicts a time when computers will even vastly outstrip human capabilities. Of course, since we use our tools as a natural extension of ourselves (figuratively, but sometimes also literally), this will also be a time when our own capabilities will vastly outstrip our capabilities of today. Ultimately, Kurzweil thinks, we will simply use the markedly superior computer technology to replace our outdated neurochemistry (as we now replace a limb with a prosthetic), and thus fully merge with our machines (a state that Kurzweil refers to as the singularity). This is the argument that Kurzweil makes in his new book 'How to Create a Mind: The Secret of Human Thought Revealed'.
Kurzweil lays out his arguments very clearly, and he does have a knack for explaining some very difficult concepts in a very simple way. My only objection to the book is that there is a fair bit of repetition, and some of the philosophical arguments (on such things as consciousness, identity and free will) drag on longer than need be. All in all there is much of interest to be learned both about artificial intelligence and neuroscience. A full executive summary of this book is available at newbooksinbrief dot com; a podcast discussion of the book will be available soon.
1 of 1 people found the following review helpful
on 15 October 2013
Loved this book; it has focused me and moved me forward when I thought there was nowhere else to go.
2 of 3 people found the following review helpful
on 30 December 2012
Basically, I think the book consists of three parts: in the first part we are introduced to pattern recognizers and how the human neocortex might work; the next part deals with AI and inspirations from biology; and the final chapters are a tour de force about the future of humanity and human intelligence.
In the first part of the book Kurzweil explains his theory on how pattern processing units in the neocortex can make human thinking possible.
Certainly, skeptics will tell us that Kurzweil's explanation is way too simple; that we are nowhere near understanding or simulating the human brain; that the brain is simply overwhelmingly complex, and if you think otherwise, you're fooling yourself...
Well, maybe, but I still think that this part of the book is the best part...!
So, is there a unifying cortical algorithm working inside a uniform cortical anatomy organized into columns and minicolumns (as Kurzweil tells us)?
Well, the brain probably employs many different mechanisms, and it is probably all rather complex. Still, with Kurzweil's "simple" model we can move forward, test the model, and improve it. Without any model we are left in the quagmire of "complexity"...
Surely there are worse sins than making difficult subjects accessible through brilliant writing and a few (over)simplifications?
So, I certainly enjoyed the first part tremendously, and will give the book five stars for this part alone.
In the next part of the book, Kurzweil uses his rather simple brain model to convince us that we could have human-like AI by around 2029. And a lot of really interesting AI, inspired by biological principles, way before that.
Well, his models are probably too simple, and we should add a number of years to his figures. That doesn't invalidate his main argument, though: that real artificial intelligences might eventually be built according to the principles that make biological intelligences work.
And these non-biological intelligences could be way faster and smarter than biological ones (in Kurzweil's words, a typical human brain contains about 300 million pattern processing units, but the AIs of the future might have billions, meaning that machine intelligence would far exceed the capabilities of the human mind).
In the final part of the book Kurzweil deals with the implications of augmenting human minds and having non-biological super-intelligences around in the future. We have heard much of this before in "The Age of Spiritual Machines" and "The Singularity is Near".
It is still highly interesting though.
And luckily, as others have observed, Kurzweil is clearly an optimist both in terms of the progress he foresees and its potential impact. If he is even partly right in his predictions then the implications could be staggering.
In a book with such an enormous and breathtaking scope, it should come as no surprise that the chapters are a little bit uneven.
Some chapters cover their topics in depth, while others feel thin.
For example, I would have liked to read more about the attentional mechanisms of the brain (we were only given a teaser in the chapter about the thalamus), and the chapter about the hippocampus and memory also left a lot unexplained.
Some of the theories presented in the book could probably also be improved.
Nevertheless, the book is a delight to read and a great inspiration.
on 24 February 2014
A very clear and complete description of how the brain works and of how technology will enable its emulation. All of this is described in accessible, simple language suitable for readers not proficient in the topic, something Kurzweil has been mastering since his previous publications.
19 of 27 people found the following review helpful
on 6 March 2013
Purchasers of this book would do well to read Colin McGinn's review in the New York Review of Books; here is part of it:
"There is another glaring problem with Kurzweil's book: the relentless and unapologetic use of homunculus language. Kurzweil writes: 'The firing of the axon is that pattern recognizer shouting the name of the pattern: "Hey guys, I just saw the written word 'apple.'"' Again:
"'If, for example, we are reading from left to right and have already seen and recognized the letters "A," "P," "P," and "L," the "APPLE" recognizer will predict that it is likely to see an "E" in the next position. It will send a signal down to the "E" recognizer saying, in effect, "Please be aware that there is a high likelihood that you will see your 'E' pattern very soon, so be on the lookout for it." The "E" recognizer then adjusts its threshold such that it is more likely to recognize an "E."'
"Presumably (I am not entirely sure) Kurzweil would agree that such descriptions cannot be taken literally: individual neurons don't say things or predict things or see things -- though it is perhaps as if they do. People say and predict and see, not little bunches of neurons, still less bits of machines. Such anthropomorphic descriptions of cortical activity must ultimately be replaced by literal descriptions of electric charge and chemical transmission (though they may be harmless for expository purposes). Still, they are not scientifically acceptable as they stand.
"But the problem bites deeper than that, for two reasons. First, homunculus talk can give rise to the illusion that one is nearer to accounting for the mind, properly so-called, than one really is. If neural clumps can be characterized in psychological terms, then it looks as if we are in the right conceptual ballpark when trying to explain genuine mental phenomena -- such as the recognition of words and faces by perceiving conscious subjects. But if we strip our theoretical language of psychological content, restricting ourselves to the physics and chemistry of cells, we are far from accounting for the mental phenomena we wish to explain. An army of homunculi all recognizing patterns, talking to each other, and having expectations might provide a foundation for whole-person pattern recognition; but electrochemical interactions across cell membranes are a far cry from actually consciously seeing something as the letter 'A.' How do we get from pure chemistry to full-blown psychology?
"And the second point is that even talk of 'pattern recognition' by neurons is already far too homunculus-like for comfort: people (and animals) recognize patterns -- neurons don't. Neurons simply emit electrical impulses when caused to do so by impinging stimuli; they don't recognize anything in the literal sense. Recognizing is a conscious mental act. Neither do neurons read or understand -- though they may be said to simulate these mental acts.
"Here I must say something briefly about the standard language that neuroscience has come to assume in the last fifty or so years (the subject deserves extended treatment -- McGinn ignores the fact that Bennett and Hacker have already done this (see the reference below); RL). Even in sober neuroscience textbooks we are routinely told that bits of the brain 'process information,' 'send signals,' and 'receive messages' -- as if this were as uncontroversial as electrical and chemical processes occurring in the brain. We need to scrutinize such talk with care. Why exactly is it thought that the brain can be described in these ways? It is a collection of biological cells like any bodily organ, much like the liver or the heart, which are not apt to be described in informational terms. It can hardly be claimed that we have observed information transmission in the brain, as we have observed certain chemicals; this is a purely theoretical description of what is going on. So what is the basis for the theory?
"The answer must surely be that the brain is causally connected to the mind and the mind contains and processes information. That is, a conscious subject has knowledge, memory, perception, and the power of reason -- I have various kinds of information at my disposal. No doubt I have this information because of activity in my brain, but it doesn't follow that my brain also has such information, still less microscopic bits of it. Why do we say that telephone lines convey information? Not because they are intrinsically informational, but because conscious subjects are at either end of them, exchanging information in the ordinary sense. Without the conscious subjects and their informational states, wires and neurons would not warrant being described in informational terms.
"The mistake is to suppose that wires and neurons are homunculi that somehow mimic human subjects in their information-processing powers; instead they are simply the causal background to genuinely informational transactions. The brain considered in itself, independently of the mind, does not process information or send signals or receive messages, any more than the heart does; people do, and the brain is the underlying mechanism that enables them to do so. It is simply false to say that one neuron literally 'sends a signal' to another; what it does is engage in certain chemical and electrical activities that are causally connected to genuine informational activities.
"Contemporary brain science is thus rife with unwarranted homunculus talk, presented as if it were sober established science. We have discovered that nerve fibres transmit electricity. We have not, in the same way, discovered that they transmit information. We have simply postulated this conclusion by falsely modelling neurons on persons. To put the point a little more formally: states of neurons do not have propositional content in the way states of mind have propositional content. The belief that London is rainy intrinsically and literally contains the propositional content that London is rainy, but no state of neurons contains that content in that way -- as opposed to metaphorically or derivatively (this kind of point has been forcibly urged by John Searle for a long time).
"And there is theoretical danger in such loose talk, because it fosters the illusion that we understand how the brain can give rise to the mind. One of the central attributes of mind is information (propositional content) and there is a difficult question about how informational states can come to exist in physical organisms. We are deluded if we think we can make progress on this question by attributing informational states to the brain. To be sure, if the brain were to process information, in the full-blooded sense, then it would be apt for producing states like belief; but it is simply not literally true that it processes information. We are accordingly left wondering how electrochemical activity can give rise to genuine informational states like knowledge, memory, and perception. As so often, surreptitious homunculus talk generates an illusion of theoretical understanding."
The rest can be accessed here:
on 23 April 2015
Re syntels (synthetic intelligences): smarts do not equate to volition per se, benign or malign; if we are not germane, syntels may just be indifferent.
on 4 May 2014
Reveals all the basic things in life that we take for granted. One of the best books I have ever read in my life.
3 of 5 people found the following review helpful
on 5 September 2014
I was once a `singularitarian', but when Ray Kurzweil failed to predict the financial crash of 2008 in The Singularity is Near, I wondered. The book even has a financial graph, with curves going upwards `exponentially'... Another buzz-word I learnt from Ray Kurzweil and my hero, Terence McKenna. I'm having second thoughts about being a disciple now!
The psychedelic philosopher, Robert Anton Wilson, once joked that a disciple is an s looking for a b to attach itself to! We can learn much from b's and disciples, like myself, and so without further ado, let us examine many b's and hols from the last few decades; to taste our present singularity religion.
Let us use a few examples of b's and s'holes from the archives of history to see how this singularity got started (this is more Terence McKenna's version, but McKenna and Kurzweil overlap in many ways). In the 1950s, a stiff academic called Richard Alpert, wasting his life away at Harvard University by running rats around a maze, discovered a magical potion that lifted a strip off of the great veil. He told his colleagues, one being Dr Timothy Leary, and off they went to found a new cultural revolt: the LSD revolt. Dr Leary's vehicle of preference was chemical LSD, of course, because of the mind-expanding quality of the drug. Indeed, chemical LSD would expand Tim's mind, like a big balloon, to see over the game. The game being the military-industrial complex and monkey politics that millions from that youthful generation realised was a con. Instead of war, this generation tuned into cosmic consciousness, free love and hot pants. So what was all that about?
Fast forward 30-odd years, to the late 1990s, and the LSD revolt gave birth to electronic LSD, that is, cyberspace. Fast forward another 20 years and this is the world we are now living in, and we are indeed in a Technicolor wonder-land. This wonder-land is supposed to be the measure of all happiness. These days we are happy with our technology, and rightly so. We have Android phones that allow us to hold the net in the palm of our hand. Internet highways allow us to meet interesting people - the people I meet online are far more interesting than the people I meet in my home town - and our computer games throw us all into wondrous realities; other worlds, with virtual galaxies that allow all, including the poorest among us, to be who the heck we want to be, without being judged by the authorities or nosey neighbours etc.
You get the picture, obviously, because this is our 21st century, lit up in neon lights. And it gets better; people like the Silicon Valley entrepreneur Ray Kurzweil promise that our techno-smarts are only the beginning and that sometime soon we will all be enjoying the equivalent of a technological orgasm. Kurzweil and his followers call this future state the singularity. The singularity is eschatology with batteries, a bit like the apocalypse, but rather than fire and burning in Hell for a zillion years, we all get our consciousness uploaded into a virtual-reality existence and become immortal and live in a 4D mansion, with a 3D TV implanted inside our brain. This technological Nirvana is taken seriously by a lot of very clever people!
All this hubris is drowning out the dissenting voices, like that of the guy who coined the term `virtual reality', Jaron Lanier. Lanier argues that today's cyberspace is moving you and me towards a new techno-serfdom: a psychedelic cage that will strip away our humanness and turn us all into gadgets! This is a different picture from the romantic hopes of 1990s cyberspace. Indeed, when I was in school, I myself believed in this type of techno progress.
Back in the 1990s, my friends and I would listen to the futurist philosopher Terence McKenna give spellbinding talks on the subject of the then-embryonic information superhighway. McKenna was convinced of the Utopian possibilities of the Internet. Cultural free-for-alls and other fun ontologies promised by the Internet would free our minds from the work-cycle, awaken the collective unconscious, demolish the cultural pillars of Christian civilisation and kick the doors off heaven's hinges; phew! This brave new world was going to herald the cultural singularity and the new dawn; and finally, we were all to transcend to silicon light (you had to be there, I guess). According to McKenna, and indeed Jaron Lanier - and most silicon entrepreneurs at the time - the Internet would allow us all an existence in the radiant afterglow of a post-Western civilisation. Capitalist values would be swept away, along with adverts and 'male dominator' politics: "We'll go there and we'll leave the Earth and dance forever in the hallways of the astral imagination" (McKenna)! Jaron Lanier now admits this was foolish, and he's trying to warn us all before 'lock-in' halts our humanness and turns us all into slaves.
In his book, You Are Not a Gadget, Lanier argues that if we fast forward 20-odd years from now, then capitalism is indeed wobbling at the foundations (but not at the top, you see). This means that we serfs are suffering down below, toiling as we always have done, unpaid and unrecognized; and it gets worse. While we work for nothing, like when we write unpaid reviews on Amazon or 'help' Wikipedia, the 'lords of the clouds' have monopolised the creative surplus and are squeezing the masses until the pips squeak! Only the lucky few who control the means of production reap the money harvest, whilst we serfs toil away in cyberspace, unpaid and de-personalised in the gas of collective surfing; billions of gremlins looking at a screen is our future. Jaron Lanier is no Luddite - he personally knew Terence McKenna and Tim Leary and all the movers from the idealistic 1990s - and this is why his book is essential for our future. It's a warning in the vein of Huxley and Orwell, but not as happy. Let us hope that Jaron Lanier is as wrong about his negativity for the future as McKenna was wrong about his utopianism. Only time will tell.
That was a downer... wasn't it?