Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (Computational Neuroscience) [Hardcover]

Peter Dayan, L. F. Abbott
4.0 out of 5 stars (1 customer review)



Formats

  Format            Amazon Price   New from   Used from
  Kindle Edition    29.74          --         --
  Hardcover         --             --         --
  Paperback         31.30          --         --
Trade In this Item for up to 9.48
Trade in Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (Computational Neuroscience) for an Amazon Gift Card of up to 9.48, which you can then spend on millions of items across the site. Trade-in values may vary (terms apply).

Book Description

Publication date: 3 Dec 2001 | ISBN-10: 0262041995 | ISBN-13: 978-0262041997
Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory. The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.


Product details

  • Hardcover: 476 pages
  • Publisher: MIT Press (3 Dec 2001)
  • Language: English
  • ISBN-10: 0262041995
  • ISBN-13: 978-0262041997
  • Product Dimensions: 25.9 x 21.2 x 3.2 cm
  • Average Customer Review: 4.0 out of 5 stars (1 customer review)
  • Amazon Bestsellers Rank: 1,815,868 in Books


Product Description

Review

"Not only does the book set a high standard for theoretical neuroscience, it defines the field."-- Dmitri Chklovskii, "Neuron"

About the Author

Peter Dayan is on the faculty of the Gatsby Computational Neuroscience Unit at University College London. L. F. Abbott is the Nancy Lurie Marks Professor of Neuroscience and Director of the Volen Center for Complex Systems at Brandeis University. He is the coeditor of Neural Codes and Distributed Representations (MIT Press, 1999).

Inside This Book

First Sentence
Neurons are remarkable among the cells of the body in their ability to propagate signals rapidly over large distances.
Customer Reviews

4.0 out of 5 stars (1 customer review)
  5 star: 0
  4 star: 1
  3 star: 0
  2 star: 0
  1 star: 0
Most Helpful Customer Reviews
0 of 1 people found the following review helpful
4.0 out of 5 stars Very good! 12 Feb 2013
By Spiros
Format:Paperback
Very useful, especially for someone who is new to the field of computational neuroscience. I recommend it to anyone.
Most Helpful Customer Reviews on Amazon.com (beta)
Amazon.com: 4.0 out of 5 stars (13 reviews)
51 of 58 people found the following review helpful
4.0 out of 5 stars Good overview 24 May 2003
By Dr. Lee D. Carlson - Published on Amazon.com
Format:Hardcover|Verified Purchase
This book is a detailed overview of the computational modeling of nervous systems, from the molecular and cellular level up to the standpoint of human psychophysics and psychology. The authors divide their conception of modeling into descriptive, mechanistic, and interpretive models. My sole interest was in Part 3, which covers the mathematical modeling of adaptation and learning, so my review will be confined to those chapters. The virtue of this book, and others like it, is the insistence on empirical validation of the models, rather than their justification by "thought experiments" and armchair reasoning, as is typically done in philosophy.
Part 3 begins with a discussion of synaptic plasticity and the degree to which it explains learning and memory. The goal here is to develop mathematical models to understand how experience and training modify neuronal synapses and how these changes affect neuronal firing patterns and eventual behavior. The Hebb model of neuronal firing is ubiquitous in this area of research, and the authors discuss it as a rule that synapses change in proportion to the correlation of the activities of pre- and postsynaptic neurons. Experimental data illustrating long-term potentiation (LTP) and long-term depression (LTD) are given immediately. The authors concentrate mostly on models based on unsupervised learning in this chapter. The rules for synaptic modification are given as differential equations and describe the rate of change of the synaptic weights with respect to the pre- and postsynaptic activity. The covariance and BCM rules are discussed, the first requiring either pre- or postsynaptic activity separately, the second requiring both simultaneously. The authors consider ocular dominance in the context of unsupervised learning and study the effect of plasticity on multiple neurons. The last section of the chapter covers supervised learning, in which a set of inputs and the desired outputs are imposed during training.
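As a rough illustration of the flavor of these rules, here is a covariance-style Hebbian update for a single linear neuron (my own Python sketch, not code from the book; the sizes, learning rate, and input statistics are invented). The weight change is proportional to the postsynaptic rate times the deviation of presynaptic activity from its mean:

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_patterns = 10, 500
    u = rng.normal(size=(n_patterns, n_inputs))   # presynaptic activity patterns
    w = rng.normal(scale=0.1, size=n_inputs)      # small random initial weights
    eta = 0.01                                    # learning rate
    u_mean = u.mean(axis=0)                       # mean presynaptic activity

    for pattern in u:
        v = w @ pattern                           # postsynaptic rate of a linear unit
        # covariance rule: strengthen synapses whose presynaptic activity is
        # above its mean while the postsynaptic unit is active, weaken otherwise
        w += eta * v * (pattern - u_mean)

Left unchecked, a rule like this lets the weights grow without bound, which is why the chapter also treats saturation and normalization constraints.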
In the next chapter, the authors consider the area of reinforcement learning, beginning with a discussion of the mathematical models for classical conditioning and introducing the temporal difference learning algorithm. The authors discuss the Rescorla-Wagner rule, which is a trial-by-trial learning rule for the weight adjustments, in terms of the reward, the prediction, and the learning rate. They then discuss more realistic settings such as static action choice, where the reward or punishment immediately follows the action taken, and sequential action choice, where rewards may be delayed. The authors discuss the foraging behavior of bees as an example of static action choice, reducing it to a stochastic two-armed bandit problem. The maze task for rats is discussed as an example of sequential action choice, which the authors treat with the "actor-critic algorithm." A generalized reinforcement learning algorithm is then discussed, with the rat water maze problem given as an example.
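The Rescorla-Wagner update is compact enough to sketch directly. Below is a hypothetical Python version (my own illustration; the stimulus encoding, trial counts, and learning rate are placeholders), together with the classic blocking effect, where a pre-trained stimulus prevents a newly added one from acquiring weight:

    import numpy as np

    def rescorla_wagner(stimuli, rewards, eps=0.1, w=None):
        """Trial-by-trial updates: w <- w + eps * (r - v) * u, with v = w . u."""
        w = np.zeros(stimuli.shape[1]) if w is None else w
        for u, r in zip(stimuli, rewards):
            v = w @ u               # predicted reward on this trial
            delta = r - v           # prediction error drives the weight change
            w = w + eps * delta * u
        return w

    # Blocking: train stimulus A alone, then present A and B together with
    # the same reward. A already predicts the reward, so B learns almost nothing.
    a_alone = np.tile([1.0, 0.0], (100, 1))
    a_with_b = np.tile([1.0, 1.0], (100, 1))
    w = rescorla_wagner(a_alone, np.ones(100))
    w = rescorla_wagner(a_with_b, np.ones(100), w=w)
    print(w)    # approximately [1.0, 0.0]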
Chapter 10 is an overview of what the authors call "representational learning," which, as they explain, is the study of neural representations from a computational point of view. The goal is to begin with sensory input and find out how representations are generated on the basis of these inputs. That such representations are necessary is motivated, for example, by the visual system: the authors argue that what is presented at the retina is too crude for an accurate representation of the visual world. The main strategy in the chapter is to begin with a deterministic or probabilistic input and construct a recognition algorithm that gives an estimate of the causes underlying the input. The algorithms constructed are all based on unsupervised learning, and hence the existence and nature of the causes must be computed using heuristics and the statistics of the input data. These two requirements are met in the chapter by constructing first a generative model and then a recognition model. The familiar expectation-maximization algorithm is discussed as a method of optimization between real and synthetic data in generative models, and a detailed overview of it is given in the context of density estimation. The authors then move on to discuss causal models for density estimation, such as Gaussian mixtures, the K-means algorithm, factor analysis, and principal components analysis. They then discuss sparse coding as a technique to deal with the fact that cortical activity is not Gaussian, illustrating this with an experimental sample showing that the activity of a neuron in the inferotemporal area of the macaque brain follows an exponential distribution. The reader will recognize "sparse" probability distributions as being heavy-tailed, i.e., usually taking values close to zero but sometimes far from it. The authors emphasize the difficulty of computing the recognition distribution explicitly; the Olshausen/Field model is used to give a deterministic approximate recognition model for this purpose. The authors then give a fairly detailed overview of a two-layer, nonlinear "Helmholtz machine" with binary inputs, and illustrate how to obtain the expectation maximization in terms of the Kullback-Leibler divergence. Learning in this model takes place via stochastic sampling and occurs in two phases, the so-called "wake and sleep" algorithm. The last section of the chapter gives a general discussion of how recent interest in coding, transmitting, and decoding images has led to much more research into representational learning algorithms, including multi-resolution decomposition and its relationship to the available coding algorithms.
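Of the causal density-estimation methods listed above, K-means is the easiest to sketch. Here is a hypothetical Python version (my own, not code from the book) that alternates an assignment step and an update step; in the EM framing the review describes, assignment plays the role of a hard recognition step and the update refits the generative model:

    import numpy as np

    def kmeans(x, k, n_iters=50, seed=0):
        """Cluster the points x (shape (n, d)) around k centers."""
        rng = np.random.default_rng(seed)
        centers = x[rng.choice(len(x), size=k, replace=False)].copy()
        for _ in range(n_iters):
            # assignment step: each point is assigned to its nearest center
            dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            # update step: each center moves to the mean of its assigned points
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean(axis=0)
        return centers, labels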
18 of 22 people found the following review helpful
5.0 out of 5 stars Great textbook and reference 15 Aug 2003
By Geoffrey Goodhill - Published on Amazon.com
Format:Hardcover
This book is certainly the most thorough textbook currently available on many aspects of computational neuroscience. It works very carefully through the fundamental assumptions and equations underlying large tracts of contemporary quantitative analysis in neuroscience. It is an ideal introductory book for those with a quantitative background, and is destined to become a standard course book in the field.
21 of 27 people found the following review helpful
4.0 out of 5 stars Theoretical Neurosciences from a Computational Perspective 10 Jun 2004
By Joseph J Grenier - Published on Amazon.com
Format:Hardcover
This text will become a standard course book for graduate schools in computational neuroscience. You need to know advanced engineering mathematics and probability theory to be able to understand this book. Dayan & Abbott model primary visual cortical, MT, LIP, and motor cortical neurons as single units, but also as populations (clusters) of firing cells. They discuss Bayes' theorem, probability theory as it applies to the brain, and parietal lobe function as well. They derive all the equations associated with these models for the student so that the more advanced parts of the book are comprehensible. The book is not meant to be a general neuroscience book, but rather a course book about neuronal modeling, computational neurobiology, and neural engineering. It serves these three purposes well. In my opinion, this is the best written account of neuron modeling out there for the graduate student and researcher. Methods in Neuronal Modeling by Christof Koch is the other great book on this subject. If you own these two books you should be able to advance in high-level neural modeling. There are numerous equations and formulae of interest throughout each chapter in these two volumes. The price of 39.00 USD for the hardcover is really quite a bargain.
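As a concrete taste of the population-coding material the reviewer mentions, here is a hedged sketch of maximum-likelihood decoding (my own Python; the Gaussian tuning curves, rates, and stimulus values are invented for the example). A stimulus is read out from the Poisson spike counts of a population of tuned neurons:

    import numpy as np

    def tuning(s, preferred, r_max=50.0, sigma=10.0):
        """Gaussian tuning curves: mean firing rate of each neuron for stimulus s."""
        return r_max * np.exp(-0.5 * ((s - preferred) / sigma) ** 2)

    preferred = np.linspace(-40.0, 40.0, 21)          # preferred stimuli of 21 neurons
    true_s = 12.0
    rng = np.random.default_rng(1)
    counts = rng.poisson(tuning(true_s, preferred))   # observed spike counts

    # evaluate the Poisson log-likelihood on a grid of candidate stimuli
    grid = np.linspace(-40.0, 40.0, 401)
    rates = tuning(grid[:, None], preferred)          # shape (grid, neurons)
    log_like = (counts * np.log(rates) - rates).sum(axis=1)
    s_hat = grid[log_like.argmax()]                   # maximum-likelihood estimate
    print(s_hat)    # close to true_s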
5 of 5 people found the following review helpful
5.0 out of 5 stars Good book for computational neuroscience 28 Jan 2007
By Ed Tan - Published on Amazon.com
Format:Paperback
I am a mathematician and economist interested in how the human brain works. To me, this is (so far) the best book that uses equations to describe the overall picture of brain function. Even though it might not touch on in-depth research topics, I am sure it gives anyone interested in neuroscience a very solid foundation on which more advanced topics are built. (It actually drew me into more in-depth research topics, such as reinforcement learning and the reward-punishment system.)

If math is a familiar language to you (say, systems of differential equations and Bayesian probability), and you want to know in technical detail how the brain functions, this book is for you. I think you can then go into research topics of your own interest after finishing this book.
4 of 4 people found the following review helpful
4.0 out of 5 stars New Title: Theoretical Neuroscience - Firing Rate Models 27 Feb 2013
By Will Wagstaff - Published on Amazon.com
Format:Paperback|Verified Purchase
While I would like to say that this book is all-encompassing, it only briefly touches upon one of the very important camps of computational neuroscience: the spiking models. Be warned that you will be viewing theoretical neuroscience through one lens, targeted mainly at firing rates. A brief distinction: spiking models include the dynamic changes of the individual spikes of neurons in neural models, and tend to focus on the contribution of the temporal and electrical components of neuronal action potentials as they move down axons and interact with other neurons. Firing-rate models condense this spiking behavior into a probability distribution governing the rate at which the neuron fires (think Hertz). This is a fantastically written book, but I would suggest Izhikevich's book as a companion.
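To make the distinction concrete, here is a hypothetical side-by-side sketch (my own Python, with arbitrary parameters): a firing-rate unit that evolves a smooth rate variable, and a leaky integrate-and-fire neuron that emits discrete spikes:

    import numpy as np

    def rate_unit(drive, tau=0.01, dt=0.001):
        """Firing-rate model: tau * dr/dt = -r + F(drive), F threshold-linear."""
        F = lambda x: np.maximum(x, 0.0)
        r = np.zeros(len(drive))
        for t in range(1, len(drive)):
            r[t] = r[t - 1] + dt / tau * (-r[t - 1] + F(drive[t - 1]))
        return r

    def lif_neuron(drive, tau=0.02, dt=0.001,
                   v_rest=-70e-3, v_th=-54e-3, v_reset=-80e-3):
        """Leaky integrate-and-fire: integrate voltage, spike and reset at threshold.
        `drive` is the input in volts (membrane resistance folded in)."""
        v = np.full(len(drive), v_rest)
        spikes = np.zeros(len(drive), dtype=bool)
        for t in range(1, len(drive)):
            v[t] = v[t - 1] + dt / tau * (v_rest - v[t - 1] + drive[t - 1])
            if v[t] >= v_th:
                spikes[t] = True
                v[t] = v_reset
        return v, spikes

    steps = 500
    v, spikes = lif_neuron(np.full(steps, 18e-3))   # constant suprathreshold drive
    r = rate_unit(np.full(steps, 25.0))             # drive interpreted as a target rate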