
Bayesian Reasoning and Machine Learning [Hardcover]

David Barber
5.0 out of 5 stars  See all reviews (2 customer reviews)
RRP: £45.00
Price: £38.25 & FREE Delivery in the UK.
You Save: £6.75 (15%)
Only 10 left in stock (more on the way).
Dispatched from and sold by Amazon. Gift-wrap available.
Want it Tuesday, 22 April? Choose Express delivery at checkout. Details

Formats

Kindle Edition: £27.81
Hardcover: £38.25

Book Description

Published 2 Feb 2012 | ISBN-10: 0521518148 | ISBN-13: 978-0521518147
Machine learning methods extract value from vast data sets quickly and with modest resources. They are established tools in a wide range of industrial applications, including search engines, DNA sequencing, stock market analysis, and robot locomotion, and their use is spreading rapidly. People who know the methods have their choice of rewarding jobs. This hands-on text opens these opportunities to computer science students with modest mathematical backgrounds. It is designed for final-year undergraduates and master's students with limited background in linear algebra and calculus. Comprehensive and coherent, it develops everything from basic reasoning to advanced techniques within the framework of graphical models. Students learn more than a menu of techniques; they develop analytical and problem-solving skills that equip them for the real world. Numerous examples and exercises, both computer based and theoretical, are included in every chapter. Resources for students and instructors, including a MATLAB toolbox, are available online.

Frequently Bought Together

Bayesian Reasoning and Machine Learning + Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning Series) + Pattern Recognition and Machine Learning (Information Science and Statistics)
Price For All Three: £143.73

Buy the selected items together


Product details

  • Hardcover: 708 pages
  • Publisher: Cambridge University Press (2 Feb 2012)
  • Language: English
  • ISBN-10: 0521518148
  • ISBN-13: 978-0521518147
  • Product Dimensions: 24.6 x 18.8 x 4.1 cm
  • Average Customer Review: 5.0 out of 5 stars  See all reviews (2 customer reviews)
  • Amazon Bestsellers Rank: 32,887 in Books (See Top 100 in Books)


Product Description

Review

'This book is an exciting addition to the literature on machine learning and graphical models. What makes it unique and interesting is that it provides a unified treatment of machine learning and related fields through graphical models, a framework of growing importance and popularity. Another feature of this book lies in its smooth transition from traditional artificial intelligence to modern machine learning. The book is well-written and truly pleasant to read. I believe that it will appeal to students and researchers with or without a solid mathematical background.' Zheng-Hua Tan, Aalborg University, Denmark

'With approachable text, examples, exercises, guidelines for teachers, a MATLAB toolbox and an accompanying website, Bayesian Reasoning and Machine Learning by David Barber provides everything needed for your machine learning course. Only students not included.' Jaakko Hollmén, Aalto University

'The chapters on graphical models form one of the clearest and most concise presentations I have seen … The exposition throughout uses numerous diagrams and examples, and the book comes with an extensive software toolbox - these will be immensely helpful for students and educators. It's also a great resource for self-study.' Arindam Banerjee, University of Minnesota

'I repeatedly get unsolicited comments from my students that the contents of this book have been very valuable in developing their understanding of machine learning … My students praise this book because it is both coherent and practical, and because it makes fewer assumptions regarding the reader's statistical knowledge and confidence than many books in the field.' Amos Storkey, University of Edinburgh

Book Description

This practical introduction for final-year undergraduate and graduate students is ideally suited to computer scientists without a background in calculus and linear algebra. Numerous examples and exercises are provided. Additional resources available online and in the comprehensive software package include computer code, demos and teaching materials for instructors.



Customer Reviews

5.0 out of 5 stars (2 customer reviews, both 5 stars)
Most Helpful Customer Reviews
5 of 6 people found the following review helpful
5.0 out of 5 stars Good Book for Machine Learning 21 Nov 2012
Format: Hardcover | Verified Purchase
I am taking a course related to machine learning, and this book helps me with my study. I also have Bishop, but that book has too much math and I cannot follow it. This one is explained in a simpler way: fewer proofs, and an intermediate level of detail on each topic. Not for complete beginners or for experts.
6 of 8 people found the following review helpful
5.0 out of 5 stars Great Book! 16 July 2012
By Rosh
Format: Hardcover | Verified Purchase
A very clear and concise book; I highly recommend it. The concepts are explained well, with sufficient examples. One of the best books out there for machine learning.
Most Helpful Customer Reviews on Amazon.com (beta)
Amazon.com: 4.5 out of 5 stars  10 reviews
83 of 85 people found the following review helpful
5.0 out of 5 stars Brilliant and accessible 17 May 2012
By T. Triche - Published on Amazon.com
Format:Hardcover
Don't take my word for it, though; read the book online. For some reason Amazon decided to delete the URL, so just do a search for David Barber and go to his home page at UCL (University College London), where links to a PDF of the book and to recent publications of his can be found.

Barber has done an excellent job of making extremely complex and contemporary ideas accessible to anyone with a reasonable mathematical background, and he puts them in context ("these techniques can be applied to finance, biology, and speech recognition", paraphrased). Read through it and see for yourself. I find this book more accessible than Daphne Koller & Nir Friedman's (also excellent) text, Probabilistic Graphical Models, despite my immense respect for the authors of the latter.
39 of 40 people found the following review helpful
5.0 out of 5 stars Extremely Suitable for Self Study 6 Dec 2012
By Let's Compare Options - Published on Amazon.com
Format:Hardcover|Verified Purchase
Unlike many (most?) books and courses on machine learning, Barber's outstanding text is very suitable for self study. There are many reasons for this, and high among them is the fact that he carefully explains, with commonsense examples and applications, many of the tougher logical, mathematical and processing foundations of pattern recognition.

For relative beginners: Bayesian techniques began in the 1700s as a way to model how a degree of belief should be revised to account for new evidence. The techniques and formulas were largely discounted and ignored until the modern era of computing, pattern recognition and AI, now machine learning. Bayes' rule relates the two inverse conditional probabilities P(A|B) and P(B|A), and more broadly gives a precise mathematical model for the inference process itself under uncertainty, with deductive reasoning and logic as the special case of certainty (values resolving to 0/1, true/false, yes/no, etc.). In "odds" terms (useful in many fields, including optimal expected utility functions in decision theory), posterior odds = prior odds * the Bayes factor.
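To make the odds form concrete, here is a minimal Python sketch (my own illustration with made-up numbers, not code from the book): the Bayes factor is the likelihood ratio of the evidence under the two hypotheses, and multiplying it by the prior odds gives the posterior odds.

```python
# Posterior odds = prior odds * Bayes factor (likelihood ratio).
# Illustrative numbers only: a test that is 90% sensitive and 95% specific,
# applied to a hypothesis with prior probability 0.01.

def posterior_odds(prior_p, p_evidence_given_h, p_evidence_given_not_h):
    prior_odds = prior_p / (1.0 - prior_p)
    bayes_factor = p_evidence_given_h / p_evidence_given_not_h
    return prior_odds * bayes_factor

odds = posterior_odds(0.01, 0.90, 0.05)  # Bayes factor = 0.90 / 0.05 = 18
posterior_p = odds / (1.0 + odds)        # convert odds back to a probability
print(round(posterior_p, 4))             # prints 0.1538
```

Note how a strong Bayes factor of 18 still leaves a modest posterior when the prior odds are low, which is exactly the kind of reasoning-under-uncertainty the book formalizes.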

For context, I'm the lead scientist at IABOK dot org-- we design algorithms for huge data mining problems and applications. This text is our "go to" reference for programmers not up to speed in many of the new pattern recognition algorithms, including those writing new versions. All the most recent relevant models, from a probability standpoint, are represented here, with a clarity that is stunning. My only criticism (a mild one) is that, when applying Barber's examples to Bodies of Knowledge and data mining, he skips Prolog, backward chaining, predicate calculus and other techniques that are the foundation of automated inference systems (systems that extend knowledge bases automatically by checking whether new propositions can be inferred from the KB as consistent, relevant, etc.).

In the next 20 years, algorithms will rule this planet. If you either want to see the future of your grandkids, or participate in it if you're young, this is a MUST HAVE exploration of where what we used to call AI is now headed. There IS plenty of calculus in this volume, so don't mistakenly think it is "simple" -- but if you put the time in, you can "get it" even if you're a bright undergrad level thinker. The author's goal of training new algorithm programmers is laudable and right on point for where pattern recognition is headed.

With this amount of math, how can we rate it so highly for self-study? Easy: unlike most "recipe" books that just give bushels of code or techniques, the author here gives the what, where, when and why of both code and math, not just the how, as his goal is independent, creative contributors who can write their OWN algorithms. There are a few minor UK vs US differences in terminology (event space instead of sample space, for example), but they expand the reader's horizon rather than distract or annoy as some others do. There are others like Bishop, and many more that have more recipes and more compact and difficult math, but you have to be either really good (just show me the recipe) or really bad (I don't know what I'm doing, but can follow this recipe) to benefit from them. This is a happy middle ground that does not disappoint.
23 of 25 people found the following review helpful
5.0 out of 5 stars Great effort put in explanations and examples 13 Jun 2012
By Vladislavs Dovgalecs - Published on Amazon.com
Format:Hardcover
First I would like to thank the author (and the publisher) for giving away a free PDF of the complete book. That's really kind of them. Knowing its contents motivated me to buy a hard copy for my library.

Now to the content. The author has done a great job in introducing probabilistic concepts and pushing forward to more advanced and practically interesting techniques. There are many examples in the text that often help to grasp the workings of a method or an approach.

For someone like me with very little background in probabilistic methods, this is a real textbook. I am still reading it chapter by chapter and can recommend it as reading for advanced undergraduate, graduate and PhD students. The material in each chapter is well introduced and motivated, equations arrive just in time with variables and transformations explained, and there are numerous exercises at the end of each chapter. All this makes the book almost self-contained; at least a semester could be taught from it.

The book is also accompanied by MATLAB code, which the author refers to at the end of each chapter. The code is organized as a toolbox of functions with demos for each chapter, which lets you apply the acquired knowledge to your own problems.

As of the moment of writing, I've found just a few typos, none of them disturbing. I don't see any serious reason not to give a solid 5 stars to this book.
17 of 19 people found the following review helpful
5.0 out of 5 stars One of the best text to learn machine learning 27 Sep 2012
By quant@inside - Published on Amazon.com
Format:Hardcover|Verified Purchase
I have read a similar book on machine learning, namely Pattern Recognition and Machine Learning (by Bishop). Before I read Barber's book, I considered Bishop's to be the best in machine learning (with a Bayesian focus). However, after reading this book, I can definitely say it is better than Bishop's in many senses. There are lots of examples in each chapter, with MATLAB code for many of them. It also covers more material than Bishop. The chapters on dynamic models and graphical models are particularly awesome. This book doesn't assume much background in probability (one master's-level course on probability and statistics is probably more than enough). However, some chapters are advanced, and are flagged as such in the book.
9 of 11 people found the following review helpful
5.0 out of 5 stars A Truly Modern Introduction to Bayesian Reasoning and Machine Learning 18 Nov 2012
By Adnan Masood - Published on Amazon.com
Format:Hardcover
If you are scouring for an exploratory text in probabilistic reasoning, basic graph concepts, belief networks, graphical models, statistics for machine learning, learning as inference, naïve Bayes, Markov models and machine learning concepts, look no further. Barber has done a praiseworthy job in describing key concepts in probabilistic modeling and probabilistic aspects of machine learning. Don't let the size of this 700-page, 28-chapter book intimidate you; it is surprisingly easy to follow and well formatted for the modern-day reader.

With excellent follow-ups in summaries, code and exercises, Dr. David Barber, a reader at University College London, provides a thorough and contemporary primer in machine learning with Bayesian reasoning. Starting with probabilistic reasoning, the author provides a refresher on how the standard rules of probability are a consistent, logical way to reason with uncertainty. He proceeds to discuss basic graph concepts and belief networks, explaining how we can reason with certain or uncertain evidence using repeated application of Bayes' rule. Since a belief network, a factorization of a distribution into conditional probabilities of variables given their parents, is a specific case of a graphical model, the book leads us into the discipline of representing probability models graphically. After efficient inference in trees and the junction tree, the text elucidates the key stages of moralization, triangulation, potential assignment, and message passing.
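The evidence-updating described here can be illustrated with a toy two-node belief network, Rain -> WetGrass (a sketch of my own with invented numbers, not an example from the book or its MATLAB toolbox): the joint factorizes as P(Rain, WetGrass) = P(Rain) P(WetGrass | Rain), and observing wet grass updates the belief in rain via Bayes' rule.

```python
# Toy belief network: Rain -> WetGrass. All numbers are made up for illustration.
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.3}  # P(WetGrass=True | Rain)

# Posterior P(Rain=True | WetGrass=True) by Bayes' rule:
#   P(R | W) = P(W | R) P(R) / sum_r P(W | r) P(r)
numerator = p_wet_given_rain[True] * p_rain
evidence = (p_wet_given_rain[True] * p_rain
            + p_wet_given_rain[False] * (1 - p_rain))
posterior = numerator / evidence
print(round(posterior, 3))  # prints 0.429
```

Observing the evidence more than doubles the belief in rain (from 0.2 to about 0.43); in larger networks the same computation is organized by the message-passing machinery the chapter covers.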

I particularly enjoyed the follow-up chapter called Statistics for Machine Learning, which discusses the classical univariate distributions, including the exponential, Gamma, Beta, Gaussian and Poisson. It summarizes the Kullback-Leibler divergence, a measure of the difference between distributions, and notes that Bayes' rule enables parameter learning by translating a prior parameter belief into a posterior parameter belief based on observed data. Learning as inference, naïve Bayes, learning with hidden variables and Bayesian model selection are followed by machine learning concepts. I found the sequence of chapters a bit off (shouldn't graphical models be discussed before a specific case of them?), but since the book is more rooted in practice than in theorem-proving, the order ultimately makes sense.
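The Kullback-Leibler divergence mentioned above, KL(p || q) = sum_i p_i log(p_i / q_i), is easy to compute for discrete distributions; here is a short sketch of my own (not code from the book):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as equal-length lists.

    Asymmetric and non-negative; zero if and only if p equals q.
    Terms with p_i = 0 contribute nothing, by convention 0 * log 0 = 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # a fair coin
q = [0.9, 0.1]   # a heavily biased coin
print(round(kl_divergence(p, q), 4))  # prints 0.5108
print(kl_divergence(p, p))            # prints 0.0
```

Note the asymmetry: KL(p || q) and KL(q || p) generally differ, which is why the direction of the divergence matters in the approximate-inference chapters later in the book.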

Since nearest-neighbour methods are general classification methods, the book continues from conditional mixtures of Gaussians into unsupervised linear dimension reduction, supervised linear dimension reduction, kernel extensions, Gaussian processes, mixture models, latent linear models, latent ability models, and discrete- and continuous-state Markov models, culminating in distributed computation of models, sampling, and the holy grail of deterministic approximate inference.

One of the many great things about this book is its practical, code-oriented approach; tips with applied insight, like "Consistency methods such as loopy belief propagation can work extremely well when the structure of the distribution is close to a tree. These methods have been spectacularly successful in information theory and error correction.", make this text distinguished and indispensable.