Algorithms of the Intelligent Web Paperback – 8 Jul 2009
About the Author
Dr. Haralambos Marmanis
holds a Ph.D. in applied mathematics from Brown
University, an M.S. degree in theoretical and applied mechanics from the
University of Illinois at Urbana-Champaign, and B.S. and M.S. degrees in civil
engineering from the Aristotle University of Thessaloniki in Greece. He was the
recipient of the Sigma Xi award for innovative research in 2000, and he is the
author of numerous publications in peer-reviewed international scientific journals,
conferences, and technical periodicals.
Dmitry Babenko is the lead for the data warehouse infrastructure at Emptoris,
Inc. He is a software engineer and architect with 13 years of experience in the IT
industry. He has designed and built a wide variety of applications and infrastructure
frameworks for banking, insurance, supply-chain management, and business
intelligence companies. He received an M.S. degree in computer science from the
Belarusian State University of Informatics and Radioelectronics.
Most Helpful Customer Reviews on Amazon.com (beta)
I read the book front-to-back (twice!) before writing this review. I started reading the electronic version a couple of months ago and reread the print edition over the weekend. This is the best practical book on machine learning that you can buy today -- period. All the examples are written in Java and all the algorithms are explained in plain English. The writing style is superb! The book was written by one author (Marmanis) while the other (Babenko) contributed the source code, so there are no gaps in the narrative; it is engaging, pleasant, and fluent. The author leads the reader from the very introductory concepts to some fairly advanced topics. Some of those topics are covered in the book itself and some are left as exercises at the end of each chapter (there is a "To Do" section, which was a wonderful idea!). I did not like some of the figures (they were probably made by the authors rather than an artist), but this was only a minor aesthetic inconvenience.
The book covers four cornerstones of machine learning and intelligence: intelligent search, recommendations, clustering, and classification. It also covers a subject that today you can find only in the academic literature: combination techniques. Combination techniques are very powerful, and although the author presents them in the context of classifiers, it is clear that the same can be done for recommendations -- as the BellKor team did for the Netflix prize.
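The core idea behind combining classifiers is simple enough to sketch. Here is a hypothetical majority-vote combiner (my own interface and names, not the book's code): each member classifier votes, and the ensemble returns the majority decision.

```java
import java.util.List;

// Minimal sketch of classifier combination by majority vote.
// The Classifier interface and all names here are hypothetical,
// not taken from the book's source code.
public class MajorityVote {
    public interface Classifier {
        boolean classify(double[] x);
    }

    // Returns the majority decision of the ensemble for input x.
    // With an odd number of members there are no ties.
    public static boolean vote(List<Classifier> ensemble, double[] x) {
        int yes = 0;
        for (Classifier c : ensemble) {
            if (c.classify(x)) yes++;
        }
        return yes * 2 > ensemble.size();
    }
}
```

Even this trivial scheme often beats its best member, provided the members make reasonably independent errors.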
I work in a financial company and a number of people that I work with have PhD degrees in mathematics and computer science. I found the book so fascinating that I asked them to have a look. They had nothing but praise for this book. The consensus is that everything is explained in the simplest possible way, with clarity but without sacrificing accuracy. As one of them told me, this is a major step forward in teaching AI techniques and introducing the field to millions of developers around the world. Even for experts in the field and experienced software engineers, there are important insights in almost every chapter.
We had tried to write a software library, for a small project, that analyzes log files and assesses IT risk (e.g. probability of intrusion, preemptive alerts on application performance issues, and so on) based on Segaran's book "Programming Collective Intelligence". We spent about six weeks trying to map what was in Segaran's book onto what we wanted to do, but we did not find the depth and clarity that was required. On top of that, Segaran used Python, so the code had to be rewritten, and things didn't quite work as expected! We are now using the code from Marmanis' book, and our code analyzes Apache and WebLogic log files in order to assess risk. It just works! We wrote the code in one week! We would not have been able to succeed without reading this book.
Clearly, I am deeply impressed. This is an outstanding book; it was not just useful, it was inspiring! It is a "must have" book for every Java developer.
The content of the book includes:
* the PageRank algorithm; a content-based algorithm similar to PageRank, for which the author coined the term "DocRank" because it applies to Word, PDF, and other documents rather than web pages; search improvements based on probabilistic methods (naive Bayes); precision, recall, F1-score, and ROC curves;
* collaborative filtering as well as content-based recommendations;
* k-means, ROCK, and DBSCAN for clustering; the best explanation of the "curse of dimensionality" ever! I finally learned what this mystic term means!
* Bayesian classification; declarative programming (through the Drools rules engine); an introduction to neural networks; decision trees;
* comparing and combining classifiers: McNemar's test, Cochran's Q test, the F-test, bagging, boosting, and general classifier ensembles.
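For readers who haven't met the evaluation metrics listed above, the definitions are short enough to sketch here (my own illustration, not the book's code):

```java
// Minimal sketch of the standard evaluation metrics mentioned above
// (my own illustration, not the book's code).
public class Metrics {
    // precision = tp / (tp + fp):
    // of the items we flagged, how many were actually positive?
    public static double precision(int tp, int fp) {
        return (double) tp / (tp + fp);
    }

    // recall = tp / (tp + fn):
    // of the items that were actually positive, how many did we catch?
    public static double recall(int tp, int fn) {
        return (double) tp / (tp + fn);
    }

    // F1 is the harmonic mean of precision and recall; it is high
    // only when both are high.
    public static double f1(double p, double r) {
        return 2 * p * r / (p + r);
    }
}
```

A classifier with 8 true positives, 2 false positives, and 2 false negatives scores 0.8 on all three.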
Buy it, read it, enjoy it, and use it!
First of all, the code uses BeanShell as a way to run the examples. BeanShell is a neat idea: it's one of a number of languages that move Java closer to being a scripting language. But it's not necessary for the book's purposes. It's a bit of a pain to install, and it takes a while to get used to. In the end it's an unnecessary distraction. It's far simpler to run the examples in Eclipse with the "scripts" entered as the body of a main() method.
The preceding is a relatively minor point, but in some ways it illustrates some of the problems I had with the book. It focuses too much on the code. Yes, it's nice to have code that does what one is trying to describe, but code is not a substitute for a good explanation. In many places the book provides inadequate descriptions of the concepts, presumably on the grounds that one can just read the code. But code is not a tutorial. Code itself must be commented to be understandable. And code cannot replace a good intuitive description of the important ideas.
Furthermore, the code (and the output) takes up too much space in the book. There are pages of output when a few lines would suffice, and there are pages of code when a well-constructed paragraph would do. Pearson's coefficient is a good example. There is approximately a page of code to do the calculation. There is also half a page of code-level comments -- e.g., "The method getAverage is self-explanatory; it calculates the average of the vector that's provided as an argument." But there is no straightforward description of what's going on in the computation as a whole.
Further, there are frequent references to other books and papers, as if making such references excuses the author from explaining an idea. For example, the Pearson's coefficient discussion includes this sentence: "There's a smarter way to do this that avoids a plague of numerical calculations called the roundoff error; read the article on the corrected two-pass algorithm by Chan, Golub, and LeVeque." That's all that's said about roundoff error or the smarter way to do something or other numerical computation issues. Referring the reader to an article is not good enough. If it's worth discussing, discuss it. If it's not important enough to discuss, then don't refer the reader elsewhere except as enrichment. The author does this over and over.
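To illustrate what I mean by a page of code being unnecessary: the plain two-pass form of the computation fits in a dozen lines (my own sketch, not the book's code; the cited article adds a further correction term on top of this two-pass scheme). Subtracting the means before accumulating is precisely what tames the roundoff problem the quoted sentence alludes to.

```java
public class Pearson {
    // Two-pass Pearson correlation: pass 1 computes the means,
    // pass 2 accumulates deviations from those means. Centering first
    // avoids the catastrophic cancellation that the naive one-pass
    // sum-of-squares formula suffers from.
    public static double correlation(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) {          // pass 1: means
            meanX += x[i];
            meanY += y[i];
        }
        meanX /= n;
        meanY /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {          // pass 2: co-moments
            double dx = x[i] - meanX, dy = y[i] - meanY;
            sxy += dx * dy;
            sxx += dx * dx;
            syy += dy * dy;
        }
        return sxy / Math.sqrt(sxx * syy);
    }
}
```

That is the whole computation; a short paragraph like the one above is all the explanation it needs.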
Another example of what I would consider the book's conceptual superficiality is its treatment of Bayes' Theorem. Bayes' Theorem is mentioned many times, but the only explanation is a half-page translation of Bayes' Theorem into words -- with no explanation of why Bayes' Theorem is true. Understanding why Bayes' Theorem is true should have been an important lesson.
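For the record, the missing "why" takes two lines. By the definition of conditional probability, both conditional factorizations describe the same joint event:

```latex
% Both factorizations count the same joint event A \cap B:
P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)
% Dividing through by P(B) (assuming P(B) > 0) yields Bayes' Theorem:
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```

That is the entire argument, and it is exactly the kind of intuition the book should have supplied.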
A similar criticism holds for decision trees, except even more so. There is no discussion at all of how to construct a decision tree. Such a discussion would have been a perfect place to introduce the notion of entropy. But the word "entropy" doesn't even appear in the book.
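The omission is all the more puzzling because entropy itself is nearly a one-liner. A sketch with hypothetical names (my own, not from the book): tree construction greedily picks the attribute split that reduces this quantity the most (the "information gain").

```java
public class Entropy {
    // Shannon entropy of a class-count distribution, in bits.
    // A node that is all one class has entropy 0; a 50/50 split of
    // two classes has entropy 1. Decision-tree construction chooses
    // the split that lowers the weighted entropy of the children.
    public static double entropy(int[] classCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double h = 0;
        for (int c : classCounts) {
            if (c == 0) continue;               // lim p->0 of p*log(p) is 0
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }
}
```

A paragraph around code like this would have given readers the key idea behind tree construction.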
All-in-all I found the book disappointing. If one wants to build software that performs some of the functions discussed, the book can help. But if one wants to understand the principles underlying such software, the book is not the right place to go.
The author is attempting to teach the algorithms behind the information retrieval that is done on the web while at the same time showing those algorithms implemented in Java in a way that makes clear to the reader what has been done. This is a tricky middle ground, often resulting in books that are confusing both as textbooks and as cookbooks. Fortunately, the author has done a good job of integrating these two viewpoints into a cohesive whole, and the result is a book I can heartily recommend. The author makes liberal use of figures and explains what is being done at a high level first, showing pseudocode before the actual Java code. Discussions of the inner workings of the algorithms follow.
Note that use is made of higher-level libraries such as Lucene when they are available, because this is a book for professionals, after all, and your boss would not be pleased if you reinvented the wheel every time you implemented an algorithm. But don't worry: the explanation behind the code is there too. Another good, language-agnostic companion to this one is Machine Learning (Mcgraw-Hill International Edit). It is an oldie but a goodie.
The product description does not yet show the table of contents, so I list it here:
Chapter 1. What is the intelligent web?
Section 1.1. Examples of intelligent web applications
Section 1.2. Basic elements of intelligent applications
Section 1.3. What applications can benefit from intelligence?
Section 1.4. How can I build intelligence in my own application?
Section 1.5. Machine learning, data mining, and all that
Section 1.6. Eight fallacies of intelligent applications
Section 1.7. Summary
Chapter 2. Searching
Section 2.1. Searching with Lucene
Section 2.2. Why search beyond indexing?
Section 2.3. Improving search results based on link analysis
Section 2.4. Improving search results based on user clicks
Section 2.5. Ranking Word, PDF, and other documents without links
Section 2.6. Large-scale implementation issues
Section 2.7. Is what you got what you want? Precision and recall
Section 2.8. Summary
Section 2.9. To do
Chapter 3. Creating suggestions and recommendations
Section 3.1. An online music store: the basic concepts
Section 3.2. How do recommendation engines work?
Section 3.3. Recommending friends, articles, and news stories
Section 3.4. Recommending movies on a site such as[...]
Section 3.5. Large-scale implementation and evaluation issues
Section 3.6. Summary
Section 3.7. To Do
Chapter 4. Clustering: grouping things together
Section 4.1. The need for clustering
Section 4.2. An overview of clustering algorithms
Section 4.3. Link-based algorithms
Section 4.4. The k-means algorithm
Section 4.5. Robust Clustering Using Links (ROCK)
Section 4.6. DBSCAN
Section 4.7. Clustering issues in very large datasets
Section 4.8. Summary
Section 4.9. To Do
Chapter 5. Classification: placing things where they belong
Section 5.1. The need for classification
Section 5.2. An overview of classifiers
Section 5.3. Automatic categorization of emails and spam filtering
Section 5.4. Fraud detection with neural networks
Section 5.5. Are your results credible?
Section 5.6. Classification with very large datasets
Section 5.7. Summary
Section 5.8. To do
Books and articles
Chapter 6. Combining classifiers
Section 6.1. Credit worthiness: a case study for combining classifiers
Section 6.2. Credit evaluation with a single classifier
Section 6.3. Comparing multiple classifiers on the same data
Section 6.4. Bagging: bootstrap aggregating
Section 6.5. Boosting: an iterative improvement approach
Section 6.6. Summary
Section 6.7. To Do
Chapter 7. Putting it all together: an intelligent news portal
Section 7.1. An overview of the functionality
Section 7.2. Getting and cleansing content
Section 7.3. Searching for news stories
Section 7.4. Assigning news categories
Section 7.5. Building news groups with the NewsProcessor class
Section 7.6. Dynamic content based on the user's ratings
Section 7.7. Summary
Section 7.8. To do
Appendix A. Introduction to BeanShell
Section A.1. What is BeanShell?
Section A.2. Why use BeanShell?
Section A.3. Running BeanShell
Appendix B. Web crawling
Section B.1. An overview of crawler components
Appendix C. Mathematical refresher
Section C.1. Vectors and matrices
Section C.2. Measuring distances
Section C.3. Advanced matrix methods
Appendix D. Natural language processing
Appendix E. Neural networks
I have been working with these kinds of problems for several decades now and this is one of the best books I've come across. It is particularly relevant to the problems that are typically faced by web application developers in the Web 2.0 era.