Your Code as a Crime Scene: Use Forensic Techniques to Arrest Defects, Bottlenecks, and Bad Design in Your Programs (The Pragmatic Programmers) Paperback – 9 Apr 2015
About the Author
Adam Tornhill combines degrees in engineering and psychology to get a different perspective on software. He works as an architect and programmer and also writes open-source software in a variety of programming languages. He's the author of the popular book Lisp for the Web and has self-published a book on Patterns in C. Other interests include modern history, music, and martial arts.
I have one major problem with this book, though: the author uses it to sell you his software, which is a bit of a sleazy move. "Oh, you've bought and read my book? Now go buy my software to put these ideas into action."
Drawing on his knowledge of psychology, he also helps us understand how our informal organisational structure and team structure shape our systems. This knowledge is key to creating an environment where we can grow solutions organically and move away from legacy systems.
On the whole this is a very good read with extraordinary techniques which will help most programmers and architects tackle legacy systems. Combined with Michael Feathers's Working Effectively with Legacy Code, this makes such work more pleasure than pain.
If you are working in a large code base or on a large team and you care about the efficiency of your code design, this is an excellent book for you. The book is not about how to design an architecture for your software, nor is it about demonstrating a perfect design pattern; it is about shifting your focus to finding defects you often neglect or don't even know are there.
If you think you have a perfect design, but you can't figure out why features take more time to implement than they used to, why more and more people are involved in implementing a simple feature, and why bugs are hard to identify, you had better think twice about that perfect design and read this book.
Most helpful customer reviews on Amazon.com
There are numerous problems with the book though:
1. The toolsets he uses as examples in the book are no longer available as binaries. You have to go through searches and download compilers for obscure languages (Clojure? Really? WTF?), plus all of their dependencies, just to get his main app to work in order to follow along with the examples (the compiler itself had to be compiled!). Those are hours of my life I'll never get back. Why put every reader through this kind of nonsense? Publish the binaries and move on...
2. As another reviewer mentioned, the correlations he draws between code history and psychological phenomena are tenuous at best. Still, there are some interesting side-stories.
3. Most of the metrics he measures are absurdly simple. Ultimately, I believe that the majority of the conclusions you will draw from his analysis are already understood (perhaps tacitly) by almost everyone on your team(s). For example, if you want to know where the trouble spots in your code base are (hotspots), just about any developer can tell you; you don't need metrics to point you in the right direction. 90% of the time the metrics will tell you what you already know.
4. The book constantly refers back to itself (remember when we went over XYZ three pages ago? Yes, I do), and it gets annoying. For such a short book, these constant back-references are unnecessary. The writing style, while generally breezy, is also a bit repetitive: it's a 182-page book that could have been written in 50 or fewer.
These criticisms aside, the book is short enough to get through in a day, and you may pick up a few gems along the way. I really liked the trend patterns he discusses, for example, and there are numerous references to other reading that can be helpful.
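For readers wondering what the "absurdly simple" hotspot metrics mentioned above boil down to, here is a minimal sketch of the core idea: counting how many commits touched each file in version-control history. The function name and the sample input are my own illustration, not the author's actual Clojure tooling.

```python
# Sketch of hotspot analysis as the reviews describe it: count revisions
# per file. The input is the text produced by
#   git log --name-only --pretty=format:
# which prints one touched file per line, with blank lines between commits.
from collections import Counter

def revision_counts(git_log_output: str) -> Counter:
    """Count revisions per file from a `git log --name-only` dump."""
    files = [line.strip() for line in git_log_output.splitlines() if line.strip()]
    return Counter(files)

if __name__ == "__main__":
    sample = "src/core.py\nsrc/util.py\n\nsrc/core.py\n"
    # Files changed most often are the candidate "hotspots".
    for path, n in revision_counts(sample).most_common():
        print(f"{n:3d}  {path}")
```

As the reviewer notes, this is elementary, which is arguably the point of the book: the raw data is one `git log` away.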
The author attempts to draw various vague analogies between forensic analysis and software development. While I think his considerations of the social aspect of development are sane and valid, I found his comparisons to be mostly contrived simplifications. His psychology-based insights, which he claims as a selling point, often involve large gaps in logic (for a technical book, at least). For example, he tries to compare serial killers and bugs... hopefully you can work out for yourself why this might be a silly analogy. One might argue that I am being too rigid, but I would say that for a profession like software engineering, which is rooted in logic, the author's lack of clear heuristic reasoning is very unsettling. He often cites some research paper in social/cognitive psychology, and then proceeds to use those principles as a foundation for his methods, without any evidence or data to back his claims. I believe he could benefit from interrogating his assumptions much more rigorously.
Furthermore, I found most of his "analysis" techniques to be disappointingly primitive. Almost all of them involve extracting some immediately available value from a VCS and then literally just looking at minimums and maximums. Another troubling observation: his approach to dealing with time series data (i.e. commit history) seems to be "hey, it's too complicated, let's just subset it to some fixed interval". While I will admit this is a somewhat reasonable approach, it effectively throws away all information outside the interval. That may be what you want, but sometimes it is not, and the author appears to give no consideration to this information loss. Another red flag: he doesn't normalize his data when appropriate (revision frequency, for example). Come on, dude, seriously! That's elementary statistics! I am by no means an expert statistician myself, so it's troubling that even someone with my limited knowledge can spot these errors.
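The normalization complaint is easy to illustrate: raw revision counts favour large files, since a 5,000-line module naturally attracts more commits than a 100-line utility. One common fix (my own sketch, not a method from the book) is to divide revisions by file size:

```python
# Illustrative normalization: revisions per line of code, so a small file
# changed ten times ranks as "hotter" than a huge file changed ten times.
# The names (normalized_churn, loc) are hypothetical, not from the book.
def normalized_churn(revisions: dict, loc: dict) -> dict:
    """Revisions divided by lines of code; files with unknown or zero size are skipped."""
    return {f: n / loc[f] for f, n in revisions.items() if loc.get(f)}

if __name__ == "__main__":
    revisions = {"big_module.py": 10, "small_util.py": 10}
    loc = {"big_module.py": 5000, "small_util.py": 100}
    # small_util.py scores 50x higher despite the same raw revision count.
    print(normalized_churn(revisions, loc))
```

Whether this is the right normalization depends on what question you are asking, which is exactly the kind of discussion the reviewer finds missing.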
I'm not saying it's a terrible book; it is a decent read. But certainly take what he says with a grain of salt.