Top critical review
Of the many variables that go into making a good software developer hire
on 16 August 2015
This book contains an unacceptably large number of problems: errors in the solutions (in the code, in the descriptions, sometimes both in the same exercise!), plain poorly written code, bad or missing assumptions, and more. I noticed that the 5th Edition has a dedicated errata spreadsheet online. Seriously, a spreadsheet and not a cloud issue tracker? Doesn't that tell you something? Hell, I learnt not to conflate spreadsheets and databases in the 80s.
It actually confirms everything I've concluded about coding interviews in principle (from experience on both sides of the table): they're ineffective, artificial, badly designed and myopic. Of the many variables that go into making a good software developer hire, coding interviews ignore almost all of them and create a false sense of effectiveness. Coding tests are great at filtering for people who are good at taking coding tests, and that's all; they don't filter out the crazies, the lazies, the sloppies, the messies, the academics, the unwise and so on. When you want to hire an accountant, do you give them an accountancy test? Would you test someone who's been in the industry for decades on material they last encountered decades ago? Does failing such a test mean they couldn't use that material in a normal context, without a timer and a potential employer literally breathing down their neck? In real life, would you really not look up a breadth-first search algorithm online, rather than writing it out from memory?
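For the record, this is the sort of exercise in question. A breadth-first search is a few lines once you look it up, which is rather the point; a minimal sketch (the `graph` adjacency-list shape and names here are my own, not from the book):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns nodes in the order visited.

    `graph` maps each node to a list of its neighbours
    (a hypothetical adjacency-list representation).
    """
    order = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                order.append(neighbour)
                queue.append(neighbour)
    return order

# A small diamond-shaped graph:
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(g, "A"))  # ['A', 'B', 'C', 'D']
```

Knowing when to reach for this, and what its trade-offs are, matters far more in practice than reproducing it under a stopwatch.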
I blame Microsoft for this disease that infected the software engineering industry. In the 90s, their bureaucracy produced bad ideas, and the industry adopted them on the assumption that the amount of $$$ Microsoft was making must mean the ideas were good. Of course, common sense says that profitability came from first-mover advantage, predatory business practices and a monopoly position, and we now know that for decades they've been an astoundingly dysfunctional bureaucracy, which explains why this type of book is still needed to this day.
There are other types of test that are more sensible, designed around the idea of "do you understand this code" rather than "can you write this code". There are issues with those tests too, as they can focus on ridiculously obscure aspects of languages that any developer with decent practices would avoid almost entirely, and which often exist only because of the poor design of the language itself (C++, I'm looking at you). I'd use the word 'myopic' again, and not just for the obscurity: consider the fixation on 'languages'. It's more important to test knowledge, understanding and adoption of good principles, and in some contexts the language matters less than the platform or the frameworks. To me, software engineering is a craft, not a science.
In my experience, the only variables that correlate with performance are the strength of the references (check these before any interviews, to avoid confirmation bias) and past projects (code samples, and being able to discuss them in depth). Hammer away at these to get a sense of the truth. Trial periods with dedicated tasks are an excellent approach, too.
The irony of this book is that it suggests the author would be a bad hire: the sheer number of errors and issues displays a lack of dogfooding, poor testing, and a lack of thought about the subject matter; it would waste company $$ on pointless exercises; and it has required numerous editions to reach a state that's still problematic. Still, if you want that job and need to dance the corporate dance, this book is as good a guide as any.