TL;DR summary of the review - awesome book. If you work with real-world datasets, or you work with people who do, you owe it to yourself to read this book. I wish it had been around eight years ago, when I started working with large-scale social sciences census data. All of the fun, and all of the pain, of dealing with government data and social sciences data applies doubly to census information.
Much of the book could be summed up as noting that less-than-perfect data is still very useful, but you need to understand in what way the data is bad - are the errors random? What kinds of bias do they introduce, if any? What impact will that have on your conclusions? Go get your hands dirty with the data itself - go look at a few hundred records in a text editor to see what you've got. You'll want to test the data all through your analysis, so that you can identify both where you're hitting issues and where you're introducing issues yourself, and you'll be happier if you automate these tests so you can run them often without creating a burden for yourself. Prefer simple tools and portable file formats - in particular, Excel is not your friend. The book presents a number of case studies and anecdotes about dealing with data that has problems of one flavor or another. The authors have been there before, and you can learn from their experience.
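To make the "automate your tests" point concrete, here's the flavor of check I mean - a minimal Python sketch, where the file name and column names are hypothetical rather than anything from the book:

    import csv

    # Hypothetical sanity checks on a census-style CSV extract. The file
    # name and column names are made up for illustration.
    def check_rows(path):
        problems = []
        with open(path, newline="") as f:
            # Data starts on line 2; line 1 is the header.
            for line_no, row in enumerate(csv.DictReader(f), start=2):
                if not row["state_fips"].strip():
                    problems.append((line_no, "missing state FIPS code"))
                if not row["population"].isdigit():
                    problems.append((line_no, "non-numeric population: %r" % row["population"]))
        return problems

    for line_no, message in check_rows("census_extract.csv"):
        print("line %d: %s" % (line_no, message))

Run something like this after every transformation step; when a new complaint shows up, you know exactly which step introduced it.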
Discussions of social sciences survey data and its inherent imperfections and messy metadata definitely rang true with my experiences dealing with census data, as did the chapter on the lowly, undervalued flat file as a data structure.
I'll summarize three takeaway messages that resonated with my own experience:
1. It's generally easy to do some basic analysis of your data to look for problems - gaps, inconsistencies, unusual distributions - and doing so will give you insight into what you're dealing with. Going through your actual data file, rather than trusting the metadata and documentation, is the only way to really know what sort of issues are lying in wait. (A small profiling sketch follows this list.)
2. There's lots of interesting data that's structured for human consumption rather than machine-driven analysis. Restructuring it into a format that's more amenable to machine analysis can be tedious, but it's also automatable. Rather than converting a huge list of documents by hand, write some code to restructure them. This notion is explored in chapter 2, where the code is in the R stats language. R is a good fit for two-dimensional data such as tables, whereas the base unix tools (perl, sed, awk) tend to be line-oriented. However, there's nothing here that can't be done in awk too. Don't shy away from writing code to transform data into something useful, and expect that to be an iterative process. (A restructuring sketch follows this list.)
3. Oftentimes, "plain text" files are anything but. You can find "plain text" files that are ASCII, or UTF-8, or ISO-8859, or CP-1252, all of which will look the same until you start to run into non-English characters. I've seen this in dealing with internationally-sourced data, and even in US data that includes Puerto Rico. The author provides some guidance about how to deal with this in chapter 4, but more importantly, they discuss the fact that it's a surprisingly and frustratingly complex problem that you need to be aware of. Another issue is that data generated from a web app may contain text that's been encoded or escaped to avoid SQL injection or cross-site scripting attacks. These are web app best practices, and it's generally easy to get the text back to plain form once you know what you're looking at. The chapter's code samples are in Python, which has strong library support for text transformation, but the main point is learning to identify these kinds of problems in your input data. (A decoding/unescaping sketch follows this list.)
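On the first point, here's the kind of quick profiling I have in mind - again a Python sketch with a hypothetical file and column name:

    from collections import Counter
    import csv

    # Profile one column of a CSV file: count distinct values and blanks,
    # then show the most common values. Names are hypothetical.
    def profile(path, column):
        values = Counter()
        blanks = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                value = row[column].strip()
                if value:
                    values[value] += 1
                else:
                    blanks += 1
        print("%s: %d distinct values, %d blanks" % (column, len(values), blanks))
        for value, count in values.most_common(10):
            print("  %8d  %s" % (count, value))

    profile("census_extract.csv", "household_income")

Gaps and sentinel values (a suspicious spike at 99999, say) jump right out of a top-ten list like this.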
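On the second point, the book's chapter 2 code is in R; here's the same idea sketched in Python instead, against a made-up human-readable report format with one County/Population pair per record:

    import csv
    import re
    import sys

    # Flatten a hypothetical human-readable report into CSV rows. Each
    # record in the input looks like:
    #   County: Adams
    #   Population: 18741
    writer = csv.writer(sys.stdout)
    writer.writerow(["county", "population"])
    record = {}
    with open("county_report.txt") as f:
        for line in f:
            match = re.match(r"\s*(County|Population):\s*(.+)", line)
            if match:
                record[match.group(1).lower()] = match.group(2).strip()
            # Once both fields are present, emit a row and start over.
            if "county" in record and "population" in record:
                writer.writerow([record["county"], record["population"]])
                record = {}

The first version of a script like this always misses some records; that's the iterative process - run it, eyeball the output, fix the pattern, repeat.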
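And on the third point, a sketch of both halves: guessing among likely encodings (in practice you might reach for a detection library like chardet, but the ordering trick below often suffices) and undoing typical web-app escaping. The file name is hypothetical:

    import html
    from urllib.parse import unquote

    # Try likely encodings in order of strictness: UTF-8 rejects most
    # mis-encoded bytes, CP-1252 rejects a handful, and ISO-8859-1 accepts
    # every byte, so it serves as the catch-all.
    def decode_guess(raw_bytes):
        for encoding in ("utf-8", "cp1252", "iso-8859-1"):
            try:
                return raw_bytes.decode(encoding), encoding
            except UnicodeDecodeError:
                continue

    # Undo the escaping a web app typically applies: percent-encoding
    # first, then HTML entities.
    def unescape_web(text):
        return html.unescape(unquote(text))

    with open("survey_responses.txt", "rb") as f:
        text, encoding = decode_guess(f.read())
    print("decoded as", encoding)
    print(unescape_web("Mayag%C3%BCez &amp; San Juan"))  # -> Mayagüez & San Juan

The guess order matters: ISO-8859-1 will happily "decode" anything, so putting it first would silently mangle genuine UTF-8.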
My only negative is that, as a collection of individual essays, the writing style and tone are all over the map. All in all, this is a book that I enjoyed reading, and one I've recommended to other software developers starting to work with data scientists.