I highly recommend this book to two audiences: (a) instructors looking to construct a strong course on "introduction to social science statistics" from a Bayesian perspective; and (b) social science researchers who were trained in a classical framework and wish to learn the foundations of a Bayesian approach without first needing a refresher in differential calculus. (I expect it would also be of interest to many physical science and engineering researchers whose methods are not highly divergent from social science, e.g., biologists and operations engineers, but I can't speak authoritatively about that.)
I'm a practicing social science researcher and have wanted for years to learn Bayesian methods deeply - I've used them in applied settings but without complete understanding. My quest to learn Bayesian methods more rigorously has been persistently stymied by texts that demand analytic solutions for priors and posteriors, that focus excruciatingly on specific problems with little attention to generalization, or that skip huge swaths of exposition, leaping from a toy problem to a complex one with little indication of the path between them. Dr. Kruschke's text avoids all of those problems. It is remarkable for building intuition from basic principles, for avoiding page after page of integrals, and for keeping its applications extremely clear.
The book starts by laying out the core intuitions of Bayes's rule - instead of merely stating it (and don't we all think we know it by now?), it leads the reader through some applied examples with frequency tables. Simple? Yes; but also valuable to force oneself through. It then builds upon this knowledge systematically, going through the requisite coin toss examples - but unlike most texts, connecting them clearly to real-world examples of binomial problems. And it proceeds from there, ending up with Bayesian versions of ANOVA-type problems and logistic regression.
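To give a flavor of the frequency-table and coin-toss style of reasoning described above, here is a minimal sketch of a discrete Bayesian update for a coin's bias. This is my own illustrative code, not the author's (the book itself works in R, and all grid values and counts here are made-up assumptions):

```python
# Illustrative sketch (not from the book): Bayes's rule on a discrete grid
# of candidate coin biases, starting from a uniform prior.

def posterior_over_bias(thetas, prior, heads, tails):
    """Return P(theta | data) on a discrete grid of candidate biases."""
    # Likelihood of the observed flips under each candidate bias
    # (binomial kernel; the binomial coefficient cancels on normalization).
    likelihood = [t ** heads * (1 - t) ** tails for t in thetas]
    unnormalized = [lk * p for lk, p in zip(likelihood, prior)]
    # P(data): the marginal probability of the data under the prior.
    evidence = sum(unnormalized)
    return [u / evidence for u in unnormalized]

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]   # candidate coin biases (assumed grid)
prior = [0.2] * 5                    # uniform prior over the grid
post = posterior_over_bias(thetas, prior, heads=7, tails=3)
```

After observing 7 heads in 10 flips, the posterior concentrates on the candidate bias of 0.7, which is the same updating logic the book's frequency-table examples walk through by hand.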
There are two other salient and important features of the book. First, the exercises are particularly well-chosen to reinforce the key points and demonstrate applications. I strongly recommend working your way through them. In my case, for instance, they forced me to confront my understanding of things like the "prior likelihood of the data" - a core concept that I thought I understood but really didn't until I had to solve some actual problems.
Second, the book is closely linked to the R statistics environment - surely the most popular tool among Bayesian statisticians - and has sample programs that are illustrative, useful, and that actually work. If you do Bayesian work, you're probably going to use R, and these examples will help immensely in building the set of tools you'll need.
Finally, and just to be clear, there is one audience I would steer away: if you're looking for a highly mathematical treatment of Bayesian methods, this is not the right book. It is a didactic text, not a reference manual or a set of derivations.
Good luck to you as a reader, and thank you to the author!