18 of 20 customers found the following review helpful
- Published on Amazon.com
Format: Hardcover
In the Preface, the author states that there are many good books on statistical thermodynamics, and that this is not a textbook on the subject. However, though a few other books make use of the concept of "information" in statistical thermodynamics, this seems to be the first one that bases the full construction of the theory on information. For this reason, the approach by Arieh Ben-Naim is really modern and deserves a careful reading.
Personally, I disagree: I think this book can very well be used as a textbook. Indeed, it is quite self-contained and builds the core of the theory step by step, in such a way that any student is able to follow the arguments. It is true that it does not contain everything, but what textbook really does? Perhaps the only unpleasant thing for a student is the non-negligible amount of time spent commenting on the differences with other well-known references, in particular the explanations of the probable reasons why Gibbs did not reach the very same results. But comments like these are a valuable resource for teachers, on the other hand.
If I had to choose one thing in this book, I would recommend enjoying the derivation of the Sackur-Tetrode equation (chapter 4): it is really beautiful and does not have the "shadows" that classical derivations suffer from. For the very first time, I should say, I think I have understood it, thanks to this book.
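For readers who want to know what is being derived: the Sackur-Tetrode equation gives the absolute entropy of a monatomic ideal gas. In one standard form (my notation, not necessarily the book's),

S = N k_B \left[ \ln\!\left( \frac{V}{N \Lambda^{3}} \right) + \frac{5}{2} \right], \qquad \Lambda = \frac{h}{\sqrt{2 \pi m k_B T}},

where \Lambda is the thermal de Broglie wavelength. The charm of the informational route is that each piece of this expression can be read as a contribution of missing information: where the particles are, what momenta they have, plus the corrections due to quantum uncertainty and indistinguishability.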
However, the most important point of the whole book, the real starting point of the full construction, is the following. Shannon's measure H of the missing information (MI) is a more general concept than the entropy S, which is a thermodynamic quantity defined _only_ for equilibrium states: H can be defined for _any_ probability distribution, and it turns out that H = S for the equilibrium state. Hence, though in thermodynamics only changes of S between different equilibrium states are defined, one can use the properties of H to perform derivations in a more general context. The results, when applied to thermodynamic equilibrium states, will also be valid for the entropy S.
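To fix the notation (these are standard definitions, not quotations from the book): for any probability distribution p_1, ..., p_n,

H = -\sum_{i=1}^{n} p_i \ln p_i .

H is maximal for the uniform distribution; over W equally probable micro-states, p_i = 1/W gives H = \ln W, which is Boltzmann's S = k_B \ln W up to the multiplicative constant k_B. For any other distribution over the same micro-states H < \ln W, and this is precisely the sense in which H extends S away from equilibrium.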
As the author explained in his introductory book Entropy Demystified: The Second Law Reduced to Plain Common Sense, the fact that S can only increase is an experimental observation. Its explanation is provided by a framework in which matter is discrete and composed of intrinsically indistinguishable particles, together with the postulate of equally probable microscopic states and the postulate that the system will be found more often in macroscopic (or "dim") states with higher probability, the latter being the sum of the probabilities of all the practically indistinguishable microscopic states they contain.
What is known as a "thermodynamic equilibrium state" is really a set of "dim states" (or "macro-states", following Gibbs) for which the measurable quantities (which are inherently macroscopic) differ only by negligible amounts, so that they are practically (though not in principle) indistinguishable. In turn, these dim states are (in principle) different because each contains all the "micro-states" (or "specific states", for Ben-Naim) that feature the very same values of the observable quantities. Under the assumption that all microscopic states are equally probable (dating back to Boltzmann and fully used by Gibbs), it turns out that the macro-states containing more micro-states are more probable, so that the system will spend more time in them. The family of macro-states around the macro-state with the maximum number of micro-states is what is called the thermodynamic equilibrium state.
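A minimal counting example (mine, in the spirit of Entropy Demystified, not taken from this book): put N labeled particles in a box and let a dim state record only the number n found in the left half. The number of micro-states compatible with that dim state is the binomial coefficient

W(n) = \binom{N}{n} = \frac{N!}{n! \, (N-n)!},

which peaks sharply at n = N/2. For N of the order of 10^{23}, the dim states in a tiny neighbourhood of n = N/2 contain overwhelmingly more micro-states than all the others combined, so a system visiting micro-states with equal probability is essentially always found there: that narrow family is the equilibrium state.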
The entropy S is defined _only_ for this set of macro-states, that is, for the equilibrium state. However, Shannon's measure of the (missing) information is defined for _each_ individual macro-state. That is why H is more general than S. Following these arguments, the full theory of statistical thermodynamics can be built, as you will find in the book by Arieh Ben-Naim.
18 of 21 customers found the following review helpful
- Published on Amazon.com
Format: Hardcover
In the Introduction, Ben-Naim greets us with a teaser: those of us who think that the (ideal) entropy of mixing is positive have a problem. We should take our medication and read this book till we come to our senses and realize that the entropy of mixing is zero. But first things first.
Ben-Naim uses Shannon's entropy (information) to re-interpret statistical thermodynamics. This has been done before, most notably by Jaynes, who is quoted throughout the book. Ben-Naim, however, goes further to argue that all other interpretations of entropy are wrong. The very term "entropy," he argues, is part of the problem: it means nothing and should be abandoned. Even "enfometry" or "average surprisal" would be more meaningful terms. From here on, the book goes back and forth between a technical presentation of information theory and a repetitive litany of arguments on semantics. So, what about the entropy of mixing? The argument is laid out in section 6.7. It is long and somewhat rambling ("we already feel in our bones that some information has been lost forever in the process..." p. 279), but it goes something like this:
Consider the classical mixing experiment: a box divided into two parts, each filled with a different ideal gas. Remove the partition and let the system equilibrate. Ben-Naim argues that the corresponding increase of entropy arises not from mixing but from the expansion of the gases (each gas has more volume to roam). Mixing is entirely incidental, he argues, as relevant to this process as the shape of the container into which the gases expand. If we compress the mixed gases isothermally to half the volume of the mixture, their entropy becomes the same as that of the pure gases before mixing. Ergo, the entropy of mixing is zero! What the rest of us call "entropy of mixing," Ben-Naim continues, is a bad application of a bad term: "naming a quantity 'entropy of mixing' is more than naming a person or a thing by an arbitrarily chosen name. It is both inappropriate and potentially misleading" (p. 274). We wouldn't call it "entropy of squaring" if gases happened to expand in a square vessel; why then call it "entropy of mixing" if gases happen to mix as they expand? Thus goes the argument. None of this requires information theory, by the way. It can be made with undergraduate classical thermodynamics.
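For what it is worth, here is a back-of-the-envelope version of that bookkeeping (my reconstruction under the usual ideal-gas assumptions: n moles of each gas, each initially in a half-volume V at the same temperature). Removing the partition lets each gas expand from V to 2V, so

\Delta S_{\mathrm{expansion}} = nR \ln\frac{2V}{V} + nR \ln\frac{2V}{V} = 2nR \ln 2 .

Compressing the 2n moles of mixture isothermally from 2V back to V gives \Delta S_{\mathrm{compression}} = 2nR \ln(V/2V) = -2nR \ln 2, so the net entropy change between the two unmixed gases and the compressed mixture is zero. On Ben-Naim's reading, the whole 2nR \ln 2 belongs to expansion and none of it to mixing.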
Does this view advance our understanding of nature? Is mixing as incidental to the expansion experiment as Ben-Naim argues? Suppose it is. Then we should be able to undo the process and restore the unmixed state by compression. But that is not possible: the molecules of gas A would have to be compressed into the left half of the box and the molecules of gas B into the right, and this cannot be done because we don't know where the molecules are.
It is a pity that these irrelevant arguments undermine the premise of the book, which is (ought to be) to argue for an information theory of thermodynamics. Fortunately, this was done fifty years ago by Jaynes in his classic paper "Information Theory and Statistical Mechanics" (Physical Review, vol. 106, p. 620, 1957), also included in the collection E.T. Jaynes: Papers on Probability, Statistics and Statistical Physics (Synthese Library).