Confusions over Reduction and Emergence
in the Physics of Phase Transitions

John D. Norton
Center for Philosophy of Science
Department of History and Philosophy of Science
University of Pittsburgh

This note draws on and extends work in:
John D. Norton, "Approximation and Idealization: Why the Difference Matters" Philosophy of Science, 79 (2012), pp. 207-232.
John D. Norton, "Infinite Idealizations," Prepared for Vienna Circle Institute Yearbook (Springer: Dordrecht-Heidelberg-London-New York), Section 4.


There is an enduring confusion underway in the philosophy of statistical physics concerning phase transitions. One group finds phase transitions to be an instance of a successful reduction. Another finds them to be a clear case of emergence, where emergence is normally understood to contradict reduction. Both cannot be right, it would seem. Yet both groups have quite a solid grasp of the philosophical literature on reduction and emergence and of the physics of phase transitions.

How can this be?

Both groups are right.

What is being overlooked is that the terms reduction and emergence always relate different levels of description. The apparently conflicting claims of reduction and emergence are compatible after all, since they relate different levels.

• The successful claim of reduction pertains to the levels of thermodynamic and statistical mechanical description of thermal systems. An augmented Nagel-style reduction obtains between them.

• The successful claim of emergence pertains to levels of few component and many component descriptions of thermal systems. Phase transitions are emergent when we proceed from the few component to the many component level of description.

See Jeremy Butterfield, "Less is Different: Emergence and Reduction Reconciled," Foundations of Physics, 41 (2011), pp. 1065-1135.

Here I concur with Jeremy Butterfield's contention that we can have both reduction and emergence in the case of phase transitions. The phase transition properties emerge "before" or "on the way to" the limit, when we are still considering systems with finitely many components, in the context of a Nagel-style reduction between thermodynamic and statistical descriptions.

What I add to Butterfield's analysis is that the reconciliation must take note of the multiple senses of "level" invoked and the fact that the parties in contention refer to different levels. One sense of level arises when we demarcate scientific knowledge into theories, defined as logically closed sets of sentences, after the older manner of philosophers. The other arises when we divide our scientific knowledge according to scale, as do physicists. Differences of component number provide the relevant scale for demarcating condensed matter physics.

Lest there be confusion: I do not see any grounds for saying that one reading is more correct than the other. They are just different.

The Three Levels

Here are the three levels. The first two levels reside in one realm of description:

The individual components are molecules, spins, etc.

A. Molecular-Statistical Description.
The thermal system is described by a phase space formed by the canonical positions and momenta of the individual components. The basic quantities at this level are functions on the phase space. They include the Hamiltonian, the canonical probability distribution and the partition function. The canonical entropy, free energy and other similar quantities are recovered through operations on the partition function.

A1. Few Component Molecular-Statistical Level.
This level describes the behavior of just one or a few of the components. In the narrowest case, it is limited to the phase space of just those few components.

A2. Many Component Molecular-Statistical Level.
This level describes the behavior of the totality of the very many components in the system. It includes the description of the system over the full phase space of all components.

B. Thermodynamic Level.
The thermal system is described by a state space formed by the macroscopic state variables such as pressure, volume, temperature and density. Thermodynamic properties such as internal energy, free energy and entropy are functions on the state space.

Where Reduction Succeeds:

Ernest Nagel, The Structure of Science: Problems in the Logic of Scientific Explanation. New York: Harcourt, Brace & World, Inc., 1961. Ch. 11.

In philosophy of science, the venerable notion of reduction was defined by Nagel. According to it, we would say that statistical mechanics reduces thermodynamics if we can deduce thermodynamics from statistical mechanics. Nagel's definition proved too strict for real use. One can rarely deduce exactly the results of a higher level theory from a lower level one. For example, one cannot deduce from a statistical mechanical analysis that the entropy of a closed thermal system never decreases, as the thermodynamic level requires. One can only conclude that it is very probably so.

Kenneth F. Schaffner, "Approaches to Reduction," Philosophy of Science, 34 (1967), pp. 137-147.

The Nagel account can readily be augmented to accommodate such complications. We require that the reducing theory entail a suitably close surrogate theory for the reduced theory. Schaffner has elaborated such an augmentation.

The ingenious and powerful methods of renormalization group theory have provided a successful reduction of the thermodynamics of phase transitions in this augmented Nagel sense.

The renormalization group methods proceed at the molecular-statistical level A2 of many components. The formal setting is the phase space of the entire system and the canonical properties recovered for it from the standard analysis. They then take reduced Hamiltonians to form a new space in which the renormalization group flow proceeds. From it, they infer various thermodynamic quantities, including the critical exponents that attach to various universality classes.
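The character of this flow can be shown in miniature. The sketch below is my own illustration, not drawn from the texts discussed here: it iterates the exact decimation recursion for the zero-field one-dimensional Ising chain, K' = (1/2) ln cosh 2K, obtained by summing out every other spin. The coupling flows to the high-temperature fixed point K = 0, which is the renormalization group's way of reporting that the one-dimensional chain has no finite-temperature phase transition.

```python
import math

def decimate(K):
    """One renormalization group step for the zero-field 1D Ising chain:
    summing out every other spin yields a new effective coupling K'."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # strong initial coupling (low temperature)
for step in range(10):
    K = decimate(K)
# K flows toward the K = 0 fixed point: no ordered phase survives in 1D.
print(K)
```

In richer models the flow takes place in a larger space of effective Hamiltonians, and it is the behavior near nontrivial fixed points of that flow that yields the critical exponents; the one-dimensional chain is simply the smallest case in which the recursion can be written exactly.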

The details of this analysis are massive and even intimidating. However the broad outline is not. The analysis proceeds in the many component molecular-statistical level A2 and derives a surrogate of the results of the thermodynamical level B. It is a successful Nagel-style reduction.

There is more to say, but it is all at the level of lesser distractions.

First, much has been made of the fact that renormalization group methods routinely take the limit of the number of components to infinity. Many authors, including Butterfield and me, have argued that the state of the fictitious system with infinitely many components--whatever that may be--is not what is wanted. What matters is the state of the system with very many, but still finitely many, components.

This has been argued at some length in my "Approximation and Idealization..."

We do generate singularities in taking these infinite limits and they may be useful in some formal analyses. However the systems of infinitely many components are at best convenient adjuncts. At worst they are dangerously irrelevant, for they can bear properties radically different from the real thermal systems modeled; or may even be required to carry an inconsistent set of properties.
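A toy analogue, my own and not the actual thermodynamic-limit calculation, shows how such singularities arise only in the limit. Take the family f_N(x) = (1/N) ln(e^Nx + e^-Nx), a stand-in for a free energy per component. Every finite-N member is a smooth, analytic function, yet the limit function is |x|, which has a kink at x = 0.

```python
import math

def f(N, x):
    """Smooth stand-in for a 'free energy per component' at finite N.
    Computed in log-sum-exp form for numerical stability."""
    a = N * abs(x)  # f is even in x, so work with |x|
    return (a + math.log(1.0 + math.exp(-2.0 * a))) / N

# At the would-be singular point x = 0 the value is ln(2)/N, which
# smoothly shrinks to 0; only the N -> infinity limit |x| has a kink.
for N in (1, 10, 100, 1000):
    print(N, f(N, 0.0))
```

The moral matches the text: the nonanalyticity is a property of the limit function, not of any finite-N system on the way to it.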

Second, the Nagel model of reduction is now showing its age. It was formulated at a time when relations among theories were explicated largely or even solely in terms of deductive relations between propositions. There are more useful notions of reduction. Broadly I divide them into two classes. The first is ontological reduction, in which we find that the processes of one level are nothing but processes at another level, without assuming anything about how the theories prevailing in one level may entail results in the theories of the other level. The second is explanatory reduction, in which we find that the theory of processes at one level can provide explanations of processes at another level. This sense is more demanding than Nagel's. In some cases, a deduction of one theory from another may be possible in principle, but the actual derivation might be so burdensome as to be useless for purposes of explanation.

Where Emergence Succeeds:
Few Components-Many Components

The notion of emergence has proven quite recalcitrant to precise characterization. The key notion, however, seems to be one of unexpected appearance of properties when one proceeds from one level to another.

That key notion arises in the analysis of phase transitions when we proceed from the level A1 of few components in the molecular-statistical analysis to A2 of many components in the molecular-statistical analysis. The most important qualitative result of recent work in phase transitions is that they can only be treated well if we move from the few to the many component level.

Another striking example of this sort of emergence is provided by the singlet state in quantum mechanics. It is, loosely speaking, a state consisting of two particles. However, specifying the state of each particle individually fails to capture the distinctive relations between the two particles that distinguishes the singlet state from, say, two uncorrelated particles or two particles in a triplet state.
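This can be checked directly. In the sketch below, a routine quantum mechanical computation of my own, tracing out the second particle of the singlet leaves the first in the maximally mixed state I/2. That is exactly the one-particle state one would assign to a suitably uncorrelated particle, so the individual one-particle descriptions cannot register what distinguishes the singlet.

```python
import numpy as np

# Singlet state of two qubits: (|01> - |10>) / sqrt(2),
# in the basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)          # full two-particle density matrix

# Partial trace over the second qubit: reshape to indices [a, b, c, d]
# (a, c for qubit 1; b, d for qubit 2) and sum over b = d.
rho4 = rho.reshape(2, 2, 2, 2)
rho1 = np.trace(rho4, axis1=1, axis2=3)

print(rho1)  # the maximally mixed state I/2
```

All the information that marks the state as a singlet resides in the correlations between the particles, not in either particle's state taken alone.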

We cannot recover a proper analysis of phase transitions merely by looking at the behavior of a few components. That was the lesson of the failure of the mean field theory approach. It considered only a few components and sought to represent the many remaining components by the mean field they generated. Since critical phenomena in phase transitions are inherently fluctuation phenomena, what matters is not the mean field, but the deviations from it that comprise the critical phenomena.
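The failure can be seen in a minimal sketch, my own, of the standard mean field treatment of the Ising magnet, in which each spin feels only the average of its neighbors. The self-consistency equation m = tanh(beta J z m), with z the number of neighbors, predicts spontaneous magnetization whenever beta J z > 1. For the one-dimensional chain (z = 2) this wrongly predicts a finite-temperature transition, whereas the exact solution shows none: the fluctuations that mean field theory discards destroy the order.

```python
import math

def mean_field_m(beta_Jz, iters=200):
    """Solve the mean field self-consistency equation m = tanh(beta*J*z*m)
    by fixed-point iteration, starting from a nonzero seed."""
    m = 0.5
    for _ in range(iters):
        m = math.tanh(beta_Jz * m)
    return m

# Mean field predicts an ordered phase whenever beta*J*z > 1 ...
print(mean_field_m(1.5))   # nonzero magnetization
# ... and a disordered phase when beta*J*z < 1.
print(mean_field_m(0.5))   # magnetization iterates to zero
```

The arithmetic is the same in every dimension; that is precisely the problem, since the exact behavior of the model depends sensitively on dimension through the fluctuations.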

Hence, renormalization group methods succeed because they consider the totality of components present and do not seek to recover the properties of the totality by simply scaling up the properties of just a few components.

Phase transition phenomena are emergent in this transition from the few to the many. It is generally assumed that emergence betokens a failure of reduction. However, the definitions of reduction and emergence are sufficiently loose and varied that no such connection can be assured.

In this case, the emergence of phase transition phenomena does also comprise a failure of reduction in the augmented Nagel sense. For one cannot deduce even in surrogate the interactions of very many components undergoing phase transition from the properties of just a few of them taken in isolation.

On Few-Many Emergence

The notion of few-many emergence indicated here is not one that has attracted much attention in the celebrated case studies, such as the question of the emergence of mind from brain or genes from DNA. However it has a quiet but persistent presence.

Its possibility has always been present in the accounts of emergence extending back to the nineteenth century, when the notion of emergence was first developing. Here's C. D. Broad introducing the "Theory of Emergence":

C. D. Broad, The Mind and Its Place in Nature, London: Routledge & Kegan Paul, 1925. Ch. 2.

"On the first form of the theory the characteristic behaviour of the whole could not, even in theory, be deduced from the most complete knowledge of the behaviour of its components, taken separately or in other combinations, and of their proportions and arrangements in this whole. This alternative, which I have roughly outlined and shall soon discuss in detail, is what I understand by the "Theory of Emergence." ... the characteristic behaviour of Common Salt cannot be deduced from the most complete knowledge of the properties of Sodium in isolation; or of Chlorine in isolation; or of other compounds of Sodium, such as Sodium Sulphate, and of other compounds of Chlorine, such as Silver Chloride."

The key element is the introduction of a level in which the components are "taken separately" or "in isolation." That is the level of the few.

The second part of Nagel's celebrated chapter that discusses reduction gives a treatment of emergence. He considers in some abstraction an object composed of elements. Emergence of a property of the object arises when we cannot deduce the property from those of the elements taken "in isolation." (p.367, reference above) The scare quotes around "in isolation" are Nagel's.

P. W. Anderson, "More is Different," Science, 177 (4047) (1972), pp. 393-396. On p. 393.

A canonical source for the notion of emergence in physics is Anderson's celebrated "More is Different" of 1972. Symmetries abound within few component systems in physics. The pyramidal ammonia molecule can invert by quantum tunneling. The transformation is something like an umbrella turning inside out in a windstorm. This is a kind of symmetry in that the system is free to move between the two states, the original and the inverted. These symmetries do not persist when we have many components; they are broken. A large sugar molecule cannot spontaneously transform into its stereoisomer, which would be the analog of the inversion of the small ammonia molecule. The general moral seems to be:

The constructionist hypothesis is defined implicitly in the remark:
"...the reductionist hypothesis does not by any means imply a "constructionist" one: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe."

"The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity, entirely new properties appear..."

Further details do not matter here. The "more" that delivers the "different" is still being described in molecular terms. Anderson talks of the "reductionist hypothesis." I did not find him using the word "emergence" to describe its failure. However, insofar as Anderson is describing emergence, it is emergence in the transition from few to many, with both described in molecular terms.

Leo Kadanoff, "More is the Same; Phase Transitions and Mean Field Theories." Journal of Statistical Physics, 137 (December 2009), Issue 5-6, pp. 777-797. DOI 10.1007/s10955-009-9814-1 or arXiv:0906.0653 [physics.hist-ph]

Phase transitions are offered by Anderson as an example of how more is different. Leo Kadanoff agrees. He remarks on the title of his own later paper, "More is the Same; Phase Transitions and Mean Field Theories":

"The title of this article is a hommage to Philip Anderson and his essay "More is Different" ... which describes how new concepts, not applicable in ordinary classical or quantum mechanics, can arise from the consideration of aggregates of large numbers of particles. Since phase transitions only occur in systems with an infinite number of degrees of freedom, such transitions are a prime example of Anderson's thesis."

The remark "...phase transitions only occur in systems with an infinite number of degrees of freedom..." is a memorable turn of phrase, but also an extremely dangerous one for anyone who takes it literally. Real systems undergoing phase transitions, such as ice to water, have only a finite number of degrees of freedom as a result of their atomic constitution. If we take the remark literally, we have to conclude that observations of phase transitions have refuted the atomic theory of matter.

On Few-Many Reductions

One may wonder if this sense of emergence is so pervasive that one cannot seriously entertain its failure. For anyone inclining in this direction, it is informative to review a selection of examples of successful few-many reductions. They do happen, sometimes with considerable importance.

For more on the ideal gas law and extraordinary range of its application, see The Fastest, Simplest, Quickest Derivation Ever of the Ideal Gas Law

In statistical physics, the prominent example is the ideal gas law. It is derived by considering the average pressure exerted by a single component over time. The pressure of the total system is then recovered merely by multiplying that pressure by the number of components.
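The scaling step can be made concrete in a small numerical sketch; the function name and parameters are my own. One component's time-averaged force on a wall of a box of side L is m*vx^2/L (momentum transfer 2*m*vx per hit, hit rate vx/2L). Summing that single-component result over N components and dividing by the wall area recovers P V = N k T, since equipartition gives the average of m*vx^2 as kT.

```python
import math
import random

def pressure_over_NkT(n_particles=100_000, kT=1.0, m=1.0, V=1.0, seed=0):
    """Kinetic estimate of P*V/(N*kT) for an ideal gas in a cubic box.
    The total pressure is just the single-component time-averaged wall
    force m*vx^2/L, summed over all components, divided by the wall area."""
    rng = random.Random(seed)
    sigma = math.sqrt(kT / m)        # Maxwell distribution of vx
    L = V ** (1.0 / 3.0)
    force = sum(m * rng.gauss(0.0, sigma) ** 2 / L
                for _ in range(n_particles))
    P = force / L ** 2               # pressure on one wall
    return P * V / (n_particles * kT)

print(pressure_over_NkT())  # close to 1.0, i.e. P V = N k T
```

The few-many reduction is visible in the code: nothing about the collective is computed; the many-component pressure is literally the one-component result multiplied up.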

In chemistry, a few-many reduction is standard. We note that two molecules of hydrogen react with one molecule of oxygen to produce two molecules of water. We immediately scale that up to a macroscopic system of order 6.02 x 10^23 components by attributing identical behavior to 2 moles of hydrogen gas reacting with one mole of oxygen. This one example is typical of a great deal of chemistry.

In biology, in vitro studies seek to understand the behavior of individual cells isolated from their host organisms. Use of these methods persists because they have had many notable successes. They represent a success for few-many reduction. The obvious concern, however, is that some processes may not scale from the isolation of the in vitro environment to that of in vivo complexity. The competition over which of the two methods is appropriate has engendered some powerful denunciations of the "reductionist method." Here's one:

Marc H.V. Van Regenmortel, "Reductionism and complexity in molecular biology," EMBO Rep. 2004 November; 5(11): 1016–1020.

"The reductionist method of dissecting biological systems into their constituent parts has been effective in explaining the chemical basis of numerous living processes. However, many biologists now realize that this approach has reached its limit. Biological systems are extremely complex and have emergent properties that cannot be explained, or even predicted, by studying their individual parts. The reductionist approach—although successful in the early days of molecular biology—underestimates this complexity and therefore has an increasingly detrimental influence on many areas of biomedical research, including drug discovery and vaccine development."

One might suspect these matters of principle are entangled with very practical questions of who is to have research funds.

How Did This Confusion Arise? A Conjecture

There has clearly been a confusion in the literature. How did it happen? Its origins are likely tangled and unrecoverable in all detail. However I have a conjecture about how at least part of it arose.

Philosophers of science and condensed matter physicists tend to divide the world differently:

Philosophers of science tend to divide by theories.

This tendency has been inherited in philosophy of science from our logic-driven philosophical forebears. A generation or two ago they analyzed everything in terms of deductive relations between propositions. The outcome was that a theory was not merely a few apt physical propositions pertinent to the system at hand. The theory was the logical closure of those propositions; that is, the set of all propositions deductively entailed by them.

As a result, one cannot introduce a small molecular-statistical feature of matter into a theory of thermodynamics without having to import the entirety of the statistical mechanical theory. One cannot have the thermodynamic version of the second law of thermodynamics in the same analysis as the statistical mechanical version. The first says that the entropy of a closed system can never decrease; the second says that it can, with very small probability. If both appear, then logical anarchy ensues, since they contradict each other. All propositions are deducible from a contradiction.

The clean division under this regime is between two theories: an entirely non-statistical thermodynamics at the macroscopic scale; and a molecular-statistical mechanical analysis of the microscopic components.

Relations of reduction and emergence are then sought between these theories.

Under this regime, a few component analysis and a many component analysis belong within the one theory. The few component analysis is merely a simplification that arises by approximating the effect of the bulk of the components by the mean field they produce. It cannot be closed under deduction, for the approximation would eventually lead to propositions that contradict other parts of the theory. It is not a level in its own right that can enter into relations of reduction and emergence.

A point that is perhaps critical to understanding the debate in philosophy of science: philosophers who are suspicious of the notion of theory as a clearly demarcated unit will not readily default to this theory-based notion of level.

Condensed matter physicists tend to divide by scale.

For physicists, relevant fields of expertise are divided in some significant measure by scale. The length scale distinguishes particle physicists, who analyze matter in the very small, from cosmologists, who theorize on the largest scale possible. These theorists are not constrained by divisions between theories. A cosmologist feels quite free to draw on particle physics in cosmology if it might help, say, elucidate the character of dark energy.

A second example lies within particle physics itself, which is traditionally divided by energy scale. High energy interactions excite different processes than do lower energies.

Finally, condensed matter physics is demarcated by the types of physical systems addressed, such as liquids, solids and condensates. They are almost always systems with very many components. They are not distinguished by the theories which may be applied to them. The analysis may be classical or quantum mechanical, thermodynamic or statistical.

As a result, in the first cut, the field of condensed matter physics is demarcated by the scale of component number:

• A single atom of iron is not a system investigated by a condensed matter physicist, or at least not as a primary focus of interest.

• Many atoms of iron forming a magnet are a prime focus of interest. For once we have many atoms in a magnet, the internal structure of the magnet can undergo phase transitions between ferro- and paramagnetic states that comprise a core subject in condensed matter physics.

In analyzing these many component systems, condensed matter physicists can call up whatever fragments of thermodynamics or statistical physics that advances their understanding.

The appearance of new phenomena in the transition from few to many components is important, for it signifies novel phenomena peculiar to condensed matter physics. As a result, it is natural for condensed matter physicists to seek relations of reduction or emergence between the few and many component levels.

However, for physicists, thermodynamic and statistical mechanical theories are not strictly compartmentalized by the philosopher's relation of deductive closure. Hence those levels are less well articulated and are thus lesser candidates for relations of reduction and emergence.


The result of this difference of categorization is that claims concerning reduction and emergence do not travel well between the communities who divide the science differently. Claims of reduction and emergence can pass between them but they must be translated carefully, with their dependence on the different senses of level made clear. Otherwise we will have the confusion and miscommunication we now see.

Thanks to Alexander Reutlinger for pointing me to the literature on in vitro/in vivo experimentation in biology.

March 6, 2013. Minor edit, July 22, 2013. Copyright John D. Norton.