Philosophy of Physics

Pitt-Tsinghua Summer School for Philosophy of Science
Institute of Science, Technology and Society, Tsinghua University
Center for Philosophy of Science, University of Pittsburgh

At Tsinghua University, Beijing
June 27-July 1, 2011

John D. Norton
Department of History and Philosophy of Science
Center for Philosophy of Science
University of Pittsburgh

Einstein's Discovery of the Special Theory of Relativity

Monday June 27

Einstein's special theory of relativity was the first of the new theories of the modern physics of the 20th century. It is of special interest to philosophy of physics for its methodology. Einstein gave the theory an especially simple, axiomatic formulation and his famous paper contains the most important conceptual analysis of the 20th century, Einstein's operational analysis of distant simultaneity.

What Einstein did with special relativity became a model for discovery in science that many later theorists tried to replicate. Grasping this model requires a clear understanding of what Einstein did. However, even today, our understanding of how Einstein arrived at special relativity remains incomplete. The popular view has an undisciplined Einstein speculating wildly about time, clocks, light signals and simultaneity; and the theory supposedly emerges from this imaginative chaos.

Work over many decades by historians of science has revealed a different story. What led Einstein to special relativity was, by his own accounting, seven and more years of systematic work on problems in electrodynamics. The breakthrough that is the special theory of relativity came only after Einstein had exhausted every other possibility. He had nowhere left to go and, in desperation, was willing to entertain extreme measures.

A full understanding of how Einstein found special relativity should concentrate on Einstein's struggles in electrodynamics and, in particular, it should pursue how he sought to develop a so-called "emission" theory of light within the electrodynamics of his era. It was the only way, Einstein mistakenly believed, to implement a principle of relativity in electrodynamics. The real story lies in the years of Einstein's attempts to develop the emission theory and the crisis provoked by his failure.

Elsewhere, I have sought to understand this phase of Einstein's work. In this talk, I will focus on two smaller steps in Einstein's pathway. They are chosen because they have special interest to philosophers of science.

The first is Einstein's famous "chasing a light beam" thought experiment that provided his first step towards the special theory of relativity. Read as presented, the thought experiment is obscure. It is unclear just how Einstein arrives at the various outcomes claimed. A small minority of later authors have admitted its unintelligibility. Most, however, do not. They pretend that they understand it, with the unfortunate consequence that hapless readers are left doubly baffled by their failure to follow Einstein's account and that of the commentator as well!

There is, however, a way to make sense of Einstein's thought experiment. The mistake is to analyze the thought experiment in the context of ether theories of light. If instead we read the thought experiment in the context of emission theories of light, it becomes quite transparent. It turns out to implement many of the objections Einstein later recalled against emission theories of light.
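
The reading matters because the two contexts assign different speeds to the chased beam. A minimal gloss, in my notation rather than Einstein's: in an emission theory, light leaves its source at speed c relative to that source, so by Galilean velocity addition an observer chasing the beam at speed v measures its speed as

c' = c - v,

and a chaser with v = c finds c' = 0, a frozen waveform. In an emission theory the frozen waveform is a genuine prediction, and Einstein's complaints about it become objections to that theory.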

The second step is Einstein's discovery of the relativity of simultaneity. This is the breakthrough that showed Einstein how to reconcile his principle of relativity with the constancy of the speed of light. The first section of Einstein's 1905 paper gives his much celebrated analysis in terms of the synchronizing of clocks by light signals. This has promoted the idea that Einstein discovered the relativity of simultaneity by pondering how to synchronize clocks with light signals.
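
The effect itself is easily stated in modern terms, though the Lorentz transformation below is standard textbook material and not Einstein's 1905 route: a frame moving at speed v along the x-axis assigns to an event the time

t' = (t - vx/c^2) / √(1 - v^2/c^2),

so two events that are simultaneous but spatially separated in the first frame (Δt = 0, Δx ≠ 0) acquire a time difference Δt' proportional to -vΔx/c^2 in the second. Simultaneity depends on the frame.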

However, we have no evidence that this was how Einstein discovered the relativity of simultaneity; he left us no recollections of this form. Indeed, all reports of his thinking prior to the paper concern light not as a signal (a spatially localized pulse) but as a spread-out waveform.

My proposal is that there is another route that Einstein could plausibly have followed to the relativity of simultaneity. Once one adopts the principle of relativity, certain experimental results are recognizable as direct experimental manifestations of the relativity of simultaneity. Those experiments include stellar aberration, Fizeau's measurement of the speed of light in moving water and Airy's water-filled telescope experiment. That Einstein recognized this and thereby found the effect would explain his repeated insistence on the importance of these particular experiments in his discovery of special relativity.
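
For orientation, the first-order results at issue are, in their standard textbook forms (not quotations from Einstein): stellar aberration tilts the apparent direction of a star overhead through the small angle

α ≈ v/c

for an observer moving at speed v; and Fizeau found that light in water of refractive index n, with the water moving at speed v, travels at

w = c/n + v(1 - 1/n^2),

Fresnel's partial "drag" of the light wave. On the proposal above, someone already committed to the principle of relativity can read the drag factor (1 - 1/n^2) as a direct trace of the relativity of simultaneity.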

Reading:

John D. Norton, "Discovering the Relativity of Simultaneity: How Did Einstein Take 'The Step'?" Translated into Chinese by Wang Wei. In Einstein in a Trans-cultural Perspective. Eds. Yang Jiang, Liu Bing. Tsinghua University Press. Download Chinese and English versions.

John D. Norton, "Chasing the Light: Einstein's Most Famous Thought Experiment," prepared for Thought Experiments in Philosophy, Science and the Arts, eds., James Robert Brown, Mélanie Frappier and Letitia Meynell, Routledge. Download.

John D. Norton, "Einstein's Special Theory of Relativity and the Problems in the Electrodynamics of Moving Bodies that Led him to it." in Cambridge Companion to Einstein, M. Janssen and C. Lehner, eds., Cambridge University Press. Forthcoming. Download. (See especially Section 4.)

Much of this material is repeated in the Goodies pages:
"Chasing the Light: Einstein's Most Famous Thought Experiment" and
"Discovering the Relativity of Simultaneity: How did Einstein take "The Step"?"

Background:

For a more technical development that includes the electrodynamics and more, see
John D. Norton, "Einstein's Investigations of Galilean Covariant Electrodynamics prior to 1905," Archive for History of Exact Sciences, 59 (2004), pp. 45-105. Download.

More generally, see the references listed in the section of my website, History of Special Relativity and Einstein's Work of 1905.

Causation as Folk Science

Tuesday, June 28

This talk develops a form of skepticism about causation; and then argues that this skepticism is compatible with the frequent use of causal labels in physics.

Things in the world are connected in many ways and it is the business of science to discover these connections. It would be folly to try to legislate in advance what those connections must be. My concern is that causal metaphysicians commit this folly. They interrogate one another industriously as to whether they would say "this" causes "that" in situations that are increasingly contrived and bewildering.

At best, the outcome is the writing of an entry in a dictionary; and not a very interesting one, since it merely codifies a usage momentarily acceptable to a small group determined to corrupt each other's intuitions. At worst, the causal metaphysicians are confusing this writing of a dictionary entry with the discovery of how things are connected in the world factually.

I will make these nebulous worries more precise by identifying a particular doctrine I call "causal fundamentalism" and posing a dilemma for it. That doctrine asserts that the world is governed fundamentally by a principle of causality and that it is the burden of the individual sciences to find the expression of that principle in their specialized domains.

The dilemma asks whether this principle is factual or not. If it is factual, then causal fundamentalists are asked to state just how the principle factually constrains things in the world. All efforts over millennia to do this, I urge, have failed. And they must fail if one thinks that a priori science is impossible.

If the principle is not factual, then our use of causal language is merely one of labeling, which is the view I hold. That does not make the labeling valueless. However, the value is pragmatic: it provides a comfortable language in which to discuss whichever sorts of connectedness interest us.

Physics routinely talks of causality. Both relativity and quantum theory are founded on various causality conditions. I will argue that these conditions do not vindicate causal fundamentalism. Rather, they arise merely as useful labels for contingent features of our present science: that spacetime has a lightcone structure and that propagations outside the lightcone do not occur.
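
In sketch, and as a standard formulation rather than anything peculiar to this talk: two events separated by time Δt and spatial distance Δx are causally connectable just in case

c^2 (Δt)^2 - (Δx)^2 ≥ 0,

that is, just in case a propagation at or below the speed of light c can link them. The causality conditions of relativity theory amount to the requirement that physical processes respect this lightcone structure; nothing in them calls for a deeper, independent principle of causality.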

Finally, I will respond to an apparent use of a principle of causality in the literature on scattering theory and argue that there is no cogent principle invoked.

Reading:

John D. Norton, "Causation as Folk Science," Philosophers' Imprint Vol. 3, No. 4

John D. Norton, "Do the Causal Principles of Modern Physics Contradict Causal Anti-Fundamentalism?" pp. 222-34 in Thinking about Causes: From Greek Philosophy to Modern Physics . eds. P. K. Machamer and G. Wolters, Pittsburgh: University of Pittsburgh Press, 2007.

John D. Norton, "Is There an Independent Principle of Causality in Physics?" British Journal for the Philosophy of Science, 60 (2009), pp. 475-86.

Information and Thermodynamic Entropy

Wednesday, June 29

It is often supposed that there is a deep connection between information and thermodynamic entropy. It is the sort of profound discovery that must interest philosophers of science. For it would establish a connection between the content of the abstract world of ideas and the magnitudes that govern the operation of steam engines.

There is a superficial connection. The thermodynamic entropy of a state grows as a function of the number of microstates associated with it; and the formula for that function is the same as Shannon's measure of information. That similarity of two formulae has provided a starting point for attempts at more elaborate, principled theories that relate information to thermodynamic entropy.
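
The shared formula, in its standard forms: a macrostate compatible with W equally probable microstates has thermodynamic entropy

S = k ln W, or, for microstate probabilities p_i, S = -k Σ p_i ln p_i,

while Shannon's measure of the information carried by a source with outcome probabilities p_i is

H = -Σ p_i log2 p_i.

The two expressions differ only by the constant factor k ln 2 and the base of the logarithm.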

An older proposal was "Szilard's principle," which attaches a thermodynamic entropy cost of k ln 2 to the acquisition of one bit of information. It has been contradicted and replaced by a newer principle, "Landauer's principle," which attaches an entropy cost of k ln 2 to the erasure of one bit of information. Landauer's principle is now the central tenet of the new field of the "thermodynamics of computation." According to it, all computations can in principle be conducted without entropy cost, excepting those computational processes, like erasure, that map many states to one.
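
In symbols, the standard statements of the two principles are:

Szilard's principle: acquiring one bit of information creates at least ΔS = k ln 2 of thermodynamic entropy.
Landauer's principle: erasing one bit of information creates at least ΔS = k ln 2 of thermodynamic entropy,

so that erasure at temperature T must pass at least Q = kT ln 2 of heat to the surroundings, roughly 3 x 10^-21 J at room temperature.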

All this looks like the development of another exciting branch of modern science, replete with principles, laws, theorems, illustrative thought experiments and novel insights. Alas, deeper scrutiny of the field shows that this is not so. The connections posited between information and thermodynamic entropy are little more than groundless speculation. They arise through circular reasoning, unjustified extrapolation from a few thought experiments and misapplications of statistical physics. For example, we may lack information and so assign a probability p to certain outcomes. On that basis alone, we cannot form the familiar "p ln p" and declare it a thermodynamic entropy. Yet just that declaration is common in this literature.

My talk will briefly review the origins of this literature in Maxwell's 19th century positing of a tiny demon with the power to reverse the second law of thermodynamics. The modern literature developed from efforts to argue that this demon must fail because it must process information to function and that processing would incur entropy costs that defeat it.

I will review briefly the various approaches that have been used to demonstrate Landauer's principle, indicating why all of them have failed. I will then develop what I believe is a problem quite fatal to the present "thermodynamics of computation."

That field is based on the supposition that one can carry out thermodynamically reversible processes at molecular scales and use them to implement computational processes. On the contrary, I will show that all such processes are disrupted by thermal fluctuations and that these disrupting fluctuations can be overcome only by dissipating quantities of entropy greater than those tracked by the Landauer limit.
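
A rough comparison of magnitudes, my own illustration rather than a result from the paper, shows why fluctuations are decisive at this scale. The thermal energy scale of a single molecular degree of freedom at room temperature is

kT ≈ 4 x 10^-21 J,

which already exceeds the Landauer limit of kT ln 2 ≈ 3 x 10^-21 J. A process whose entire entropy budget is set by the Landauer limit is buffeted by fluctuations at least as large as that budget.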

Reading:

"Waiting for Landauer," Studies in History and Philosophy of Modern Physics, forthcoming. Download.

Goodies pages:
When a Good Theory meets a Bad Idealization: The Failure of the Thermodynamics of Computation.
No Go Result for the Thermodynamics of Computation

Background

John Earman and John D. Norton, "Exorcist XIV: The Wrath of Maxwell's Demon." Studies in History and Philosophy of Modern Physics. Part I: "From Maxwell to Szilard," 29 (1998), pp. 435-471; Part II: "From Szilard to Landauer and Beyond," 30 (1999), pp. 1-40. Download.

John D. Norton, "Eaters of the Lotus: Landauer's Principle and the Return of Maxwell's Demon." Studies in History and Philosophy of Modern Physics, 36 (2005), pp. 375-411. Download.

Cosmic Confusions: Not Supporting versus Supporting Not-

Thursday, June 30

It has become routine to think of the probability calculus as the universally applicable logic of induction. This approach has enjoyed many notable successes. However, it has its limits and, when these are passed, an uncritical use of probabilistic analysis can create considerable mischief.

Probabilistic analysis fails, I argue, in the extreme case of complete neutrality of evidential support ("complete ignorance" in subjective terms). In that case, the null import of evidence is properly captured by a non-additive distribution. It assigns a neutral support value "I" (= "ignorance") to each of the mutually exclusive outcomes a, b, c, ... and to their contingent disjunctions (a or b), (a or c), (a or b or c), ... (Caution: this last assignment is highly non-standard!)
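
Schematically, for three mutually exclusive and exhaustive outcomes a, b, c, the neutral distribution assigns

[a] = [b] = [c] = [a or b] = [a or c] = [b or c] = I,

whereas any probability measure is additive, P(a or b) = P(a) + P(b), and so must give a disjunction more support than a disjunct whenever the other disjunct has nonzero probability. That additivity, not any fact about the evidence, is what prevents probabilities from representing complete neutrality.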

Cases of complete neutrality of support can be found fairly readily in cosmology and especially in examples favored by philosophers. If one insists on treating cases of complete neutrality of support in a probabilistic analysis, one ends up committing the "inductive disjunctive fallacy." That fallacy allows us to infer incorrectly through essentially a priori reasoning that there very probably has to be something rather than nothing; and that the universe is very probably spatially infinite.
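
The fallacy in schematic form: partition the possibilities into very many mutually exclusive alternatives and, pleading ignorance, spread probability uniformly over them, 1/N to each. If just one alternative is "nothing," then

P(something) = (N - 1)/N,

which approaches 1 as N grows. Sheer ignorance has been converted into near certainty. The neutral distribution blocks the inference, since it assigns the same neutral value I both to "nothing" and to the vast disjunction "something."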

The doomsday argument purports to show that cosmic doom is more likely to come sooner rather than later. It is another example of a spurious result that derives entirely from the use of the wrong inductive logic. I will sketch a better way to handle the same example by means of a non-probabilistic logic of induction.

Reading:

John D. Norton, "Cosmic Confusions: Not Supporting versus Supporting Not-". Philosophy of Science, 77 (2010), pp. 501-23. Download.

Optional:

For a related example of Bayesian failure that I will not discuss, see:
"There are No Universal Rules for Induction," Philosophy of Science, 77 (2010) pp. 765-77. Download.
Goodies page: "Induction without Probabilities."

Approximation and Idealization: Why the Difference Matters

Friday, July 1

My principal goal in this talk is to develop a better understanding of how infinite limits are used in statistical physics.

First, however, I will need to clarify some terminology. The terms "approximation," "idealization" and "model" appear frequently in science. However there seems to be no standard usage. So, I will stipulate how these terms will be used in my talk:

Approximation will be used for a linguistic entity. It is an inexact description of some system. If we have a boiling pot of stew, we may describe it inexactly by the sentence "Its temperature is 100°C." The description is inexact since 100°C is the boiling point of pure water under normal conditions. The boiling point of stew, a mixture of water and many other things, can vary a little from it.

Idealization will be used to designate a novel system, real or imaginary, that can be used to generate inexact descriptions of the original system. In the stew example, the idealization is an imaginary pot of pure water that does boil at exactly 100°C. This exact description of the pure water provides an inexact description of the boiling stew.

Now to statistical physics: Gases, solids and liquids consist of very many component molecules. They behave like continuous substances only because there are very many, very small components. We recover their macroscopic behavior by considering the limit of infinitely many components.

What is the nature of this limit? If it is used to arrive at a fictitious system of infinitely many components, then that system is an idealization of the real, finite component system. I will urge that this is a dangerous idealization to attempt. For, in many cases, the infinite system has properties very different from those of the finite system; it may be indeterministic, for example. Or the infinite system may have to bear inconsistent properties and thus not exist at all.

A safer approach is to consider the properties of the finite systems as a function of the number n of components. Then we can consider the limit of that function as its argument n goes to infinity. What results is a limit property. It is merely a useful but inexact description of the finite component system. It is an approximation. The gain is that it avoids the unexpected misbehavior of the systems of infinitely many components.
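
The contrast can be put in one line. For a property f(n) of the n-component system, the approximation uses

lim (n → ∞) f(n),

the limit of the properties, while the idealization uses the properties of the limit system, the system with infinitely many components; and the two need not agree. For example, the relative size of thermal fluctuations in an extensive quantity typically scales as 1/√n, so its limit is simply zero, while the infinite system itself may misbehave or fail to be well defined at all.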

Reading:

John D. Norton, "Approximation and Idealization: Why the Difference Matters". manuscript. Download.