Confirmation, Induction and Science
Thursday 8 March - Saturday 10 March, 2007
London School of Economics
Titles and abstracts of the invited speakers' talks:
Philip Dawid (University College London)
Simple induction involves an inference that, when we meet conditions essentially similar to what we have seen in the past, we can expect them to develop in an essentially similar way. But often the new conditions differ from past experience in important ways, e.g. because of time trends, or because we have moved from merely observing to intervening. Then we need more imagination to infer what is likely to happen, and formal methods for producing and criticising such bold inductions. I will examine the variety of ways in which these problems are tackled in both associational and causal statistical inference.
Malcolm Forster (University of Wisconsin, Madison)
Is Scientific Reasoning Really that Simple?
Bayesian philosophy of science assumes that every pattern of scientific reasoning is a kind of probabilistic inference, or should be reconstructed as such. This view is supported by the Likelihood Principle, which is the philosophical theory of evidence that lies at the heart of Bayesian and Likelihoodist statistics (including AIC). The principle works well within the context of a single model. But scientific reasoning involves the comparison of rival models, and simple examples appear to show that the Likelihood Principle is false within this realm. This suggests that Bayesian and Likelihoodist philosophies of science are restricted in their scope. Moreover, there are alternative philosophical theories that more naturally match the patterns of argumentation commonly found in science.
Titles and abstracts of the contributed talks:
David C. Craig: Theory Evaluation
Theory evaluation requires an understanding of theories as well as the criteria used to evaluate them. In this paper, theories are analyzed in terms of models, hypotheses that link models and nature, and principles that specify model behavior. Examples are drawn from the kinetic theory of heat, the elastic solid theory of light and the quantum theory of spectroscopy. Theory evaluation is characterized in terms of (1) agreement with accepted belief and (2) assimilation of the multifarious to the theory’s models. Agreement with accepted belief provides evidence for realism, defined in pragmatic terms, but realism and assimilation are at odds with one another. Finally, the dependence of theories on models makes theories atavistic, so that when we evaluate modern theories we are in part evaluating an older language.
Robert G. Hudson: Robustness vs. Model Independence
My goal in this paper is to consider two separate but connected topics, one historical, the other philosophical. The first topic concerns the forms of reasoning contemporary experimental astrophysicists use to investigate the existence of WIMPs (weakly interacting massive particles). This reasoning takes two forms, one model-dependent and the other model-independent, and I examine the arguments one WIMP research group (DAMA) uses to support the latter. The second topic concerns recent support Kent Staley has offered for a form of scientific reasoning called ‘robustness’, and I argue that the model-independent strategy propounded by DAMA improves on robustness.
Nicholas Maxwell: The Problem of Induction and Metaphysical Theses Concerning the Comprehensibility and Knowability of the Universe
Even though evidence underdetermines theory, often only one theory is regarded as acceptable in the light of the evidence. This suggests that there are additional unacknowledged assumptions which constrain which theories are to be accepted. In the case of physics, these additional assumptions are metaphysical theses concerning the comprehensibility and knowability of the universe. Rigour demands that these implicit assumptions be made explicit within science, so that they can be critically assessed and, we may hope, improved. This leads to a new conception of science, one which we need to adopt in order to solve the problem of induction.
Frederick Eberhardt: Reliability via Synthetic A Priori – Reichenbach’s Doctoral Thesis on Probability
Reichenbach is well known for his limiting frequency view of probability. Perhaps less known are Reichenbach's early views on probability and its epistemology. In his doctoral thesis from 1915, Reichenbach espouses a Kantian view of probability, where the convergence limit of an empirical frequency distribution is guaranteed to exist thanks to the synthetic a priori principle of lawful distribution. Reichenbach claims to have given a purely objective account of probability, while integrating the concept into a more general philosophical and epistemological framework. Many of Reichenbach’s major developments in probability already surface – albeit in sometimes quite different form – in this early piece of work.
Nicholaos Jones: Resolving the Bayesian Problem of Idealization
In "Bayesian Confirmation of Theories that Incorporate Idealizations", Michael Shaffer argues that, in order to show how idealized hypotheses can be confirmed, Bayesians must develop a coherent proposal for how to assign prior probabilities to counterfactual conditionals. This paper develops a Bayesian reply to Shaffer's challenge that avoids the issue of how to assign prior probabilities to counterfactuals by treating idealized hypotheses as abstract descriptions. The reply allows Bayesians to assign non-zero degrees of confirmation to idealized hypotheses and to capture the intuition that less idealized hypotheses tend to be better confirmed than their more idealized counterparts.
Marcel Weber: The Crux of Crucial Experiments: Confirmation in Molecular Biology
I defend the view that experiments can provide a sufficient reason for preferring one among a group of hypotheses against the widely held belief that "crucial experiments" are impossible. My argument is based on the examination of a historical case from molecular biology, namely the Meselson-Stahl experiment. "The most beautiful experiment in biology", as it is known, provided the first experimental evidence for the operation of a semi-conservative mechanism of DNA replication, as predicted by Watson and Crick in 1953. I use a mechanistic account of explanation to show that this case is best construed as an inference to the best explanation (IBE). Furthermore, I defend IBE against Van Fraassen's "bad lot" objection.
Leah Henderson, Noah Goodman, Josh Tenenbaum and Jim Woodward: Frameworks in science: a Bayesian approach
The idea that there are frameworks guiding learning of specific scientific theories has been a theme in philosophy of science, articulated in different ways by Carnap, Kuhn, and others. The role of frameworks can be brought into the realm of confirmation theory and described in Bayesian terms using hierarchical Bayesian models (HBMs), which have already proved useful for modelling individual learning in cognitive science. HBMs, together with new techniques for performing Bayesian inference in these settings, also provide a new perspective on some problems facing Bayesians, in particular how to deal with changes in the space of hypotheses.
Michael Weisberg: Robustness Analysis and the Volterra Principle
Theorizing in ecology and evolution often proceeds via the construction of multiple idealized models. To determine whether a theoretical result actually depends on core features of the models and is not an artifact of simplifying assumptions, theorists have developed the technique of robustness analysis, the examination of multiple models looking for common predictions. A striking example of robustness analysis in ecology is the discovery of the Volterra Principle, which describes the effect of general biocides in predator-prey systems. This paper details the discovery of the Volterra Principle by robustness analysis. It considers the classical ecology literature on robustness and introduces two individual-based models of predation, which are used to further analyze the Volterra Principle. The paper also introduces a distinction between parameter robustness, structural robustness, and representational robustness, and demonstrates that the Volterra Principle exhibits all three kinds of robustness. Finally, the paper discusses the confirmation theoretic role of the three types of robustness analysis.
Renato Kinouchi: Peirce in the long run: remarks on knowledge a ulteriori
At first sight, probability makes the future uncertain and unpredictable. This claim, however, is not completely true. C. S. Peirce held that in probabilistic matters we cannot predict singular cases, but we can have accurate knowledge about frequencies in the long run — there is a collective necessity that constrains large collections of cases. This view, however, does not fit the traditional Kantian classification of synthetic knowledge as a priori or a posteriori. Probability is not synthetic a posteriori, because it implies collective necessity. At the same time, probabilistic statements cannot be synthetic a priori, since their necessity applies only to the indefinite series, not to the next case. Our central concern is that, in the Kantian framework, there is no clear place for the necessity implied by long-run frequencies.
Juha Saatsi: ‘Material Theory of Induction’ and Scientific Realism
John Norton has advanced a general view of induction—‘Material Theory of Induction’—that renders ampliative reasoning in a deep sense local. This paper is a sympathetic appraisal of this view, applying it to the scientific realism debate. I will argue that the scientific realist should turn to such a local construal of ampliative reasoning in her attempt to justify beliefs about unobservables. More generally, the distinction that Norton draws between ‘material’ and ‘formal’ theories of induction is helpful in contrasting the intuitions behind various realist arguments, and their strengths and weaknesses. As far as justificatory challenges of induction are concerned, it is in this context that the Material Theory of Induction pays most dividends.
Jon Williamson: Objective Bayesianism as Scientific Confirmation
This paper discusses the view that confirmation of scientific theories can be accounted for by means of the objective Bayesian interpretation of probability. The paper performs three tasks. First, it gives a historical introduction to objective Bayesianism. Second, it discusses one of the main criticisms of objective Bayesianism. Some critics have rejected objective Bayesianism on the grounds that it disagrees with results obtained by Bayesian conditionalisation; in contrast I argue that one should prefer objective Bayesian updating over Bayesian conditionalisation. Third, the paper discusses the application of objective Bayesianism to the philosophy of science. I argue that objective Bayesianism is no harder to apply to the philosophy of science than are other varieties of Bayesianism, and that the objectivity of objective Bayesianism is what we want from an account of scientific confirmation.
Jiji Zhang and Peter Spirtes: Detection of Unfaithfulness and Robust Causal Inference
Many algorithms for inferring causality from statistical data developed in the AI community are grounded on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness (or Stability) Condition. Philosophical discussions of the latter condition have focused on how often and in what domains we can expect it to hold or fail. In this paper we examine the faithfulness condition from a different, testing perspective. The main result we present, simple as it is, has important theoretical implications and practical consequences. On the theoretical side, it points to a strictly weaker Faithfulness condition which is nonetheless sufficient to justify causal inference. On the practical side, it shows a way to make some causal inference procedures more robust. The latter, we argue, is related to the interesting issue of “uniform consistency” in causal inference.
Christopher Pincock: From Sunspots to the Southern Oscillation: Confirming Models of Large-Scale Phenomena in Meteorology
The epistemic problem of assessing the support that some evidence confers on a hypothesis is considered using an extended example from the history of meteorology. In this case, and presumably in others, the problem is to develop techniques of data analysis that will link the sort of evidence that can be collected to hypotheses of interest. This problem is solved by applying mathematical tools to structure the data and connect it to the competing hypotheses. I conclude that mathematical innovations provide crucial epistemic links between evidence and theories precisely because the evidence and theories are mathematically described.
Wendy Parker: How to Think about Models and Their Evaluation
Starting from the view that scientific models are representational tools, I consider how the task of model evaluation can be sensibly and fruitfully conceptualized. I suggest that model evaluation should be thought of as an investigation of the adequacy of a model for a set of purposes, and I argue that there are advantages to approaching the task of model evaluation with the goal of severely testing for, rather than building confidence in, adequacy-for-purpose. I illustrate these ideas in the context of climate modeling.
Mike Titelbaum: Unlearning What You Have Learned
Bayesian modeling techniques have proven remarkably successful at representing rational constraints on agents’ degrees of belief. Yet Frank Arntzenius’s “Shangri-La” example shows that these techniques fail for stories involving forgetting. This paper presents a formalized, expanded Bayesian modeling framework that generates intuitive verdicts about agents’ degrees of belief after losing information. The framework’s key result, called Generalized Conditionalization, yields applications like a version of Bas van Fraassen’s Reflection Principle for forgetting. These applications lead to questions about why agents should coordinate their doxastic states over time, and about the commitments an agent can make by assigning degrees of belief.
John D. Norton: Induction without Probabilities
A simple indeterministic system is displayed and it is urged that we cannot responsibly infer inductively over it if we presume that the probability calculus is the appropriate logic of induction. The example illustrates the general thesis of a material theory of induction, that the logic appropriate to a particular domain is determined by the facts that prevail there.
Brian Davies: Newton's concept of induction
We describe Newton's concept of induction as presented in Principia, and investigate the extent to which he actually used this concept in deriving his law of gravitation. Our conclusion is that his 'General Scholium' misrepresents the contents of Principia, for known political reasons, and that the main text of Principia follows a remarkably sophisticated methodology.
Kent Staley: Can Error-statistical Inference Function Securely?
This paper analyzes Deborah Mayo's error-statistical (ES) account of scientific evidence in order to clarify the kinds of "material postulates" it requires and to explain how those assumptions function. A secondary aim is to explain and illustrate the importance of the security of an inference. After finding that, on the most straightforward reading of the ES account, it does not succeed in its stated aims, two remedies are considered: either relativize evidence claims or introduce stronger assumptions. The choice between these approaches turns on the value attached to two aims of inquiry that are in tension: drawing strong, informative conclusions and reasoning securely.
Lilia Gurova: A Plea for a Moderate Anti-Justificationism
Traditionally, the label “anti-justificationism” has been attached to the claim that inductive inferences are not justifiable. The anti-justificationism presented here is different. Since it admits the existence of strengthened inductive generalizations which are justified, it does not deny the possibility of justifying induction; rather, it opposes treating justifiability as a supreme and unconditional epistemic virtue. The proposed view is not against strengthening inductive generalizations in general (and in this sense it is “moderate”); it simply holds that scientists must be aware of the price they pay for making their inductive inferences stronger. The view is supported by an example drawn from the history of the so-called “cathode rays”.
Deborah G. Mayo: Double Trouble: Use-constructing Methods and Their Errors
A common intuition about evidence is that if data x have been used to construct a hypothesis H(x), then x should not be used again as evidence in support of H(x). However, some cases of "use-constructed" hypotheses succeed in being well-tested by the same data used in their construction. To determine when this is so, I argue, we need to consider the properties of the use-construction method and the specific errors that need to be ruled out to sustain the inference of interest. I illustrate with a number of examples that have arisen in discussions of double-counting and selection effects in philosophy of science and statistics.
Aris Spanos: The Curve-Fitting Problem, Akaike-Type Model Selection, and the Error Statistical Approach
The main aim of this paper is to revisit the curve-fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest’ curve. It is argued that the current framework for addressing this problem is unduly dominated by the mathematical approximation perspective, and pays insufficient attention to statistical adequacy. The error-statistical approach provides a more appropriate framework because (i) statistical adequacy provides the criterion for the fittest curve, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. The arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds.
The conference is generously supported by the Center for Philosophy of Science, University of Pittsburgh, the Centre for Philosophy of Natural and Social Science at the LSE, the British Society for the Philosophy of Science and the Mind Association.
The Center for Philosophy of Science, University of Pittsburgh, encourages philosophers of science to apply for Visiting Fellowships at the Center.
The Centre for Philosophy of Natural and Social Science at the LSE invites philosophers of science to apply for long-term and short-term Visiting Fellowships at the Centre.
Organisers:
Stephan Hartmann, LSE
John Norton, Pittsburgh
Programme committee:
Nancy Cartwright, LSE
Philip Dawid, University College London
Branden Fitelson, Berkeley
Malcolm Forster, Wisconsin
Allan Franklin, Colorado
Stephan Hartmann, LSE
John Norton, Pittsburgh
Jon Williamson, Kent
John Worrall, LSE