6th Quadrennial Fellows Conference Abstracts
Sunday, 20 July - Thursday, 24 July 2008
Center for Philosophy of Science
University of Pittsburgh

Paulo Cesar Coelho Abrantes
Mindreading and Human Evolution
I would like to appraise different accounts of the role mindreading abilities might have played in human evolution. In one scenario, cumulative culture ostensibly depends on a special capacity for social learning through imitation, which might have been adaptive in the (physical and social) environments in which our ancestors seemingly lived. Culture is seen here not only as the proximate cause of the diversity in human behavior, but also as the ultimate cause of psychological dispositions that favored better learning and transmission of cultural variants.

Alexander Afriat
The Relativity of Inertia and Reality of Nothing
We first see that the inertia of Newtonian mechanics is absolute and troublesome. General relativity can be viewed as Einstein's attempt to remedy this by making inertia relative to matter - perhaps imperfectly though, as at least a couple of degrees of freedom separate inertia from matter in his theory. We consider ways the relationalist (for whom it is of course unwelcome) can try to overcome this underdetermination, dismissing it as physically meaningless, especially by insisting on the right transformation properties.

Aristides Baltas
Paradigm Change as Grammatical Change: Older and Newer Issues
If radical paradigm change in science is regarded as grammatical change, some of the old issues seem to become solved or dissolved while some novel issues arise. Among the former are incommensurability, relativism, progress, communication breakdown and so forth, while prominent among the latter is the relation of grammar to logic and the effects of this relation on the discipline of historiography of science. The paper will outline the general idea involved (i.e. viewing paradigm change as grammatical change), will sketch the ways in which old issues such as the above tend to disappear, and will try to trace the directions to be followed for tackling the novel issues.

Michael Bradie
Karl Popper, Charles Darwin and the Most Impressive 19th-Century Evolutionary Philosopher
In his intellectual biography, Karl Popper wrote: "I have always been extremely interested in the theory of evolution, and very ready to accept evolution as a fact. I have also been fascinated by Darwin as well as by Darwinism – though somewhat unimpressed by most of the evolutionary philosophers; with the one exception, that is, of Samuel Butler."

Popper had a well-known, stormy relationship with Darwinism and the theory of evolution. On the one hand, his theory of scientific progress through conjectures and refutations is an epistemological analogue to the theory of natural selection. On the other hand, he was extremely ambivalent about the theory itself, characterizing it at one point as an untestable hypothesis, at another as a useful metaphysical research program, and on yet other occasions as a straightforward scientific theory.

In my paper, I propose to explore the tangled web that connects Popper to Darwinism, on the one hand, and, on the other, to take a careful look at Samuel Butler's evolutionary philosophy in an attempt to determine what merit, if any, there is to Popper's high regard for his views.

Giovanni Camardi
How Reliable Is the Concept of “Genetic Information”?
A major problem in biology is the lack of a satisfactory theory of genetic expression. Information theory has offered a transmission scheme capable of explaining causal interactions between genes and proteins, and a sort of algebraic analysis for measuring the orderliness and complexity of these interactions. As a result, a multistage activity of computational and statistical modeling has been developed. But the tremendous growth in our capacity for encoding and computing biological data has not been paralleled by an adequate discovery of regularities. This state of affairs raises a number of philosophical issues. I will try to determine what the payoff is of applying the theory of information to genetics and of using a concept of "genetic information".

Scott Carson
Aristotle's Conception of the Relation Between the Science of Geometry and the Ontology of Physical Space
In a notorious passage of his Physics (III.7, 207a33-b34) Aristotle remarks that mathematicians "make no use of" the infinite in their proofs, relying instead on finite lines "of any size they wish". Since Euclid’s Fifth Postulate requires lines of infinite, not finite, length, Aristotle’s remark raises the question of how he conceived of the relation between the science of geometry and the ontology of physical space. This paper explores the philosophical principles according to which Aristotle, on the one hand, excludes actually infinite magnitudes on the grounds that such entities would exceed the boundaries of the kosmos, and yet, on the other hand, rejects appeals to merely potentially infinite magnitudes from the domain of geometry for reasons only remotely connected to the ontology of physical space.

Philip Ehrlich
A Re-examination of Zeno's Paradox of Extension
Several attempts have been made over the centuries to resolve Zeno's Paradox of Extension. Of these, the best known and most influential is the one due to Adolf Grünbaum. However, Zeno's Paradox of Extension may be, and I believe should be, construed as a general paradox of extension, not merely as a paradox of extension in classical real space. But (at a minimum) this raises serious questions about the generality of Grünbaum's treatment since it appeals to considerations regarding the classical continuum which are not applicable to the general case. In my talk, I will present a novel and altogether elementary alternative to Grünbaum's famous analysis which does not suffer from this limitation. 
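
For background, a textbook gloss (in standard notation, not the paper's own analysis) of the measure-theoretic point behind Grünbaum's resolution: Lebesgue measure assigns

\[ \mu([0,1]) = 1, \qquad \mu(\{x\}) = 0 \ \text{for every point } x, \]

and additivity holds only for countable collections of disjoint sets, \( \mu\bigl(\bigcup_{i=1}^{\infty} E_i\bigr) = \sum_{i=1}^{\infty} \mu(E_i) \). Since \([0,1]\) is an uncountable union of its points, summing the zero lengths of the points yields no contradiction with the interval's positive length. It is precisely this appeal to special features of the classical continuum that, on the view sketched above, need not carry over to the general case.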

Mehmet Elgin
How Could There Be True Causal Claims Without There Being Special Causal Facts in the World?
There is a consensus among philosophers that the truth of causal claims has to be grounded in facts about the world. To say otherwise seems to invite mysterious powers into our ontology, which no one would like. Thus, when I say that for causal claims to be true we don't need to postulate any special kinds of facts in our ontology, I may seem to be stating a truism with which no one would disagree. Yet, I think that even if this turns out to be the consequence, it is worth investigating some subtle differences in what we mean by 'facts'. Since there is no disagreement in saying that the truth of causal claims must be grounded in facts, if there is any disagreement it must be about what kind of facts we should allow in our ontology. In this paper, I will adopt the minimalist principle that says: don't postulate the existence of entities if they are not necessary to make causal claims true. The rationale here is that once we show that causal claims can be objectively true without there being special causal powers, the postulation of special causal powers is not dictated by the evidence or by ordinary facts; such an activity would invite us to engage in unnecessary metaphysics. On the other hand, the special sciences do postulate special kinds of properties within their discourse (natural selection, intentions, and cultures, to name a few). Is this practice then unjustified? No such thing follows from the minimalism I endorse. But there is a caveat. Minimalism is consistent with the view that each of these properties postulated by the special sciences is necessary, so that we may postulate them in our ontology. I will, however, argue that the truth of claims about these properties doesn't require that such properties be part of the ontology of the world. Perhaps this thesis, once spelled out in more detail, is uncontroversial. If so, then this paper can be considered an attempt to clarify what exactly this truism amounts to. Using Mondadori and Morton's (1976) argument against moral realism as a foil, I will argue that it is possible to be a causal anti-realist with respect to the claim that what makes causal claims true must be essentially different from what makes non-causal claims true, while holding that causal claims can be objectively true. I will then briefly discuss the relevance of this thesis to the issue of mental causation and to the question of whether natural selection is a causal process.

Malcolm Forster
Towards a Unification of Special Relativity and Quantum Mechanics
There is a long-standing problem in the foundations of physics: how to understand quantum mechanics (the physics of the very small) and Einstein's theory of special relativity within a single unified framework. The paper describes an approach to this problem being developed by Alexey Kryukov.

The basic conceptual innovation is to define space, the space of everyday experience, and relativistic spacetime, as intrinsically quantum mechanical objects.  The new view dispenses with the common idea that a relativistic quantum mechanics should be built on top of a pre-existing curved spacetime.

It is an important fact that the framework is very tightly constrained.  In order for spacetime to be defined as an intrinsically quantum mechanical object, a specific structure (a specific metric or inner product) has to be imposed on the quantum mechanical space of states.  But as soon as that hypothesis is in place, many of our most cherished assumptions about the ontology of space and time are revised in a way that is provably consistent with the tested predictions of special relativity and non-relativistic quantum mechanics.  New predictions are made and new conceptual horizons open up.  The new theory promises to have implications for many widely discussed issues in the foundations of physics, such as the objectivity or the reality of time, the nature of time-energy uncertainty, non-locality, and the Einstein-Podolsky-Rosen argument, to name a few.

Rick Grush
The Objects of Demonstrative Thought
Gareth Evans initiated a project of understanding the psychological and perceptual mechanisms that underlie our capacity to grasp demonstrative thoughts, and hence to understand utterances employing demonstrative expressions, such as 'that bird' or 'this book'. The mechanisms of spatial representation and the tracking of objects over time were key components of his account of our ability to grasp demonstrative thoughts about material objects. Evans' project has given rise to followers, including Austen Clark and John Campbell, who have misunderstood some fundamental features of it, and as a result offered 'improvements' on it that are far less adequate than Evans' original proposals. In one-sentence form: space was for Evans crucial to the capacity to think about physical objects, but was not a requirement of demonstrative thought per se -- though because it is involved in the capacity to grasp thoughts about physical objects, it will be involved in those demonstrative thoughts whose objects are physical objects. Subsequent accounts have placed spatial representation at the heart of demonstrative identification, with disastrous results. In this talk I will first briefly outline Evans' account, and will then briefly describe Clark's and Campbell's accounts. I will then offer a brief outline of my own proposal, which can be understood as a friendly amendment of Evans' original project.

Lilia Gurova
Models as Inferential Machines
Although the currently popular views on scientific models have made divergent claims about the proper role of models in science, they seem to agree that models are in the main representational entities. The view that will be defended in this paper does not deny the representational properties of models; nevertheless, it is non-representationist in an important sense: it builds on the idea that what makes models useful in science is not their representational adequacy but rather their inferential power. The view of models as inferential machines will be demonstrated with examples taken from physics, statistics, and cognitive science.

Fred Kronz
Non-Monotonic Probability Theory for EPR-Correlated Quantum Systems
In previous work, I formulated and then used a non-monotonic theory of probability to systematize simple interference experiments for two-state quantum systems, and later generalized these results to n-state systems (for any n). This investigation shows how a pair of EPR-correlated two-state systems may be systematized within this framework. After presenting that material, the main result of the investigation, the discussion focuses on conditional probabilities and their relevance for interpretive matters.

Marion Ledwig
Common Sense and Folk Psychology
With regard to the question of what folk psychology can do: it is much easier to predict and retrodict events and behavior if one has an explanation for an event or a behavior. So a functional folk psychology would do well to include an explanatory function. Because folk psychology doesn't have to be correct all the time in order to be successful, its laws and generalizations don't have to be especially precise. And because folk psychology only has to be correct in most cases, it also doesn't have to give an account of irrational or abnormal cases.

James G. Lennox
Functions and History
In her classic ‘In Defense of Proper Functions’, Ruth Garrett Millikan claims to defend the idea that the concept of ‘proper function’ “looks to the history of an item to determine its function rather than to the item’s present properties or dispositions.”  This paper looks at a standard historical objection to the Millikan account and the standard dismissal of that objection.  This debate, I argue, rests on a misunderstanding of the nature of scientific concepts.  What is needed is a phylogenetic analysis of concepts such as ‘function’ that allows us to see what the relevance of the history of science is in evaluating Millikan’s argument.  What has been missing in this debate, ironically, is some actual history.

Gerald J. Massey
Some Reflections on the Duhem-Quine Thesis
Pierre Duhem argued, in logical and historical detail, that no theoretical hypothesis can be empirically falsified.  After W.V.O. Quine had allegedly extended Duhem’s reasoning to all statements without exception, the resulting form of Duhem’s claim became known as the Duhem-Quine or (better) the Quine-Duhem thesis.  I am going to take a close look at Duhem’s original thesis and the reasoning behind it, and then at Quine’s alleged generalization.  I will show that Duhem himself resisted the two principal moves made by Quine.  I will also show that Duhem’s original thesis is more nuanced than traditionally recognized, and that the so-called strong version of his thesis, which Adolf Grünbaum insists has never been substantiated, is readily proved.  I trace the neglect of proofs like this one to the robustly empirical mind, which views the free-wheeling exertions of the logico-mathematical mind with distaste and disdain.
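
The logical core of Duhem's claim can be put in a standard schematic form (a textbook gloss, not Massey's own reconstruction): a theoretical hypothesis H yields an observable prediction O only together with auxiliary assumptions A, so a failed prediction refutes the conjunction rather than H itself:

\[ (H \wedge A) \rightarrow O, \quad \neg O \;\vdash\; \neg(H \wedge A), \qquad \text{and} \qquad \neg(H \wedge A) \equiv \neg H \vee \neg A. \]

Nothing in the logic singles out H for rejection, which is what blocks straightforward empirical falsification of H alone.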

Storrs McCall
The Consistency of Arithmetic
"God exists because mathematics is consistent, and the Devil exists because we can't prove it".     (Hermann Weyl)

Is Peano arithmetic (PA) consistent?  This paper contains a proof that it is: a proof, moreover, that does not consist in deducing its consistency as a theorem in a system with axioms and rules of inference.  Gödel's second incompleteness theorem states that the consistency of PA cannot be proved in PA, and to deduce its consistency in some stronger system PA+ that includes PA is self-defeating, since if PA+ is itself inconsistent, the proof of PA's consistency is worthless.  In an inconsistent system everything is deducible, and therefore nothing is provable.  If there is to be a genuine proof of PA's consistency, it cannot be a proof relative to the consistency of some other, stronger system, but an absolute proof.  The paper contains a consistency proof based on semantics, not on pure syntax as in axiomatic theorem-proving.  To the author's knowledge, no absolute proof of the consistency of PA has previously been devised.
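
For reference, the background result in standard notation (not the paper's own formalism): writing \(\mathrm{Con}(\mathrm{PA})\) for the arithmetized consistency statement, Gödel's second incompleteness theorem says

\[ \text{if PA is consistent, then } \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}), \]

and in an inconsistent theory everything is deducible (ex falso quodlibet): if \( T \vdash \bot \), then \( T \vdash \varphi \) for every sentence \(\varphi\) -- which is why, as the abstract notes, nothing is genuinely provable in an inconsistent system.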

Nikolay Milkov
Reichenbach’s Method of Analysis of Science
Reichenbach’s Method of Analysis of Science (die wissenschaftsanalytische Methode), launched in 1920, aimed to replace Kant’s analysis of reason with an analysis of the new scientific theories. The task of the “analyst of science” is to set out the true (“objective”) foundations of scientific theories, eliminating any subjective element in which they were initially formulated. We are going to show that Reichenbach’s method was not as anti-Kantian as he claimed. In particular, his fundamental “definitions of correspondence” can be comfortably interpreted as a successor to Kant’s dictum that all theoretical sciences are based on synthetic a priori judgments.

Dan Nesher
Is There Scientific Genius? Criticism of the Kantian Dichotomy between an Artist–genius’s Productive Imagination Freely Creating Fine Arts and a Scientist Following Mechanical Rules That Determine the Formation of Theories
. . . genius is a talent of art, not of science, where we must start from distinctly known rules that determine the procedure we must use in it (Kant, Critique of Judgment: 317).

I.   Introduction: Kant’s Epistemic Sources of the Dichotomy Between Art and Science in the Distinction Between Theoretical [Logical] and Aesthetic [Reflective] Judgments.

II.  Kant’s Dichotomy Between Artistic Aesthetic Reflection and Scientistic Theoretical Formalism Is Elaborated in the Disputing Myths of the Phenomenological-hermeneutic “Artism” and the Analytic Formal Semantic “Scientism.”

III.  The Kantian Dichotomy Between the Artistic-genius’s Productive Imagination in Creating Fine Arts and the Scientist’s Mechanical Rules in Forming Theories Is Due to the Absence of the Function of Abductive Logic in Discovering New Aesthetic and Scientific Ideas.

IV.  The Pragmaticist Criticism of the Kantian Epistemology and of the Ensuing Tradition of the Conception of Genius: The Spinozist Conception of Imaginative Free Play, Though in Accordance with Epistemic Logic Rules.

V. The Difference Between an Artistic-genius’s Creation of New Exemplary Artworks and the Scientist-genius’s Discovery of a New Theoretical Point of View Is in the Distinctive Functions of Their “Productive Imaginations” in Constructing Their Distinctive Modes of Representing Reality.

VI.  The Function of “Productive Imagination” in the Scientific Discovery of a New Synthesis in Reconstructing the Available Scientific Knowledge into a Hypothesis to Represent Reality: The Revolutionary Function of Thought Experiments in Exposing or Displaying the New Hypotheses, Darwin and Einstein.

VII.  Free Creation and the Conceptions of the Artist and Scientist Geniuses as a Combination of Intensive Experience in the Field and an Uncompromising Character Devoted to New Discoveries and to Revolutionizing It.

VIII.  Criteria of Creativity: Degrees of Creativity in Art and Science as Opposed to Kant’s Distinction Between Artistic Genius as Exemplary “Genial Talent” and Ordinary “Imitative Talent,” and Kuhn’s Distinction Between “Revolutionary” and “Normal” Scientists.

IX.   Conclusion: The Creative Works of Geniuses in Fine Art and Science Are Different Modes of Representing Reality: “Aesthetically” and “Logically.”

John D. Norton
How Hume and Mach Helped Einstein Find Special Relativity
In recounting his discovery of special relativity, Einstein recalled a debt to the philosophical writings of Hume and Mach. I review the path Einstein took to special relativity and urge that, at a critical juncture, he was aided decisively not by any specific doctrine of space and time, but by a general account of concepts that Einstein found in Hume and Mach’s writings. That account required that concepts, used to represent the physical, must be properly grounded in experience. In so far as they extended beyond that grounding, they were fictional and to be abjured (Mach) or at best tolerated (Hume).

Einstein drew a different moral. These fictional concepts revealed an arbitrariness in our physical theorizing and may still be introduced through freely chosen definitions, as long as these definitions do not commit us to false presumptions. After years of failed efforts to conform electrodynamics to the principle of relativity and with his frustration mounting, Einstein applied this account to the concept of simultaneity. The resulting definition of simultaneity provided the reconceptualization that solved the problem in electrodynamics and led directly to the special theory of relativity.

Antigone M. Nounou
From Objects to Structure and Back
Pleased with epistemic structural realism’s responses to the problems of ontological discontinuity and pessimistic meta-induction, but discontented with the idea of epistemologically inaccessible objects whose nature remains problematic, ontic structural realists go as far as to eliminate objects from their account and assert that structure is not only all there is to know, but, in fact, structure is all there really is. At least two questions arise, though. If we accepted the ontic structural realists’ aphorism, could we identify the “real structure” in the context of high-energy physics? And if we did, could we retrieve the objects of the usual scientific discourse?

Wendy Parker
What Does It Mean When Climate Models Agree?
In light of broad scientific consensus regarding the detection and attribution of climate change, decision makers in government and industry are seeking trustworthy quantitative information about how climate will change over the next century. Climate scientists are aiming to provide this information using collections or "ensembles" of computer simulation models of the climate system. For some variables and lead times, the models in climate ensembles are in broad agreement in their predictions. Is there something special about such "robust" results? Should they be considered particularly trustworthy? I argue that although robust modeling results can, in conjunction with one or more assumptions about model adequacy, warrant belief in scientific hypotheses of interest, there is not yet good reason to think those adequacy conditions are met when it comes to predictions of interest from today's ensembles. This in no way implies that ensemble climate prediction studies are not valuable tools for understanding and decision support, for reasons to be discussed.

Laura Perini
Images and Abstraction in Biology
Two issues—abstraction in science, and the use of diagrams in biology—have drawn the attention of philosophers of science.  They have not, however, been explicitly connected: little is known about how visual representations, including visually abstract figures like diagrams, are involved in scientific abstraction.  I examine how abstraction is involved in reasoning with visual representations from contemporary biology, by looking at cases involving both abstract styles (diagrams) as well as non-diagrammatic images from the opposite end of the style spectrum, such as photographs and electron micrographs.  The results show that images are indeed involved in abstractive reasoning, and reveal a variety of ways in which figures contribute to abstraction in science.

Jessica Pfeifer
Lewis, Laws, and Experimentation
Humean accounts of laws have long drawn criticism for failing to account adequately for unrealized possibilities.  This paper focuses on Woodward’s recent criticism of Lewis’s Best Systems Account for making laws dependent on whether or not we perform certain experiments, which he briefly discusses in Making Things Happen.  In the paper, I develop Woodward’s argument further, discuss how Lewis might respond to various versions of the argument, and consider whether these responses are successful.  I argue that certain versions of the argument merely highlight counterintuitive views that Lewis already acknowledges in other contexts, while other versions raise serious doubts about the tenability of Lewis’s account of laws.

Joseph C. Pitt
The Role of Metaphor in the Discussion of Nanotechnology
New ideas can be troubling in a variety of ways.  In many cases we understand that they pose a challenge to the status quo, but we are not really sure what that challenge is.  Nor do we know how to judge the claims, positive and negative, made by proponents and detractors. The proponents of new ideas have often employed metaphors to make the unfamiliar appear less threatening.  Metaphors are used to explain the unfamiliar by appealing to the familiar and drawing connections between them in ways thought to illuminate and demystify the unfamiliar.  In this presentation, I look at the use of metaphor in two different but oddly similar cases to see if it is successful in explaining the new and making us feel better about it. The first case is Galileo’s appeal to both geometry and the Venetian water barges to develop a theory of the tides that, in turn, can only be explained by appealing to the motion of the earth.  The second is Richard Smalley’s attack on the feasibility of nanobots.  As we shall see, both arguments are strikingly similar in form. We know Galileo’s argument fails – but what about Smalley’s?

Demetris P. Portides
Idealization and Abstraction in Scientific Modeling
In this paper I explicate how de-idealization, or concretization, processes operate in the construction of both theory-driven and phenomenological models. I contrast this approach with the structuralist construal of de-idealization, and thus with the idea that models represent by isomorphisms or partial isomorphisms in virtue of being mathematical structures. I also argue that the model of de-idealization I propose is compatible with the conception of scientific models as partially autonomous entities that result from a complex amalgamation of theory with conceptual ingredients deriving from auxiliaries and semi-empirical results, ingredients that cannot be clearly distinguished within the representational device, i.e. the model.

Hernán Pringe
Bohrian Complementarity and Kantian Teleology
We argue that the consideration of the analogies that Bohr draws concerning the epistemological situation in biology and in physics enables a better understanding of Bohr's interpretation of quantum theory. In particular, we show the Kantian character of Bohr's views on biology. On this basis, we assess what we can learn about complementarity in the quantum realm from a consideration of Kant's views on teleology in biology.

Athanasios Raftopoulos
Ambiguous Figures in Philosophy of Psychology
Ambiguous figures present a challenge to the philosophy of psychology. Macpherson (2006) argues that the square/regular-diamond figure threatens representationalism, which holds that the phenomenal character of experience is either identical to, or supervenes on, the nonconceptual content of experience (NCC). Her argument is that representationalism is committed to the thesis that differences in the phenomenal experience of ambiguous figures should be explained by differences in the NCC of perception. However, with respect to the square/regular-diamond figure such differences do not exist. The aim of this paper is to examine this challenge and show that representationalism can account for the experience of ambiguous figures.

Nicholas Rescher
From the Spree to the Ohio: Philosophy of Science Crosses the Atlantic
My talk will give a short account of how Pittsburgh’s philosophy of science effort is indebted to and arises from the “School of Berlin.”

Victor Rodriguez
Interactions Between Simulation and Measurement
In recent decades, computational simulations have changed experimental activity in several scientific disciplines. At the same time, high-precision measurements are continuously producing new ways of thinking about topics in diverse areas of research, particularly in physics and related subjects. Both practices are having a remarkable influence on the design of new experiments and on the emergence of new concepts and methods. Both also have very complex features today. It is argued in this paper that the two practices are coming into contact in interesting ways, and that this development deserves further analysis from the point of view of the philosophy of science. Some steps in this direction are presented.

Stéphanie Ruphy
Are Stellar Kinds Natural Kinds? A Challenging Newcomer in the Monism/Pluralism and Realism/Antirealism Debates
Stars are conspicuously absent from reflections on natural kinds and classifications, with gold, tiger, jade, and water getting all the philosophical attention. This is too bad, for, as this paper will demonstrate, interesting philosophical lessons can be drawn from stellar taxonomy as regards two central, ongoing debates about natural kinds, to wit, the monism/pluralism debate and the realism/antirealism debate. I show in particular that stellar kinds will not please the essentialist monist, nor, for that matter, will they please the pluralist embracing promiscuous realism à la Dupré. I conclude on a more general note by questioning the relationship between taxonomic scientific practice and philosophical doctrines of natural kinds.

Edward Slowik
Newton’s Metaphysics of Space: A “Tertium Quid” Betwixt Substantivalism and Relationism, or Merely a “God of the (Rational Mechanical) Gaps”?
This presentation investigates the question whether, and the degree to which, Newton’s theory of space constitutes a third way between the traditional substantivalist and relationist ontologies, i.e., whether Newton judged that space is neither a type of substance/entity nor purely a relation among such substances. A non-substantivalist reading of Newton has been famously defended by Howard Stein, among others; but, as will be demonstrated, these claims are problematic on various grounds, especially as regards Newton’s alleged rejection of the traditional substance/accident dichotomy concerning space. Nevertheless, our analysis of the metaphysical foundations of Newton’s spatial theory will strive to uncover its unique and innovative characteristics, most notably the distinctive role that Newton’s “immaterialist” spatial ontology plays in his dynamics.

Daniel Steel
Bayes Nets and Nanotech: A Chain Graph Framework for Extrapolation
My recent (2008) book, Across the Boundaries: Extrapolation in Biology and Social Science, develops a mechanisms-based account of extrapolation that relies on a notion called comparative process tracing.  Comparative process tracing is a strategy for justifying a model as a basis for extrapolation when available information about the target mechanism is limited.  Two strategies for minimizing the required information about the mechanism in the target are proposed.  First, it is not necessary to compare the model and target mechanisms at points where, given background knowledge, similarities are very probable.  Secondly, it may be possible to restrict attention to later, or downstream, stages of the mechanism upon which any relevant upstream differences must leave their mark.  This essay extends and further develops these ideas by means of a framework based on chain graphs.  In a chain graph, nodes may be linked by single-headed arrows or by lines without arrowheads (so-called “undirected edges”).  Undirected edges are useful for representing symmetrical relationships, and are used here to represent likely similarities between the model and target mechanisms.  The nodes in these chain graphs represent the presence or absence of a causal link in either the model or the target.  Finally, an arrow from one node to a second indicates that the causal relationship in the first is potentially a mediating link in the second.  Given this setup, extrapolation is closely analogous to Bayesian network accounts of diagnosis.  In diagnosis, one assumes a known directed acyclic graph (DAG) with an associated probability distribution, and an individual for whom some but not all of the values of the variables are known.  In such contexts, it is possible to compute a posterior probability distribution over the unobserved variables given the values of those that have been observed.  Likewise, extrapolation would begin with a chain graph and an associated probability distribution, along with knowledge of some of the causal relationships represented by the nodes of the graph.  From this, one would compute a probability distribution over the unknown causal relationships given the known ones.  This essay presents the chain graph framework for extrapolation and explains how it explicates and extends comparative process tracing, using an example drawn from nanotoxicology, a newly emerging field in which extrapolation is an especially pressing methodological problem.
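
The diagnostic analogy can be made concrete with a toy computation - an illustrative sketch only, not Steel's framework; the network structure, variable names, and probability values are all hypothetical. In a three-variable chain A -> B -> C, observing C and marginalizing out B yields a posterior over the unobserved A, just as the proposed framework computes a distribution over unknown causal relationships given known ones:

    # Illustrative sketch only: posterior inference by enumeration in a
    # hypothetical Bayesian network A -> B -> C (all numbers made up).
    P_A = {True: 0.3, False: 0.7}           # P(A)
    P_B = {True: {True: 0.9, False: 0.1},   # P(B | A), outer key is A
           False: {True: 0.2, False: 0.8}}
    P_C = {True: {True: 0.8, False: 0.2},   # P(C | B), outer key is B
           False: {True: 0.1, False: 0.9}}

    def joint(a, b, c):
        # Joint probability P(A=a, B=b, C=c) factored along the chain.
        return P_A[a] * P_B[a][b] * P_C[b][c]

    def posterior_a(c_obs):
        # P(A | C=c_obs): condition on the observed C, marginalize out B.
        w = {a: sum(joint(a, b, c_obs) for b in (True, False))
             for a in (True, False)}
        z = sum(w.values())
        return {a: wa / z for a, wa in w.items()}

    print(posterior_a(True))  # approx. {True: 0.566, False: 0.434}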

Neil Tennant
Core Logic
Core logic consists of only the introduction and elimination rules for the logical operators, with conditions for discharge of assumptions designed to ensure relevance of premises to conclusions. Core proofs are in normal form; they are both relevant and constructive (intuitionistic). It will be shown how core logic provides all the transitivity needed for the methodological demands placed on logic, and how it relates to both intuitionistic and classical logic.
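
By way of orientation, here are the familiar natural-deduction introduction and elimination rules for the conditional, in standard notation (core logic keeps rules of this shape, but with discharge conditions more restrictive than the classical ones, so as to secure relevance):

\[
\frac{\begin{array}{c}[A]\\ \vdots \\ B\end{array}}{A \to B}\;(\to\text{I})
\qquad\qquad
\frac{A \to B \qquad A}{B}\;(\to\text{E})
\]

In core proofs the discharged assumption must actually have been used in deriving the conclusion; vacuous discharge, which classically licenses irrelevant conditionals, is disallowed.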

Barbara Tuchańska
Historical Contingency and the Stability of Demarcation. From Protoscience to Pseudoscience: The Case of Astrology
The problem of the historical nature of the criteria for demarcating science from protoscience and pseudoscience is discussed with astrology as an example. At the beginning of the 17th century, astrology was a protoscience, but it failed to become a science because it remained outside four important processes which constitute the rise of science: the ‘disenchantment’ of nature, the move from ‘mathematics of the occult’ to ‘mathematics of measurement and solving equations’, the establishment of the natural sciences as the basis of the applied sciences, and the separation of the natural from the human sciences. In the 20th century astrology counts as a pseudoscience, despite the fact that scientific standards have been dramatically weakened.

C. Kenneth Waters
Beyond Theoretical Reduction and Layer-Cake Antireduction: How DNA Retooled Genetics and Transformed Biological Practice
Watson and Crick’s discovery of the structure of DNA led to developments that transformed many biological sciences. But what were the relevant developments and how did they transform biology? Philosophical discussion concerning this question can be organized around two opposing views: theoretical reductionism and layer-cake antireductionism. I challenge both views by arguing that (a) biology is not organized into separate levels and (b) biology is driven by investigative practices. I then show how a recasting of the basic theory of genetics made it possible to retool the methodologies of genetics. It was the investigative power of these retooled methodologies, not the explanatory power of a gene-based theory, that transformed biology. 

Brad Wilson
On the Priority of Models in Ecology
The relationship between theories and models in science has been a vexing problem for some time.  Much of the discussion of the relation between theories and models has taken place against the background of physics (Cartwright, van Fraassen, Giere, Morrison).  At the other end of the scientific spectrum, in ecology, similar issues arise in regard to understanding what a theory is and what the relations are between a theory and its models.  I want to focus on mathematical models in population ecology to help understand the role of theories in ecology.  I will argue that the concept of a theory plays a minor role in ecology, primarily as a way of referring informally to a related set of ecological concepts that are expressed in mathematical models.

Jan Wolenski
Philosophy of Science in Poland in the 20th Century
Philosophy of science played a prominent role in Polish philosophical thought in the 20th century. Most scholars doing philosophy of science in the years 1900-1939 belonged to the Lvov-Warsaw School, but such thinkers as Ludwik Fleck, Joachim Metallmann and Leon Chwistek should also be mentioned. Jan Łukasiewicz (before 1920), Kazimierz Ajdukiewicz, Tadeusz Kotarbiński, Zygmunt Zawirski, Tadeusz Czeżowski, Henryk Mehlberg, Dina Sztejnbarg (Janina Kotarbińska), Edward Poznański, Aleksander Wundheiler, Janina Hosiasson and Maria Kokoszyńska were the leading philosophers of science in the Lvov-Warsaw School; Alfred Tarski also did some work in this field, although his main achievements belong to mathematical logic. These philosophers did not create a common general theory of science, but rather concentrated on particular problems. The spectrum of topics was very wide and comprised the classification of scientific reasoning, the structure of theories, justification (in particular, induction), and philosophical problems of the particular sciences, the natural sciences as well as the humanities. Due to the Polish school of logic, formal methods were used extensively. Most approaches were realistic, but the contrary view was advanced by Fleck and Chwistek. Although Polish philosophy was dominated by Marxism after 1945, it largely preserved its pluralistic character. Thus, philosophy of science in this period was a continuation of the earlier work done in Poland.

António Zilhão
What Is There in Common Between Probability Matching and the Sunk Cost Effect?
What is there in common between probability matching and the sunk cost effect? There's a trivial answer to this question: both are typically viewed as distinctive forms of irrational human behaviour. In my paper, I'll contend that, besides this trivial commonality, there is another, far less trivial, one. I'll contend, namely, that they are both manifestations of the presence of deep-seated heuristics in our cognitive make-up, the rationale of which makes, under the appropriate circumstances, good evolutionary sense. I'll contend further that a correct understanding of these cognitive phenomena provides us with the key to solving an age-old philosophical problem - the problem of akrasia.
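
For readers unfamiliar with the first phenomenon, a standard illustration of why probability matching is usually classed as irrational (textbook background, not part of the paper's argument): if one option pays off with probability \( p > \tfrac{1}{2} \) and the alternative with probability \( 1-p \), matching one's choice frequencies to the payoff probabilities yields expected accuracy

\[ p \cdot p + (1-p)(1-p) \;<\; p, \]

whereas always choosing the better option yields \( p \). For \( p = 0.7 \), matching gives \( 0.49 + 0.09 = 0.58 \), against \( 0.7 \) for maximizing.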
