
Pitt-Paris II
Emergence and Reduction in the Sciences

Abstracts

Kenneth Aizawa, Centenary College of Louisiana  
The Autonomy of Psychology in the Age of Neuroscience

Suppose that scientists discover a high-level property that is prima facie multiply realized by two distinct sets of lower-level properties.  In response to this situation, they could take things at face value and conclude that the higher-level property is in fact so multiply realized.  Another response, however, would be for scientists to reject the existence of the multiply realized property, postulate instead two subtypes of the higher-level property, and hold that each is uniquely realized.  By adopting this latter course, scientists can block the multiple realization of a property by subtyping it.
Clearly these are two logically possible responses to this type of situation, so when it actually arises, how do scientists respond?  This paper will address this question through a case study of the components of the eye that realize normal human color vision.  Three distinct sets of lower-level properties will be discussed: the optical density of the crystalline lens and macular pigment, the properties of the photoreceptors, and the properties of the phototransduction biochemical cascade.  Although there are clear cases in which scientists subtype higher-level properties in terms of their lower-level realizers, the discovery of the foregoing properties is a clear case in which they do not.  Normal color vision is not rejected and is not subtyped.  
Rather than always subtyping normal color vision by way of its realizers, vision scientists adopt one of two other courses of action.  In some cases, such as those of the optical density of the lens and macular pigment and the differences among photoreceptors, scientists use the differences in realizers to explain individual differences.  In other cases, such as the properties of the phototransduction biochemical cascade, they do not associate the lower-level properties with the higher-level property at all.  The properties of the biochemical cascade are in a sense “orthogonal” to the color discriminations one makes.  Thus, how scientists respond to the discovery of differences among lower-level realizers depends on the structure of the higher-level theory of which the realized properties are a part.  This, however, constitutes a kind of autonomy of psychology in the age of neuroscience.


Craig Callender, University of California, San Diego  
Jonathan Cohen, University of California, San Diego  
Special Sciences, Conspiracy and the Better Best System Account of Lawhood

An important obstacle to lawhood in the special sciences is the worry that such laws would require metaphysically extravagant conspiracies among fundamental particles. If, for example, Malthus's Law of ecology really is a law, and if the rabbits whose reproduction rates it describes in some way supervene on more fundamental kinds, then it can seem highly mysterious that the fundamental rabbit-constituting objects end up moving, projectibly, in ways consistent with Malthus's Law. How, short of conspiracy, is this possible?

In this paper we'll review a number of strategies for solving the conspiracy problem, i.e., for allowing for the projectibility of special science generalizations without positing outlandish conspiracies. Some of these strategies turn on accepting accounts of laws (e.g., non-Humean pluralist and classical MRL theories) on which the problem of conspiracy cannot arise in the first place. Alas, we'll argue, these accounts do less than we should want a theory of laws to do, and so should be rejected. Next we'll take on a recent strategy, due to Albert and Loewer, that aims to solve the conspiracy problem by treating special science regularities as probabilistic corollaries of statistical postulates over low-level initial conditions. We'll argue that, while this view may obviate conspiracy, it rests on unrealistic and insufficiently general assumptions about the relation between special and fundamental sciences.

Finally, we'll consider the conspiracy problem through the lens of our preferred view of laws, an elaboration of the MRL view that we call the Better Best System (BBS) theory. BBS offers a picture on which, although all events supervene on a fundamental level, there is no one unique locus of projectibility; rather there are a large number of loci corresponding to the different areas (ecology, economics, solid-state chemistry, etc.) in which there are simple and strong generalizations to be made. While we expect that some amount of conspiracy-fear-inducing special science projectibility is inevitable given BBS, we'll argue that this is unobjectionable. It follows from BBS that the laws of any particular special or fundamental science amount to a proper subset of the laws. From this vantage point, the existence of projectible special science generalizations not guaranteed by the fundamental laws is not an occasion for conspiracy fantasies, but a predictable fact of life in a complex world.


Jonathan Davies, Fondazione Bruno Kessler  
Calculating and Constructing Emergence from the Bottom Up in Systems and Synthetic Biology  

Recent philosophical discussions of emergence – prompted, at least in part, by developments in biology – have attempted to formulate notions that are compatible with mechanistic explanations of complex phenomena. Objections to accounts of emergence can be based on a number of considerations, ranging from their triviality to the mysteriousness of emergent phenomena or the anti-scientific character of the explanations. Weaker versions do not seem to account for ontological novelty, whereas stronger versions seem to rule out the possibility of explaining the emergent phenomenon and its relationship to its “lower-level” components. I will endorse a version of emergence based on scope, scale and resolution [1], and argue that understanding emergence in this way not only avoids some of the more worrisome metaphysical problems associated with levels-based ontologies but also allows us to understand how strong (ontological) emergence can be mechanistically explained and even engineered. This approach focuses on the spatio-temporal distribution of causal factors in the production of an emergent property or entity. I will apply this analysis to the in silico modelling and the synthesis of biological systems in systems and synthetic biology (SB and synbio).

SB is often classified into two distinct streams: bottom-up, or pragmatic, SB, which emphasises the large-scale modelling of molecular interactions based on data from genomics and molecular biology, and the systems-theoretic stream, which concentrates on the use of organisational principles in the modelling of biological systems [2]. The latter has been identified with the attempt to model emergent, system-level properties, and the former is often thought to be continuous with the reductionistic approaches of molecular biology and genetics. I will argue, however, that bottom-up SB, although rooted in molecular biology, is not wedded to a reductionist metaphysics. Explanations in SB – bottom-up as well as top-down – focus on the (often non-linear) interactions between multiple components that combine to produce in silico systemic behaviours or properties that could not be predicted from an understanding of the intrinsic (i.e. context-independent) properties of the components. This amounts, I argue, to the mechanistic explanation of strongly emergent properties.

Synbio is usually identified with the biobrick group at MIT. This group emphasises the effort to create functionally discrete, modular and interchangeable components made from engineered gene circuits. However, other synbio practitioners focus much more on cellular context and the self-organizing capacities of cell components [3]. I will argue that, while the biobrick approach attempts to do away with emergent properties in the attempt to reliably construct biological systems, efforts are underway to formulate engineering principles and techniques that can accommodate emergent properties arising from the heterogeneity, complexity and non-linear interactions of components that underlie biological systems. These efforts, I argue, amount to attempts to engineer emergent entities and properties. There is genuine ontological novelty arising out of the interactions between the components in organisms, and any successful model, or synthesis, of biological systems must capture this novelty.

Although in their infancy, SB and synbio are working towards mechanistically explicable emergent properties and entities which are being generated from the bottom up.

Indicative Bibliography  
[1] Ryan A.J. (2007) “Emergence is coupled to scope, not level.” Complexity 13(2). DOI 10.1002/cplx.20203
[2] O’Malley M.A. & Dupré J. (2005) “Fundamental Issues in Systems Biology.” BioEssays 27, 1270-1276
[3] O’Malley M.A., Powell A., Davies J.F., Calvert J. (2008) “Knowledge-making distinctions in synthetic biology.” BioEssays 30, 57-65


Jacques Dubucs, Institut d'Histoire et de Philosophie des Sciences et des Techniques, Paris  
Downhill Synthesis?

A quarter of a century ago, in his influential book Vehicles (1984), V. Braitenberg proposed a series of agents or “vehicles” of increasing complexity. These artefacts, in fact rather rudimentary neural networks coupling sensors and motors, behave in more and more complex ways, so that by the end of the series an observer can hardly refrain from saying that they behave as if they were motivated by various feelings such as aggressiveness, fear, love or hate. From these thought experiments, Braitenberg draws his famous “law of uphill analysis and downhill synthesis”:

It is much more difficult to start from the outside and try to guess internal structure just from the observation of the data (...) Analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path. A psychological consequence of this is the following: when we analyze a mechanism, we tend to overestimate its complexity. [Braitenberg, 1984, p. 23]

I will argue that Braitenberg’s law is ambiguous between

(i) Simple mechanisms are enough to generate complex behavior

(ii) To know the simple mechanisms a given system consists of is enough to deduce or anticipate the behavior of the system

The first thesis is correct, but not the second, for there are simple systems such that one can do no better, in anticipating their behavior, than to follow or simulate their evolution step by step. I will try to give a rigorous sense to the idea that, in such situations, no "jump" to remote future states of the system is possible.
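
To make thesis (i) concrete, the following minimal Python sketch, loosely in the spirit of Braitenberg's "aggressive" vehicle and not taken from the book, wires two light sensors crosswise to two motors and nothing more; the resulting trajectory nonetheless looks like goal-directed pursuit of the light source. All parameter values here are invented for illustration.

import math

def intensity(p, source):
    # Light intensity falling off with squared distance (illustrative model).
    return 1.0 / (1.0 + (p[0] - source[0]) ** 2 + (p[1] - source[1]) ** 2)

def step(x, y, heading, source, dt=0.1, wheelbase=0.1):
    # Two sensors mounted ahead-left and ahead-right; crossed excitatory wiring
    # (left sensor drives right wheel and vice versa) makes the vehicle turn
    # toward the source and speed up as the light gets stronger.
    left = (x + 0.2 * math.cos(heading + 0.5), y + 0.2 * math.sin(heading + 0.5))
    right = (x + 0.2 * math.cos(heading - 0.5), y + 0.2 * math.sin(heading - 0.5))
    v_left, v_right = intensity(right, source), intensity(left, source)
    speed = (v_left + v_right) / 2.0
    heading += (v_right - v_left) / wheelbase * dt   # differential-drive turning
    return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

if __name__ == "__main__":
    x, y, heading, source = 0.0, 0.0, 0.0, (3.0, 2.0)
    for _ in range(3000):
        x, y, heading = step(x, y, heading, source)
    # Purely reactive sensor-motor coupling, yet the vehicle ends up circling
    # close to the light source, as if it "wanted" to reach it.
    print(f"final distance to source: {math.dist((x, y), source):.2f}")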

[Braitenberg, 1984] Vittorino Braitenberg. Vehicles: Experiments in Synthetic Psychology. Bradford Books. MIT Press, 1984.


Laura Franklin-Hall, New York University - CANCELLED
Emergent Dynamics in Developmental Systems  

This paper argues that an examination of computational models used in contemporary developmental biology can shed light on classic philosophical questions about reductionism and emergence.  I suggest that explanations drawn from such models, despite frequent reference to the molecular level, do not vindicate reductionism.  Instead, they show that developmental systems are emergent, but ironically, emergent in a way that is only apparent once all the “lower level” molecular details are uncovered.    

Traditional explanations in developmental biology called upon mysterious, non-physical entities like “fields” or “final ends.”  These explanations have rightly been replaced:  fields have been traded in for molecular gradients, final ends have been naturalized or banished altogether.  These new models, some of which purport to “compute the embryo,” appear to be exactly what philosophical reductionists have long anticipated: accounts of development that appeal only to molecular concentrations and rules governing their transformations.  

But on closer examination, we find something surprising: parameters that govern molecular interactions are often omitted from the computational models, including parameters that seem crucial to understanding network dynamics (e.g., the binding strengths of molecules and kinetic properties of reactions), and descriptions are highly abstract.  And yet, despite these omissions and abstractions, the models are very predictive and explanatory.  Is this a sign that reductionism is wrong?

To avoid that conclusion, philosophers have proposed two reasons why our best models might omit important lower level details.  First, explanatory protagoreans hold that detail omission is to be explained by our own cognitive limitations: if all the details were included, we couldn’t understand anything.  Second, explanatory provisionalists see detail omission as an artifact of an incomplete science: ultimately all “black boxes” will be opened up and the molecular parameters introduced.

I offer an alternative explanation as to why these models can be successful while leaving out many important molecular parameters.  I argue that, depending on the kinds of control structures responsible for a developmental process, either more or less detail will be needed to explain that process. We’ll see that the temptation to label biological processes emergent springs from the fact that biological control structures, such as positive feedback loops, are commonly digital, even though the underlying chemistry is analogue.  For this reason, piling on details in developmental explanations yields decreasing marginal returns – or even none at all.  I sketch an account of the appropriate grain for an explanation which allows us to locate discontinuities in the relationship between explanans and explanandum. These discontinuities give us a way of identifying objectively autonomous – or perhaps emergent – levels in nature.    
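
The point about digital control riding on analogue chemistry can be illustrated with a deliberately simple toy model that is not drawn from the paper: a single gene that activates its own transcription through a cooperative (Hill-type) term obeys a smooth, analogue rate equation, yet every trajectory settles into one of just two expression states depending on which side of a threshold it starts. All parameter values below are invented for illustration.

# Toy bistable switch: dx/dt = basal + beta * x**n / (K**n + x**n) - gamma * x
# The kinetics are analogue (a smooth ODE), but the long-run behaviour is
# effectively digital: every trajectory ends at a "low" or a "high" state.

def steady_state(x0, steps=20000, dt=0.01, basal=0.05, beta=4.0, K=1.0, n=4, gamma=1.0):
    x = x0
    for _ in range(steps):                      # simple Euler integration
        production = basal + beta * x ** n / (K ** n + x ** n)
        x += dt * (production - gamma * x)
    return x

if __name__ == "__main__":
    for x0 in (0.1, 0.5, 0.9, 1.1, 2.0):
        print(f"initial level {x0:>3}: settles near {steady_state(x0):.2f}")
    # Initial levels below roughly 0.65 decay to ~0.05 ("off"); levels above it
    # are driven to ~4.0 ("on"); the intermediate analogue detail is washed out.

Which of the two attractors is reached, rather than the precise kinetic constants, is what the higher-level description tracks in a toy case like this.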

I conclude by relating my account of proper explanatory grain to other philosophical defenses of abstraction in scientific explanation, particularly the multiple-realizability (MR) approach.  I’ll show that my view captures what is right about MR, while not suffering from MR’s major shortcoming:  the inability to account for why a particular level of abstraction provides the best level for explanation.


Doreen Fraser, University of Waterloo  
Idealization and Renormalization Group Methods in Statistical Mechanics and Quantum Field Theory

Bob Batterman has argued that thermodynamic properties furnish a genuine example of emergent properties. This example has the nice feature that the relationship between the theories in question—thermodynamics (TD) and statistical mechanics (SM)—is known and can be clearly expressed in mathematical terms. Renormalization group (RG) methods are the mathematical formalism that is employed for this purpose; in general, this formalism is well-suited to the task of discerning cases of emergence and reduction because it relates theories at different scales. RG methods have also found applications in other areas of physics, most prominently in particle physics (i.e., quantum field theory). In the context of quantum field theory (QFT), the higher level theory (i.e., analogue of TD) is an “effective” QFT incorporating short distance cutoffs that, to a very good approximation, is empirically adequate at some low energy scale; the lower level theory (i.e., analogue of SM) is a continuum QFT without short-distance cutoffs that applies to arbitrarily high energy scales. Physicists and philosophers have claimed that effective QFTs describe emergent properties. This raises the question of whether an analogue of Batterman’s argument can be run for QFT. I will argue that the QFT analogue of Batterman’s argument does not go through. This question is of general interest to the study of emergence and reduction because it sheds light on the issue of whether the formal relationship between theories is sufficient to determine if a given case is one of emergence or reduction or whether the interpretation of the formal relationship is also required.

Batterman’s central claim is that “genuinely emergent properties, as opposed to ‘merely’ resultant properties depend on the existence of physical singularities” (The Devil in the Details, p. 125). In the TD-SM case, phase transitions are marked by physical discontinuities, which are represented mathematically by singularities. The application of RG methods to phase transitions requires the idealization of taking the thermodynamic limit, which entails that the volume of the system under consideration is infinite. The payoff is that the fixed points of the renormalization group predict and explain the occurrence of phase transitions. Singularities are also a feature of the QFT case. However, I will argue that the analogue of Batterman’s argument for QFT does not go through because both the explanatory goals and the idealizations invoked differ from the TD-SM case. In particular, in QFT the infinite limit is presumed not to be an idealization, since quantum fields are presumed to be defined over continuous, infinite space and have an infinite number of degrees of freedom.


Carl Gillett, Northern Illinois University  
A Whole Lot More from 'Nothing But': The Possibility of Strong Emergence and the Deeper Issues in the Sciences and Philosophy

Large numbers of scientific researchers continue to be ontological ‘reductionists’, but in philosophy ‘reductionism’ is still widely assumed to be a dead position. And, despite ever increasing numbers of self-proclaimed ‘emergentist’ scientists, most philosophers dismiss ‘emergentism’ as at best confused or at worst incoherent. One might not think this disconnect between scientists and philosophers is so problematic, but I intend to argue that it has damaged the scientific debates, leaving them without crucial theoretical aid, and has left philosophers missing the deeper issues, and the strongest positions, about the overall structure of nature and the sciences. My strategy in illuminating these claims is to vary our approach in two ways. First, I address the issues through a pursuit of the ‘metaphysics of science’, that is the abstract examination of ontological issues as they arise within the sciences and real scientific cases. And, second, I use this stance to follow the lead of the scientists to explore the arguments and positions these writers, in a variety of disciplines, have recently articulated about ‘reduction’ and ‘emergence’.
Scientific reductionists and scientific emergentists have recently clashed over a range of issues, but I contend that at the heart of their battles is a dispute over compositional concepts in the sciences and what they entail. On one side, scientific reductionists like the physicist Steven Weinberg (1994) argue that scientific composition means ‘Wholes are nothing but their parts’. On the other side, scientific emergentists, such as the physicists Philip Anderson (1972) and Robert Laughlin (2005), chemists like Ilya Prigogine (1995), and others, all claim that ‘Wholes are more than the sum of their parts’ and, just as importantly, that ‘Parts behave differently in wholes’. Building on these core ontological claims, both groups advance strikingly different accounts of the fundamental laws, the relations and importance of various sciences, the nature of scientific methodology and the “frontiers of science”.
Whether either side’s slogans make sense is an open question. But by looking at the scientific cases and explanations that drive such claims, I show both sides offer important, and philosophically overlooked, positions. I sketch the “compromising” ontological reduction espoused by writers like Weinberg, noting how it avoids philosophical critiques of ‘reduction’. But I focus primarily on showing that philosophers have missed a prominent form of scientific emergentism that allows us to make sense of the emergentist’s twin slogans. Drawing out the ideas of writers like Laughlin, I show that they defend a novel view of aggregation which reveals an overlooked form of determination and also establishes the possibility of ‘Strong’ emergent entities, i.e. composed entities that are none the less still efficacious in nature. I conclude the paper by showing how our work establishes that philosophers have embraced a false dichotomy about the available positions when we really have a trichotomy of options. I also note the deeper issues, about aggregation, the varieties of determination, and the character of components, that instead plausibly underpin debates in the sciences and philosophy.


Rafaela Hillerbrand, RWTH Aachen University  
Two Problems with Nagelian Type Reductions. How Thermodynamics May Not Be Reduced to Statistical Mechanics  

Addressing the question as to how one theory reduces to another reveals, on closer inspection, a plethora of unsettled questions. Likewise, all the examples of successful reduction mentioned in the literature on the history of science have been subjected to heavy doubts as to whether they indeed fulfill the criteria of reduction. These criteria are often equated with the ones given by E. Nagel (1974). I follow this usage and identify (intertheoretic) reduction roughly with Nagelian reduction.  

It has been shown that less formal sciences fail to meet Nagel’s criteria. In this paper it is argued that not even highly mathematized sciences fulfill some (weak) Nagelian criteria. I take the alleged reduction of thermodynamics to statistical mechanics as a case study because, despite various criticisms, this merging of two theories remains the paradigm of a successful reduction of a perceived phenomenological theory to a microscopic one. By choosing a highly mathematized science like physics, I hope to provide arguments that can be carried over to other, less formal sciences in a straightforward way.  

In particular, I want to point to two omissions in the classical account of intertheoretic reduction. Firstly, this paper contends that it is not a particular theory t that is reduced to another theory T: it is not theories that reduce or become reduced; rather, a concrete model of T can be related to a model of t in such a way that the connection between these models qualifies as a reduction. Only for concrete models does the notion of reduction make sense. This holds even for fields like physics where established theories exist.

Secondly, the common view of reduction focuses on the different descriptive entities appearing in the mathematical formulations of the theories t and T. For inhomogeneous reductions, these entities – the theories' furniture of the world, observational vocabulary stated in theoretical terms like temperature or pressure that take on specific numerical values – are correlated via so-called correspondence (or bridge) principles. I argue that directing one's attention solely to this type of descriptive vocabulary is not sufficient for a satisfactory reduction. Rather, correspondence rules are needed to bridge (part of) the formal relational properties of the theories involved as well. As C. Pincock (2007) notes, these properties belong to the broader class of abstract explanations appealing primarily to the "formal relational features of a physical system". I take the concept of quasi-static changes in thermodynamics to be such a relational property of the system under consideration. If thermodynamics indeed reduces to statistical mechanics, the concept of quasi-static changes has to be mapped (and indeed is mapped) onto a corresponding principle in statistical mechanics. This paper suggests a way to modify Nagelian-type reduction in order to overcome these problems.


Philippe Huneman, Institut d'Histoire et de Philosophie des Sciences et des Techniques, Paris
Computational Emergence, Robustness and Realism: Considering Ecology and Its Neutral Models  

Among the many properties likely to characterize emergence, such as novelty, irreducibility and unpredictability, computational accounts in terms of incompressibility (e.g. “weak emergence” sensu Bedau 2003) aim first at making sense of unpredictability. Those accounts have proved to be more objective than the usual accounts in terms of levels or mereology, which often face the objection of being too epistemic (Huneman, 2008). While defending a computational account of emergence, the present paper distinguishes the objective unpredictability proper to the elements of a system, which is compatible with determinism and entailed by emergence, from the various possibilities of predictability at emergent levels. One main possibility consists in predicting emergent patterns on the basis of sets of values of global parameters. However, provided that criteria of emergence are given in the context of modelling, one can always argue that the models so considered, in the context of which phenomena are said to be emergent, are not adequate, so that reality does not actually display emergence.

The second part of the paper addresses this “unnoticed inadequacy objection” by dealing with examples from ecology. Some authors (e.g. Sterelny 2006) cite emergent properties at the ecological level; I will begin by showing that some of those properties are computationally emergent (concurring with Grantham 2007). The rest of the paper will consider cases from the neutral theory of ecology (Hubbell 2001), in order to argue that comparative analysis of models provides a way to test whether emergent properties identified by ecological modelling are indeed ontologically grounded.

When ecologists compare neutral models (in which, basically, no selection is assumed to act among species) with niche-effect models, and find that predictions at the level of communities are the same, they can claim that only neutral processes are relevant to the resulting community-level pattern, whereas niche effects are just noise. In this sense, those analyses allow one to pick out the relevant parameters upon which the community-level pattern mainly relies. The robustness of the neutral models thus means that, even if the details of the interactions are not realistically represented, the genuinely relevant parameters have been correctly identified by such models. Whereas analytical models identify relevant causal entities through their variables, simulations in general may identify relevant causal parameters by such comparative analyses. The bulk of my argument against the “unnoticed inadequacy objection” is that when emergent patterns at the level of communities appear in a model, comparative robustness analysis (sensu Levins 1966; Weisberg 2006) can show both that those properties are ontologically emergent (because the model is robust) and that we have correctly picked out the relevant parameters, i.e. the equivalent of what would be variables in an equation. Neutral models are especially significant in this regard.
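
As a purely illustrative sketch of the kind of comparison described here (not one of the models discussed in the paper), the following Python toy runs a Hubbell-style zero-sum neutral community alongside a variant with weak fitness (niche) differences and reports the resulting community-level rank-abundance patterns; all sizes, rates and fitness values are invented.

import random

def community_drift(J=500, S=20, steps=30000, m=0.05, fitness=None, seed=0):
    # Zero-sum dynamics: at each step one individual dies and is replaced,
    # either by an immigrant (probability m) or by a local birth. With
    # fitness=None the model is neutral; otherwise local births are weighted
    # by per-species fitness (a crude stand-in for niche effects).
    rng = random.Random(seed)
    counts = [J // S] * S                       # even initial abundances
    fitness = fitness or [1.0] * S
    species = list(range(S))
    for _ in range(steps):
        dead = rng.choices(species, weights=counts)[0]
        counts[dead] -= 1
        if rng.random() < m:
            born = rng.randrange(S)             # immigration from the metacommunity
        else:
            born = rng.choices(species, weights=[c * f for c, f in zip(counts, fitness)])[0]
        counts[born] += 1
    return sorted(counts, reverse=True)         # rank-abundance distribution

if __name__ == "__main__":
    neutral = community_drift()
    niche = community_drift(fitness=[1.0 + 0.02 * i for i in range(20)], seed=1)
    print("neutral:", neutral)
    print("niche:  ", niche)
    # Comparing the two community-level patterns shows how much (or how little)
    # explanatory work the added niche differences do at that level, which is
    # the kind of model comparison the abstract describes.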

I conclude by considering how the recent wave of so-called neutral modelling in biology (in ecology, molecular biology, etc.) may bring interesting new material to the controversies about emergence.

References

Bedau M. (2003). Downward Causation and the Autonomy of Weak Emergence. Principia, Revista Internacional de Epistemologia, 6, 5-50.

Grantham T. (2007) Is macroevolution more than successive rounds of microevolution? Palaeontology. 50, 1: 75–85.

Hubbell, S.P. (2001). The Unified Neutral Theory of Biodiversity and Biogeography. Princeton: Princeton University Press.

Huneman P. (2008). Combinatorial vs. Computational Views of Emergence: Emergence Made Ontological? Philosophy of Science, 75, 595–607.

Levins, R. (1966). The Strategy of Model Building in Population Biology. (Reprinted in E. Sober (ed.), Conceptual Issues in Evolutionary Biology, pp. 18–27. Cambridge: MIT Press, 1987.)

Sterelny, K. (2006). “Local Ecological Communities.” Philosophy of Science 73: 215-231.

Weisberg M. (2006). Robustness Analysis. Philosophy of Science, 73, 730-742.


Cyrille Imbert, Archives Poincaré, Université Nancy 2/CNRS
Emergence and Inherently Sequential Processes

One feature of emergent phenomena is that they are supposed to be unpredictable. In this respect, two related problems are worth emphasizing: (i) a more precise (and, if necessary, quantitative) description of what it is to be unpredictable is desirable; (ii) if emergence is not to be a purely epistemic or theory-relative notion, this unpredictability must be shown to characterize the target phenomena intrinsically. Phenomena labeled as emergent are various, and there may be different ways to tackle these two problems. What can be expected from an answer is that it be as rigorous as possible, apply to some well-documented cases, be informative about the predictive difficulty met in those cases, and capture some of our informal intuitions about when emergence obtains.

Mark Bedau (1997, 2003) has offered such an answer: he defines a weakly emergent (or diachronically emergent) state as a macroscopic state which could be derived from knowledge of the system’s microdynamics and external conditions, but only by simulating the system.

The definition is almost quantitative (since one can measure the computational effort required for simulations); Bedau presents some cases which, he argues, are emergent in this sense; more importantly, this notion of weak emergence captures the deep but informal idea that there is no other way of learning what the emergent behaviour of a system is than by watching the system (or a replica of it) evolve. The idea is that some processes are intrinsically sequential in the sense that, in order to learn what their final step is, no shortcut can be found to predict their evolution and one needs to go through their detailed history, that is to say, the unfolding of all their successive states. It is also this idea (or a very similar one) that Bennett claims to have formalized with his notion of logical depth (Bennett, 1988), which is supposed to measure the degree of historicity of systems (such as biological systems, which are the result of a long evolution). In this talk, I first argue that (i) Bennett’s logically rigorous definition of logical depth does not satisfactorily capture the notion of the inherent sequentiality of processes; and (ii) Bedau’s definition, though more satisfactory as an explicans, is not rigorous, because the notion of unpredictability except by simulation is informal (the corresponding notion of simulation being left vague). As a consequence, Bedau cannot prove that the corresponding notion of unpredictability is not purely epistemic: when making predictions, there might always be, after all, a way to be smart and to short-cut Nature.

The core of my talk is then devoted to showing how the theory of computational complexity, and more precisely the difference between the complexity classes NC and P and the theory of P-completeness, provides a more satisfactory definition. It first provides a quantitative measure of unpredictability (question (i) above); it solves some serious definitional problems that cannot even be seen with Bedau’s definition; it can be shown to be intrinsic and not epistemic (question (ii)); and it applies to some well-documented physical systems, drawing for this reason an interesting distinction between systems whose behaviour can be predicted in logarithmic time and those inherently sequential systems (corresponding to P-complete problems) whose behaviour cannot be predicted in sublinear time and needs to be simulated all the way through.
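
The contrast can be made vivid with a toy pair of systems, chosen here only for illustration and not taken from the paper: for an affine recurrence x -> (a*x + b) mod m, the state after n steps can be "jumped to" by fast exponentiation in O(log n) arithmetic operations, whereas for elementary cellular automaton rule 110, which is computationally universal, no comparable general shortcut is known and the state after n steps is, in practice, obtained by unfolding all n intermediate configurations.

def affine_after_n(x0, a, b, m, n):
    # State after n iterations of x -> (a*x + b) mod m, without iterating:
    # repeated squaring of the affine map gives the answer in O(log n) steps.
    p, q = 1, 0                     # accumulated map x -> p*x + q (identity)
    bp, bq = a % m, b % m           # current power of the base map
    while n > 0:
        if n & 1:
            p, q = (bp * p) % m, (bp * q + bq) % m
        bp, bq = (bp * bp) % m, (bp * bq + bq) % m
        n >>= 1
    return (p * x0 + q) % m

def affine_simulate(x0, a, b, m, n):
    x = x0
    for _ in range(n):
        x = (a * x + b) % m
    return x

RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def rule110_step(cells):
    # One synchronous update of elementary CA rule 110 on a periodic lattice.
    size = len(cells)
    return [RULE110[(cells[i - 1], cells[i], cells[(i + 1) % size])] for i in range(size)]

if __name__ == "__main__":
    # The shortcut and the brute-force run agree, but the former needs only
    # about 20 squarings instead of a million updates.
    print(affine_after_n(7, 5, 3, 101, 10**6), affine_simulate(7, 5, 3, 101, 10**6))
    cells = [0] * 40
    cells[20] = 1
    for _ in range(100):            # no known general jump: unfold step by step
        cells = rule110_step(cells)
    print("".join(map(str, cells)))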

Bibliography

Bedau, Mark. 1997. Weak Emergence. Philosophical Perspectives: Mind, Causation and World, 11, 375–399.

Bedau, Mark. 2003. Downward Causation and the Autonomy of Weak Emergence. Principia, 6, 5–50.

Bennett, Charles. 1988. Logical Depth and Physical Complexity. In Robert Herken (ed.), The Universal Turing Machine: A Half-Century Survey, pp. 227–258. Oxford University Press.


Alan Love, University of Minnesota  
Aspects of Reductive Explanations in Biological Science: Intrinsicality, Fundamentality and Temporality  

The inapplicability of variations on theory reduction in the context of genetics, and their irrelevance to ongoing research, have led to an anti-reductionist consensus in the philosophy of biology.  An oft-cited reason for this ‘failure’ is the misappropriation of models of reductive reasoning derived from the physical sciences.  One response to this situation is to concentrate on forms of reductive explanation that correspond to actual scientific reasoning in biological science (e.g., part-whole relations or mechanisms ‘bottoming out’).

Reductive explanations in biology often involve claims about both the composition of higher-level entities (wholes) by lower-level entities (parts) and the causal production of higher-level entities by lower-level entities.  The importance of keeping these two elements distinct has been overlooked because the ontological framing of many debates about reductionism involves concepts that primarily concern composition (e.g., identity, multiple realization, and supervenience).  This framing of reduction focuses on (i) relations that obtain between two properties (different in kind; e.g., mental and physical) of one and the same object or system, and (ii) relations that are fundamentally atemporal, obtaining strictly at a time slice.  To offer just one example, consider a well-known formulation of supervenience: ‘Mental properties supervene on physical properties, in that necessarily, for any mental property M, if anything has M at time t, there exists a physical base (or subvenient) property P at t, and necessarily anything that has P at a time has M at that time.’  In other words, many discussions of reductive explanation assume intrinsicality and atemporality, which constrains our ability to understand the element of causation and to differentiate the significance of composition from that of causation in how these explanations are structured.  
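
Purely as an illustrative rendering (the formalization, not the wording, is added here), the quoted supervenience claim can be written in LaTeX as

\Box\,\forall M\,\forall x\,\forall t\;\bigl[\, Mxt \rightarrow \exists P\,\bigl( Pxt \;\wedge\; \Box\,\forall y\,\forall t'\,( Pyt' \rightarrow Myt' ) \bigr) \bigr]

where M ranges over mental properties, P over physical ones, and "Mxt" abbreviates "x has M at t". The formula relates properties of one and the same bearer at a single time, reflecting the intrinsicality and atemporality the preceding paragraph flags.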

Keeping composition and causation distinct enables us to explore three aspects of reductive explanations in a new light: fundamentality, temporality, and intrinsicality.  Combinations of these three aspects yield previously unrecognized forms of reductive explanation and conditions for their success or failure that are relevant in biology and other sciences.  In order to demonstrate the applicability and relevance of my analysis, I use it to illuminate the protein-folding problem, which lies at the juncture of molecular (structural) biology and macromolecular physics.  

Additionally, this analysis of reductive explanations provides other philosophical rewards.  It clarifies what is at stake in standard arguments against reductionism, such as the context objection.  It explains why some physics-derived models mischaracterize part-whole reductive explanations found in biological reasoning.  And, it demonstrates that a failure of one form of reductive explanation does not imply a failure of reductionism per se or a failure of explanation.  

Philosophical accounts of reductive explanations must be sensitive to the epistemological heterogeneity of actual scientific reasoning and aware of the biases implicit in the way analyses are framed.  I close by arguing that a similar lesson applies to ‘emergence,’ which has suffered under a similar situation in philosophical discussions of science.


Peter Machamer, University of Pittsburgh  
Misunderstandings of Reduction and Causality: Persons, Subpersonal States, and the Social  


Paul-Antoine Miquel, University of Nice  
Slobodan Perovic, University of Pittsburgh  
Neither Reduction Nor Emergence? The Gene's Actions And Reciprocal Causation

We discuss an account of biological facts that stems from an analogy between biological properties and entities and a general account of reductive physicalism predicated on the so-called explanatory/causal exclusion and causal closure principles. For philosophers of biology who accept this view (e.g., A. Rosenberg), the Central Dogma of molecular biology provides the necessary fundamental level of biological processes and implies agreement on a bottom-up causal structure.

In general, however, actual biological explanations fail to substantiate such an approach, as biological processes exhibit causal reciprocity at various levels. A level deemed causally fundamental by reductionists typically turns out to be critically entangled with other levels in biological explanations. We demonstrate this by critiquing A. Rosenberg’s recent attempt to explain a prime case of epigenetic inheritance by methylation as a thoroughly DNA-governed affair; we see his effort as based on a mischaracterization of the function of chromatin.

We further demonstrate the obsolescence of establishing a rigid causal hierarchy in top-down terms (common to reductionist and at least some prominent emergentist accounts alike, e.g., P. Humphreys and Boogerd et al.) with respect to the dynamic properties of biological processes. The case of ciliate protozoa shows how the laws of molecular kinematics that regulate metabolism and those responsible for protein formation are selected by independent initial conditions driven by natural selection. Also, the contingency, upon independent initial conditions, of the laws regulating the binding of those amino acids which determine the number of strands in the molecule of DNA similarly demonstrates causal reciprocity.

Even though causal reciprocity seems critical to the explanatory power of biological explanations, one might well worry about the possibility of introducing a kind of vicious circle. Yet this will happen only if biological processes are understood in terms of a misleading qualitative analogy with causal processes in physics. This analogy, albeit widely used and assumed by reductionists, fails to take into account the reversibility of the relevant physical and molecular processes. What seems to be a vicious circle under an erroneous assumption might turn out to be a kind of impredicativity inherent to biological explanations.

We conclude with a discussion of what causal reciprocity and impredicativity imply in terms of both an epistemological and an ontological understanding of biology.

Sandra Mitchell, University of Pittsburgh  
Dynamic Emergence  


Matteo Mossio, University of the Basque Country/Centre National de la Recherche Scientifique  
Leonardo Bich, University of Bergamo  
Alvaro Moreno,  University of the Basque Country  
Constraints and Biological Emergence

In our talk we will explore the theoretical role of constraints in the debate about emergence in biological systems. We will first advocate the view according to which a sound account of biological organisation implies an appeal to emergent levels of causation, and we will propose a theoretical response to existing philosophical criticisms by interpreting emergent causal powers in terms of constraints. In particular, it will be our contention that the concept of constraint, interpreted as the causal power stemming from the relational properties of material configurations, offers solid ground for the theoretical defence of emergence.  
Our main thesis will then be that biological systems crucially differ from other natural systems in the causal role of constitutive constraints. The first relevant transition from physics to biology occurs when a constraint is able to exert a causal action on some dynamics in such a way that, in turn, the constrained dynamics maintain (at least some of) the boundary conditions enabling the constraint to exist. When this occurs, the whole system is then, even if only in a minimal sense, self-maintaining.  
Self-maintenance is a widespread phenomenon in nature, its most common examples being the so-called ‘self-organizing’ or ‘dissipative’ structures. The peculiar characteristic of these phenomena is that, to different degrees, they produce the very constraints which act upon their generative dynamics (or dynamical configurations). In biological systems, self-maintenance assumes the more complex form of a mutual dependence between a set of constraints such that for each constraint Ci, (at least some of) the boundary conditions required for its maintenance are determined by the immediate action of another constraint Cj, whose maintenance depends in turn on Ci as an immediate constraint. We label this emergent causal regime organisational closure.  

The concept of organisational closure has two important theoretical implications for the debate on emergence and reduction. The first derives from the acknowledgment that those components which are relevant for describing the dynamics of biological systems, and which can be considered constrained structures, exist only insofar as they are involved in the organisation. As a consequence, organisational closure entails a limitation on the possible ways of fractioning the system and, therefore, on attempts to provide reductionist descriptions of it. The second implication concerns the problem of downward causation. As we will argue, configurations of mutually dependent constraints do not involve inter-level causation, which would then constitute a heuristic, rather than a theoretical, tool for biological explanation.


Nicholas Rescher, University of Pittsburgh  
Evolution and the Emergence of Intelligence

The talk I am proposing pivots on three key concepts: evolution, emergence, and intelligence. And it is predicated on three basic ideas:

(1) That evolution by natural selection affords the instrumentality through which intelligence emerges on the cosmic stage

(2) That once intelligent beings come into existence there is a fundamental and revolutionary change in the world's repertoire of natural processes, involving not only the activities of intelligent beings but also the operational settings in which they function.

(3) That evolution (cosmic as well as biological) itself provides a pathway to intelligent design in nature.

Accordingly, a doctrine of what might be called evolutionary naturalism makes room for the impact of intelligence upon the world's course of events.

One issue to be explored is whether and in which way (if any) the emergence of intelligence through evolution by natural selection is "reductive." (This issue intertwines both conceptual and substantively empirical considerations.)

The point of the paper is to argue that evolutionary naturalism, while seemingly reductive, is actually transformative, in that the emergence of beings capable of intelligence and able to find grist for the mill of its development involves a transformative as well as a reductive aspect.


Kenneth Schaffner, University of Pittsburgh  
Reduction: Undead Again


Michael Silberstein, Elizabethtown College  
When Super-Theories Collide: A Brief History of the Emergence/Reduction Battles between Particle Physics and Condensed Matter Theory


John Symons, University of Texas at El Paso 
Network Dynamics and Computational Models of Emergence  

What does emergence mean in the context of networks, and how can we usefully model emergent network behavior computationally?  In this paper, we argue that genuine emergence requires that networks be open.  How does one track the behavior of open networks in a computationally tractable manner? We propose an approach in which two networks are coupled and their interactions are examined for emergent features.  In our example, we study the interplay of spatial and social networks.  A multiagent system is proposed in which the spatial and social properties of agents affect one another.  The dynamic behavior of these coupled networks is then examined using information-theoretic measures.  We propose that some features of coupled network dynamics can be understood as emergent insofar as they do not result from the rules of the networks in isolation.  So, for example, the notion of a territory is an emergent feature resulting from the interplay of the social and spatial properties of agents.  A territory can be understood to modify both the spatial and social properties of agents.
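
The abstract leaves the model's details unspecified; purely to make the general setup concrete, the following toy sketch (every choice in it, from grid size to tie rules to the information measure, is invented for illustration) couples two networks: agents move on a grid, spatial proximity makes social ties more likely, existing ties pull movement toward partners, and the coupling is scored with a simple mutual-information estimate over agent pairs.

import math
import random
from itertools import combinations

GRID, N, STEPS = 20, 30, 300
rng = random.Random(42)

# Spatial network: positions on a torus. Social network: a set of undirected ties.
pos = {i: (rng.randrange(GRID), rng.randrange(GRID)) for i in range(N)}
ties = set()

def dist(a, b):
    # Wrap-around (torus) Manhattan distance between agents a and b.
    dx = min(abs(pos[a][0] - pos[b][0]), GRID - abs(pos[a][0] - pos[b][0]))
    dy = min(abs(pos[a][1] - pos[b][1]), GRID - abs(pos[a][1] - pos[b][1]))
    return dx + dy

for _ in range(STEPS):
    # Spatial -> social: nearby agents sometimes form a tie; distant ties decay.
    for a, b in combinations(range(N), 2):
        if dist(a, b) <= 2 and rng.random() < 0.10:
            ties.add((a, b))
        elif dist(a, b) > 6 and rng.random() < 0.05:
            ties.discard((a, b))
    # Social -> spatial: agents step toward a random partner, else toward a random cell.
    for a in range(N):
        partners = [b for x, y in ties for b in (x, y) if a in (x, y) and b != a]
        if partners:
            tx, ty = pos[rng.choice(partners)]
        else:
            tx, ty = rng.randrange(GRID), rng.randrange(GRID)
        x, y = pos[a]
        pos[a] = ((x + (tx > x) - (tx < x)) % GRID, (y + (ty > y) - (ty < y)) % GRID)

# Mutual information between "pair is spatially close" and "pair is socially tied".
pairs = list(combinations(range(N), 2))
close = [dist(a, b) <= 2 for a, b in pairs]
tied = [(a, b) in ties for a, b in pairs]
mi = 0.0
for c in (True, False):
    for t in (True, False):
        pxy = sum(1 for k in range(len(pairs)) if close[k] == c and tied[k] == t) / len(pairs)
        px = sum(v == c for v in close) / len(pairs)
        py = sum(v == t for v in tied) / len(pairs)
        if pxy > 0:
            mi += pxy * math.log2(pxy / (px * py))
print(f"mutual information between proximity and ties: {mi:.3f} bits")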


Jessica Wilson, University of Toronto  
The Metaphysical Basis of Non-linear Emergence

Cases of non-linear phenomena are commonly cited as examples of physically acceptable ('Weak') emergence (Newman 1996, Bedau 1997 and 2002), but whether the notion of emergence at issue is metaphysical rather than (merely) epistemological has been unclear.  As I'll argue, Bedau's official account of Weak emergence does not guarantee that non-linear phenomena are either autonomous or compatible with Physicalism.  That said, certain of Bedau's remarks suggest an alternative and more promising way of understanding non-linear emergence as an appropriately metaphysical phenomenon.  I'll develop these remarks by appeal to an alternative account of Weak emergence, cashed out in the first instance in terms of a relation between the powers (neutrally understood) of higher- and lower-level entities, and in the second instance in terms of a relation between the degrees of freedom of these entities.

 