
Workshop on Values in Scientific Research

We invite visitors to this site to join us in the fruitful conversation that began at the Workshop on Values in Scientific Research, which took place at the Center on October 9-11. Our intention is to create in this cyber-location a public forum for discussing how various types of values enter into judgments and decision-making in the practice of scientific research. Below, you will find the "Executive Summary" of the Workshop, summaries of the papers presented, and summaries of the discussion that followed each presentation. The Summaries are here for two principal reasons: to disseminate the results of the Workshop and to help fuel discussion. Click here to (re)familiarize yourself with the list of participants and the titles of the papers that were presented at the Workshop.

Please send your comments to the following address: pittcntr@pitt.edu, and they will be posted on this page.

Executive Summary
Prepared by Erik Anger and Gualtiero Piccinini
Department of History and Philosophy of Science, University of Pittsburgh

The influence of values on science is an important philosophical issue. As yet, there is no clear analysis of which values enter science, or of where, how, and with what effects they do. To take a step toward understanding the relation between values and scientific research, the workshop brings together a number of scholars who share an interest in this issue. Their disciplines include not only philosophy of science but also anthropology, ethics, medicine, history of science, rhetoric, feminist theory, and sociology. The workshop centers on four questions.

1. Where do values enter science?

Here the answer is straightforward: everywhere. Most speakers provide examples of the many places in which values enter scientific research. In particular, some illustrate the influence of non-epistemic values on the central stages of scientific research, as opposed to the peripheral stages (i.e., choice of a topic for research, and use of results).

Beatty shows how patterns of reasoning that are constitutive of a science can derive from contextual values; Hankinson Nelson and Wylie describe the influence of value-laden assumptions on the choice of research programs; Douglas shows how values can influence the classification of specimens into different categories, thereby partially determining what the data are; Fischhoff discusses some of the ways in which non-epistemic values bear on choices concerning how data should be collected and interpreted; Needleman reports on how the practical consequences of scientific hypotheses can bias the weighing of evidence.

In the discussions, methods are proposed to eliminate the possibly negative influence of non-epistemic values on the work of scientists (e.g., tests of reliability for experts' judgments, or algorithms for hypothesis generation and search). The usefulness of such methods, and the importance of checking for cases where non-epistemic values have a negative effect on science, are largely agreed upon. Nevertheless, the participants seem to agree that the influence of non-epistemic values is unavoidable, ubiquitous, and legitimate at all stages of scientific research.

2. How do values influence scientific research?

Machamer, in his introductory remarks, suggests a simple way to answer this question. Suppose a scientist has to make a choice (about what experimental technique to adopt, what statistical package to use, what conclusions to draw, etc.), and think of the choice as the conclusion of an argument. If value judgments influence the choice, then they must be contained in the premises of that argument. The suggestion proves useful, for many of the examples discussed in the workshop focus on value-laden assumptions made by scientists. For example, scientists have been guided by the assumption that God's intentions are reflected in nature (Beatty), that male (rather than female) embryonic development requires scientific explanation (Hankinson Nelson and Wylie), or that public health is more valuable than corporate profit (Douglas, Needleman).

Douglas, following a suggestion of Hempel (1965), proposes an alternative way of understanding how non-epistemic values are relevant to the choices made by scientists. In deciding whether to accept hypotheses, scientists make use of rules of acceptance. Rules of acceptance tell scientists which hypotheses are sufficiently supported by evidence and which are not, given the consequences to be expected from accepting them. Since the evaluation of the consequences of accepting a hypothesis must often rely on non-epistemic values, rules of acceptance embody such values. More generally, other participants provide examples in which non-epistemic values influence scientists by justifying their rules of inference. For example, a fundamental idea in 19th Century British biology licensed the inference from biological phenomena to divine intentions (Beatty); and how much attention is paid to avoiding Type I vs. Type II errors, which in turn influences a number of methodological choices, can depend on the evaluation of the practical consequences of certain conclusions (Needleman).

3. What kinds of values enter science?

Throughout the workshop, two themes emerge repeatedly. One is that we do not possess clear-cut distinctions between epistemic and non-epistemic values, or between contextual and constitutive values. In particular, which values are constitutive of a given science depends in part on contextual values, changes over time, and varies from one community of scientists to another. As a result, some of the participants express doubts about the possibility of providing criteria to distinguish between different kinds of values. The common wisdom, however, seems to be that, despite their vagueness, the distinctions between epistemic and non-epistemic, and between constitutive and contextual, remain useful, are widely relied on in the discussion, and deserve to be retained.

The second theme is that the distinctions between epistemic and non-epistemic, or contextual and constitutive, do not coincide with the distinction between legitimate and illegitimate values, or between positive and negative influence on science. The issue of what values make science good or bad is a separate one.

4. How do we tell when values make science good and when bad?

This is perhaps the most difficult question raised during the workshop. There appears to be no general answer, but it is possible to evaluate the influence of non-epistemic values on science in individual cases. Careful study of the science, and of the history of science, is required to assess each case. Two suggestions for evaluating individual cases emerge during the workshop.

Hankinson Nelson and Wylie propose, at least in some cases, to use objectivity of knowledge (i.e., reliability, empirical and explanatory adequacy) as the benchmark for evaluating the role of non-epistemic values. This suggestion is partly similar to Longino's (1990) idea that we ought to check for contextual values, aiming at objectivity. However, Longino's suggestion that the checking of contextual values is made possible by the presence of many points of view is criticized. Seidenfeld holds that a goal ought to be to replace obviously value-laden methods with "algorithmic", testable methods wherever possible.

In other cases, what counts as good science seems to depend on the purposes science is supposed to serve. The uses of science, and the values that inform our evaluation of its applications, provide us with criteria for judging whether the influence of values on science is positive or negative (e.g., if we are interested in public health rather than corporate profit, we may use the realization of those values as a criterion for assessing choices made by scientists). It should be noted, though, that one could have "bad" scientific practices serving a "good" end, and conversely.

SUMMARIES OF PAPERS AND DISCUSSION SESSIONS
Prepared by Erik Anger and Gualtiero Piccinini
Department of History and Philosophy of Science, University of Pittsburgh

Peter Machamer's Introductory Remarks

Peter Machamer points out that there are many ways in which values enter science, and many places where they do. In the literature on science and values, a satisfactory analysis of how values influence the doing of science, at each of its stages, is still missing. Machamer raises four questions. The first is about where values enter the scientific enterprise. Traditionally, two places have been recognized:

Choice of Research Project
Use of Results

In deciding what is an interesting scientific problem, or how scientific knowledge should be applied, non-epistemic values are clearly essential. But there are other places where values enter science, and some of these involve the central stages of the actual doing of science. As a general framework for thinking about the question, Machamer suggests that we look at the general structure of a scientific paper:

Review of the Literature, Statement of the Problem
Experimental Paradigm
Implementation of the Experiment, Gathering of Data
Interpretation and Implications of the Results

According to Machamer, value judgments influence scientists at each of the stages corresponding to the sections in a scientific paper.

Next, how do value judgments influence the doing of science? This is the second of Machamer's questions. He describes the situation as follows. During each stage of research, scientists have to make decisions. For example, they have to decide what their research is going to be about, what experimental technique to choose, what statistical software package to use, what conclusions to draw (suggesting further work, theoretical consequences, and potential applications of the results), etc. Each of the above decisions can be thought of as the conclusion of an argument, in which values enter the premises. In other words, among the premises leading to a certain decision, some are value judgments (such as, "In doing research of kind K, software A is better than software B").

As a third question, Machamer asks what kinds of values enter science. Many philosophers distinguish between epistemic vs. non-epistemic values, cognitive vs. social values, constitutive vs. contextual values, etc. More generally, the list of possible kinds of value is indefinitely long (cf. aesthetic, religious, moral, pragmatic, etc.). But it has proven very difficult, if possible at all, to provide criteria for classifying something as one kind of value rather than another. Should we, perhaps, abandon the distinctions and talk about values without further qualification?

Whether or not we have a viable taxonomy of values, we still need to address a very important problem about science and values. This is Machamer's fourth and last question. We need to judge when the influence of values on science is good and when it is bad. If values are always an intrinsic and ineliminable part of doing science, we need not only to assess how and where they work, but also to develop criteria for when they make for good science and when for bad. We need to understand when, and why, values produce desirable effects on science, and when they produce undesirable effects.

Laurence Thomas on The Moral Self

In the history of science and technology, four goods have never been questioned: life, communication, control (over the environment, life, and so on), and speed. Moreover, a fifth good has never been seriously put into question: maximization, i.e., the maximization of speed, control, etc. The latter is, in fact, a kind of super-good. At this time, however, we have reached a point where we have to ask ourselves when enough is enough. How far do we want to pursue the four values above? At what point are we saturated? Furthermore, by what criteria should we judge when enough is enough?

Recent developments in science and technology infringe on our ability to affirm each other as human beings, and on our ability to produce what we may call moral warmth. The moral self is a fragile creature which thrives best under conditions of sustained and freely forthcoming affirmation, but technology now inhibits our ability to give that affirmation. For example, consider the service 'call waiting'. By now, it is very nearly the norm to be put "on hold" during telephone calls between residential homes. Say that you call me in order to tell me something of great importance to you. I am likely to say: "I'm on the other line, can you hold on for a minute?" So, instead of participating in a wonderful moral experience, I have put your emotions on hold. A measure of affirmation has been lost.

It used to be clear that the four values above were worth pursuing. By now, however, many of the problems that science and technology set out to solve have already been solved. New problems call for solutions, but we continue to pursue the same values as before. It may be that in our mindless consumption of technology, we will have unwittingly shorn ourselves of the moral warmth that is essential to our humanity.

DISCUSSION

Many of the responses to Thomas' paper concern the claim that there is something unique about the present point in history. Machamer objects that similar questions and problems were raised long ago. In the 17th century, two different goals of science were specified: (1) to serve the needs and desires of human beings; (2) to find objective knowledge (where values were not allowed to enter). The distinction is mirrored in the post-W.W.II distinction between (1) basic research and (2) applied research. There is a long-recognized tension between the two goals of science. Should scientists look for knowledge no matter what the applications of that knowledge are going to be?

Bushee points out that she sees 'call waiting' and other such services as part of a wide range of inventions that help employers control their employees. As such, there is nothing new about 'call waiting'. Labor laws can be devised to limit the possibilities for employers to use technology to control their workers.

Wylie thinks insights from archaeology show that the four values Thomas described were not accepted uncritically throughout history. There are societies in which life, speed, and communication were far from unquestioned; in fact, they may not even have been adhered to. Wylie suggests that the four values became prominent during the industrial revolution, some 100-150 years ago. Wylie adds, though, that this does not show that we do not now pursue the four values more than ever before. Furthermore, it may still be worthwhile to ask whether we are at a threshold.

Heather Douglas on Values and Inductive Risk in Dioxin Science

In her talk, Douglas introduces Carl Hempel’s view on non-epistemic values and science (Hempel 1965), and applies it to an example taken from animal studies on cancer induced by dioxin.

Hempel distinguishes between the confirmation of a hypothesis by evidence and the acceptance of a hypothesis by scientists. A scientific hypothesis can be supported by empirical evidence to a greater or lesser degree, which has nothing to do with non-epistemic values. However, the confirmation relation itself is insufficient to decide whether a hypothesis should be accepted by scientists as part of scientific knowledge. For, no matter how strongly supported by empirical evidence, a hypothesis could be false, and no matter how strongly disconfirmed, a hypothesis could be true. The possibility that a false hypothesis is accepted as true, or that a true hypothesis is rejected as false, is called inductive risk. Every time scientists accept or reject hypotheses, there is inductive risk. When there is inductive risk, non-epistemic values are necessary to decide which risks can be taken and which cannot. For example, when practical applications depend on the acceptance of a hypothesis, the evaluation of the practical consequences of accepting the hypothesis, and the cost of such consequences, must be considered in deciding whether or not to accept it. According to Douglas, the kind of inductive risk described by Hempel is present in many "internal" stages of science, such as choice of theory, choice of method, data collection, and data interpretation.
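One way to make the role of inductive risk concrete is to cast a rule of acceptance in decision-theoretic terms: accept a hypothesis only when the expected cost of wrongly accepting it is lower than the expected cost of wrongly rejecting it. The following sketch (in Python) is an illustration added for this page, not part of the workshop material; the function and the cost figures are hypothetical.

    # A minimal, hypothetical sketch of a Hempel-style rule of acceptance:
    # accept H iff the expected loss of accepting is below that of rejecting.
    def accept_hypothesis(p_true, cost_false_accept, cost_false_reject):
        # Expected loss of accepting H:  (1 - p_true) * cost_false_accept.
        # Expected loss of rejecting H:  p_true * cost_false_reject.
        # Equivalently, accept iff p_true exceeds the threshold
        # cost_false_accept / (cost_false_accept + cost_false_reject).
        return (1 - p_true) * cost_false_accept < p_true * cost_false_reject

    # The same evidence (p_true = 0.7) warrants different verdicts once the
    # non-epistemic stakes differ: a costly false acceptance raises the bar.
    print(accept_hypothesis(0.7, cost_false_accept=1, cost_false_reject=1))   # True
    print(accept_hypothesis(0.7, cost_false_accept=10, cost_false_reject=1))  # False

The point matches the summary above: the threshold for acceptance is not fixed by the evidence alone, but by how the consequences of error are valued.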

Douglas provides an example taken from animal studies on the effects of dioxin, where non-epistemic values influence the experts' judgments that generate data. A study by Kociba et al. (1978) has been influential in decisions about what concentration of dioxin in the environment is acceptable. Kociba et al. studied the rate of liver cancer in rats exposed to dioxin for two years. To decide which rats had cancers, samples of their liver tissues were mounted on slides. The slides of the Kociba study were evaluated by three distinct groups of toxicologists, at three different times. Interestingly, the three groups judged the rates of cancers to be quite different. This is only partly due to the fact that the third group, in 1990, adopted new and supposedly clearer criteria for evaluating the presence of cancers in rat liver tissues (leading to a decrease in the number of detected cancers). As a matter of fact, the seven experts of the 1990 group sometimes disagreed, and the group had to decide about some slides by taking majority votes. This case illustrates how the same standard can be implemented differently by different experts. It is important to realize that a tendency to either overestimate or underestimate the number of cancers in the slides will have a different impact on how dangerous dioxin appears, leading to different decisions about how much dioxin is tolerable in the environment. If an expert is more concerned with the interests of the chemical industries producing dioxin, she may be very careful to avoid judging healthy livers as cancerous (false positives), thereby risking, in borderline cases, judging cancerous livers as healthy (false negatives); vice versa for an expert more concerned with public health.
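The slide-scoring example can be caricatured numerically: if borderline slides cluster near a scorer's decision boundary, even a small shift in that boundary changes the apparent tumor rate. The sketch below (in Python) uses invented numbers purely for illustration; it is not the Kociba data.

    # Hypothetical illustration: how a scorer's threshold for calling a
    # borderline slide "cancer" shifts the apparent tumor rate.
    import random

    random.seed(0)
    # Each slide gets a latent severity score; borderline cases cluster
    # around 0.5. A slide counts as a cancer if it exceeds the threshold.
    slides = [random.gauss(0.5, 0.15) for _ in range(200)]

    def tumor_rate(threshold):
        return sum(s > threshold for s in slides) / len(slides)

    # A scorer wary of false positives uses a high bar; a scorer wary of
    # false negatives uses a low bar. Same slides, different data.
    print("strict scorer :", tumor_rate(0.60))
    print("lenient scorer:", tumor_rate(0.40))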

Douglas concludes that non-epistemic values are both ineliminable and legitimate in science, given their relevance to decisions made in the presence of inductive risk.

DISCUSSION

Teddy Seidenfeld makes a suggestion. In cases like that described by Douglas, it should be possible to test the reliability of the experts by applying statistical tests, in the same way as the reliability of an instrument can be tested. These tests would tell us, for example, how reliable each expert is in making judgments about the presence or absence of cancer in rat livers. Running reliability tests might be expensive and difficult, but they are an available option. Once we know the degree of reliability of each expert, we can take their biases into consideration in evaluating their judgments, thereby eliminating the non-epistemic values that their judgments embody. Since the statistical analysis of the reliability of an expert does not seem to involve any non-epistemic value, Seidenfeld thinks that, by following his suggestion, we could eliminate the influence of non-epistemic values from judgments of the kind described by Douglas, obtaining better science as a result.
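One standard statistical tool of the kind Seidenfeld gestures at is an inter-rater agreement measure. The sketch below (in Python) computes Cohen's kappa for two hypothetical experts scoring the same slides; the labels are invented, and nothing in the workshop commits Seidenfeld to this particular statistic.

    # Cohen's kappa: agreement between two raters, corrected for chance.
    def cohens_kappa(a, b):
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement from each rater's marginal rates of "1" and "0".
        pa, pb = sum(a) / n, sum(b) / n
        expected = pa * pb + (1 - pa) * (1 - pb)
        return (observed - expected) / (1 - expected)

    expert1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # 1 = "cancer", 0 = "healthy"
    expert2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
    print("kappa =", round(cohens_kappa(expert1, expert2), 2))  # 0.58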

Seidenfeld's suggestion elicits a lively discussion, which brings into focus a number of issues. Some comments concern the feasibility of Seidenfeld's proposal. Whatever the merits of the proposal for an ideal science, we have to deal with actual science, as practiced by real scientists (Okruhlik). Anyone trying to implement Seidenfeld's proposal on a large scale would presumably encounter a number of psychological and social resistances (Fischhoff). Moreover, the evaluation of expert reliability presupposes the possession of a gold standard against which such reliability can be tested. In many cases, a gold standard is not available. Especially when there is social pressure to take decisions affecting the health of citizens, science juries are required to deliver verdicts in the absence of such gold standards (Needleman).

A second group of comments calls attention to the fact that accepting Seidenfeld's proposal would still not eliminate the influence of non-epistemic values from science. For example, Machamer agrees that there would hardly be any non-epistemic value at play in running a statistical test of reliability, which would allow one to eliminate the individual biases of the experts. However, doing so would highlight a number of other decisions that rely on non-epistemic values. For instance, in deciding whether there is a gold standard for cancers in rat livers, and in deciding how to apply it to real borderline cases, one needs to decide how many false positives and false negatives should be tolerated. More generally, both Bushee and Machamer emphasize that there are cases where non-epistemic values play a desirable role in science (for an example, see the talk by Hankinson Nelson and Wylie).

Herbert Needleman on Values and Type II Fallacies

Science is not as value-free as some would have us believe. Generally speaking, scientists consider it scientifically rigorous to adopt behavior that avoids Type I errors, i.e. accepting spurious relationships as real. Considerably less attention is given to avoiding Type II errors, i.e. rejecting true relationships as spurious.
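The tradeoff Needleman describes can be put in numbers: for a fixed study design, tightening the Type I error rate (alpha) mechanically raises the Type II error rate (beta). The following sketch (in Python), with an invented effect size and sample size, illustrates this for a one-sided z-test; it is an added illustration, not Needleman's analysis.

    # How tightening alpha inflates beta (the chance of missing a real
    # effect) in a one-sided z-test. Effect size and n are invented.
    from statistics import NormalDist

    z = NormalDist()

    def type_ii_error(alpha, effect, n):
        # H0: mean = 0 vs H1: mean = effect, unit variance, n samples.
        critical = z.inv_cdf(1 - alpha)   # rejection cutoff under H0
        shift = effect * n ** 0.5         # standardized true mean under H1
        return z.cdf(critical - shift)    # probability of failing to reject

    for alpha in (0.05, 0.01, 0.001):
        print("alpha =", alpha, "-> beta =", round(type_ii_error(alpha, 0.25, 50), 2))

With these invented numbers, demanding alpha = 0.001 instead of 0.05 roughly doubles the chance of dismissing a true effect, which is exactly the asymmetry Needleman worries about.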

Clearly, it is important to reject superstitious or unfounded beliefs and hypotheses. Meanwhile, it is equally important that true relationships not be dismissed as accidental. The field of environmental toxicology provides fine illustrations of the latter fact. When it comes to questions in toxicology, billions of dollars and millions of lives are at stake.

Consider the example of childhood lead poisoning. Here, these issues have emerged repeatedly over time before regulatory agencies, and there is a clear bias toward Type II fallacies. A number of studies, including those of Needleman himself, have demonstrated a correlation between elevated lead levels and a number of deficits in children, including attentional dysfunction, poor language function, reading disabilities, impaired classroom performance, decreased IQ scores, aggression, and delinquency. The findings are paralleled by animal investigations, and they clearly speak in favor of the hypothesis that lead exposure causes deficits.

Yet, industry spokespersons and academic investigators supported by the industry maintain that the hypothesis should be rejected. They claim that (1) The area is controversial since some studies find an effect, and others do not; (2) It may be that the true causal relation is the inverse: that impaired children ingest more lead; (3) There may be confounding factors; (4) Causality has not been proven; (5) If there is an effect of lead on behavior, the effect is small.

In this context, these objections are not taken seriously anymore. But the general question remains: at what cost should we avoid Type II fallacies?

DISCUSSION

Rasmussen wants to know what errors appear in the industry studies. Needleman responds that the industry studies often choose to study insignificant measures, e.g., brain waves when behavior would be more significant. Fischhoff adds that the scientific status of epidemiology is often put into question when its results are unpalatable. Needleman notes that, often enough, animal studies are criticized on the grounds that they concern another species.

Wylie wonders what underlies the stance of the scientists who are supported by the industry and provide bad arguments against the hypothesis. Do they give up their adherence to epistemic values which they honor in other contexts? Or do they apply them differently? Fischhoff responds that hubris, as well as the admiration and money provided by the industry, plays a role.

Seidenfeld adds that there are statistical tools that could be used to clarify these issues, and that the field appears to provide fertile grounds for the development of further statistical methods.

Teddy Seidenfeld on Measures of Incoherence

Teddy Seidenfeld reports the results from a paper written jointly with Mark J. Schervish and Joseph B. Kadane. The paper presents one way to measure degrees of incoherence.

On the standard account of incoherence, an agent's distribution of probabilities is incoherent if and only if the distribution violates the axioms of the probability calculus. The argument in favor of this view notes that if you violate the axioms, a clever gambler betting against you can choose bets so that you are a sure loser. This argument is called the Dutch Book argument. On this view, an agent is either coherent or incoherent; there is no difference of degree.

It is very easy to fall into incoherence. For example, if you round off the probabilities at the nth decimal place, you may become incoherent. Schervish, Seidenfeld and Kadane propose one way to make sense of the idea of degrees of incoherence, so that an agent can be characterized as slightly, or grossly, incoherent. There are at least three different ways of measuring degrees of incoherence, and the paper develops one of these.

Intuitively speaking, the idea is to measure the rate at which a clever gambler can make an incoherent agent lose money. Two measures are presented. The first index is given by the maximum guaranteed rate of loss that the gambler can create for the incoherent agent, relative to that agent's escrow. The second index is given by the maximum guaranteed rate of profit the gambler can create, relative to his/her own escrow. The 'escrow' is the sum, over the separate gambles, of the maximum payout that a given player faces in each gamble.
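A toy example may help fix ideas. If an agent's probabilities for an event and its complement sum to more than 1, a gambler can sell the agent both tickets and lock in a profit; normalizing the guaranteed loss by a stake-based escrow gives a crude rate. The numbers below (in Python) are invented, and the normalization only loosely follows the indices of Schervish, Seidenfeld, and Kadane, whose precise definitions differ.

    # Toy Dutch Book against incoherent probabilities p(A) + p(not A) > 1.
    p_rain, p_no_rain = 0.6, 0.5      # incoherent: they sum to 1.1

    stake = 1.0
    # The agent treats p * stake as a fair price for a ticket paying
    # `stake` if the event occurs. Sell the agent both tickets:
    price_paid = (p_rain + p_no_rain) * stake   # 1.10 collected up front
    payout = stake                              # exactly one ticket wins
    guaranteed_loss = price_paid - payout       # 0.10, whatever happens

    escrow = 2 * stake   # crude proxy: maximum total payout across gambles
    print("guaranteed loss:", round(guaranteed_loss, 2))
    print("loss rate vs escrow:", round(guaranteed_loss / escrow, 2))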

One intended application of the results has to do with hypothesis-testing. This research could provide a rational way to choose alpha/beta levels.

DISCUSSION

Ruetsche asks whether the method presented can be used to measure the degree of incoherence of policy. Seidenfeld responds that the project is still in its beginning stages. However, he allows for the possibility that it can be used to approximate the coherent stance in certain situations.

Hardcastle is curious about the other ways to measure incoherence, and Wylie wants to know why Seidenfeld et al. chose the particular measure of incoherence they did. In short, Seidenfeld's answer is that the choice was motivated by a concern for simplicity: it was important to choose a measure that allowed for the derivation of interesting results.

Baruch Fischhoff on Interpretive Liberties in the Social Sciences

Fischhoff's paper concerns the value judgments that social scientists have to make when investigating people's beliefs and values. Social scientists can gain access to people's beliefs and values in two different ways: they can ask subjects questions and analyze their answers, or they can observe subjects making choices and analyze their responses. In both cases, interpretation is required to make sense of the responses. Since the subjects' responses serve as data for theory construction, policy recommendations, etc., the accuracy of the interpretation is crucial for the reliability of the theories.

Fischhoff examines the interpretation processes in terms of the contract between those who ask the questions and those who provide the answers. Violations of this contract can afford the social sciences unwarranted power. However, there are methods which can be employed to decrease the risks involved by improving the channels of communication between the questioner and the answerer.

One important question scientists have to face is how much can legitimately be read into the subjects' responses. At one extreme there are gist studies, in which investigators claim no more than to have received a general answer to a general question. At the other extreme lie contract studies, in which investigators seek respondents' consent to a specific proposal. Reading too much into subjects' responses can be both beneficial and destructive for the subjects, but it clearly violates the contract between answerer and questioner. For example, consider the many studies concerned with the perception of risk. One can ask whether the notion of risk means the same to respondents and investigators; if not, the conclusions drawn may be unwarranted.

Of course, there are techniques to handle problems of interpretation, techniques that aim to minimize the number and seriousness of misunderstandings. Nevertheless, value judgments enter at many different stages of the process. This raises an important question: whose values should be allowed to enter? Shouldn't the values come from the public?

DISCUSSION

Since it had been pointed out that some problems can be avoided by giving subjects more information about the task at hand, Machamer asks whether more information might not introduce a bias. If subjects know too much about what is going on, they may realize that they have an incentive to answer in a way contrary to their original inclination. Seidenfeld responds that there are methods to handle such problems.

Lynn Hankinson Nelson and Alison Wylie on Coming to Terms with the Values of Science: Lessons from Feminist Science Scholarship

Wylie relates the work of feminist scholars to the value-ladenness of science. Faced with the issue of the influence of non-epistemic values in science, much feminist scholarship falls neither in the camp of those who renounce the objectivity of science in favor of social constructivism, nor in that of those who affirm that science is objective despite the influence of non-epistemic values. The work of many feminists shows how science can be objective precisely because of the values entering into it. Feminists have offered both critical and constructive commentary to the disciplines they engage, making substantial contributions to them. Being explicitly motivated by a concern with the sex/gender issues that influence science, feminists have been able to focus on implicit or neglected assumptions underlying research in the social and life sciences. They have exposed systematic patterns of error even in the best scientific research, and they have suggested novel research programs.

Once we accept that non-epistemic values have an influence on science, and wish to include this influence in an analysis of science and its objectivity, we need a way to understand objectivity that does not conflict with a value-laden science. Following Lloyd (1995), let us distinguish two senses of objectivity:
(1) objectivity of knowers, meaning neutrality and disengagement of the epistemic agent;
(2) objectivity of knowledge, meaning that knowledge should be reliable (empirically adequate, robust, explanatorily powerful).
Objectivity of knowledge need not be a consequence of objectivity of knowers. An example will show how feminists, who are not objective in the sense of (1), have contributed to making scientific knowledge more objective in the sense of (2).

The example is illustrated by Hankinson Nelson. A fundamental assumption in biology was that the female state and female fetal development are, respectively, the default state and the default trajectory of fetal development. A consequence was the concentration of research on what needed to be "added" to an embryo to make it a male, and the neglect of the role of entities classified as "female" (estrogens and progesterone, the fetal ovary, the maternal environment). Until the 1980s, the standard textbook account of human embryonic sexual differentiation was an account of male embryonic development, while female embryonic development was left unexplained. The traditional account of embryonic differentiation was well connected to a large body of biological knowledge, and the theory had the virtues of simplicity and generality of scope (being generalizable to many species). It had those virtues, though, at the expense of empirical adequacy, explanatory power, and generality of scope with respect to female embryonic development. Given such a trade-off between different sets of epistemic virtues, feminist biologists pointed out that it was androcentrism, a contextual value, that had affected the choices of previous scientists. By drawing attention to the limitations of the old theory, feminists stimulated research toward a broader and more adequate knowledge of embryonic development. So, contextual values enter the mix that generates the choices made by scientists, but they can lead to advances.

DISCUSSION

As another example of a contextual value influencing science, Schwartz mentions the case of anthropocentric ideas and methods in animal studies. However, Bruce Glymour proposes an alternative way out of the restrictive influence that contextual values have on science. The problem can be understood as an unconscious restriction of the hypothesis space on the part of the scientists. Glymour suggests that, to expand the hypothesis space, it may be unnecessary to introduce feminist or any other kind of non-epistemic values. If one wants to correct this unconscious restriction, one can use better heuristics to expand the hypothesis space, which would help to take into consideration all the relevant hypotheses. Thomas objects that, in practice, scientists would simply never think about certain things without being personally motivated. In fact, Hankinson Nelson insists, in the example given only feminists saw certain limitations in current biological theories, precisely because of their value-laden motivations.

Glymour insists that there is no reason to rely on historical accidents such as these. If we find good algorithms to generate scientific hypotheses, wouldn't their use eliminate the potentially misleading reliance on values and background assumptions? Wylie replies that while algorithms are welcome where they can be used, we still cannot do science without the critical insights of people with different points of view (which depend on different contextual values). The historical process of criticism and refinement of the contextual values entering science is similar to the indefinite process of calibrating an instrument, never to be completed.

Hankinson Nelson adds that, throughout history, people have become more aware of the "right" values. Mechanisms ought to be introduced, within scientific communities, to question "wrong" values and to reward studies that adopt the "right" values (e.g., by ensuring resources for their publication). The same holds for scientific practices and education. For example, the humanities should be introduced into scientific curricula.

Beatty asks whether non-epistemic values play an explicit or an implicit role in influencing scientists. For example, do scientists argue for simplicity as motivating androcentric views, or does it just happen this way? Hankinson Nelson responds that, in the case of embryology, simplicity was argued for. But, although androcentrism facilitates certain choices, it is by no means the only factor involved. To understand the complex influence of contextual values on science, careful study of the history of science is required.

John Beatty on Constitutive and Contextual Values: The Case of Natural Theology and Darwinian Evolutionary Biology

According to Longino (1990), constitutive values govern "acceptable scientific practice or method," while contextual values "belong to the social and cultural environment in which science is done." However, constitutive values can be contextual at the same time. As an example, Beatty presents an important constitutive value, honored by figures such as Malthus and Darwin. In 19th Century British biology, final causes were thought to be indispensable for making sense of the world. The assumption was that God can be seen in the workings of nature, where his intentions are reflected, and that God's intentions can be revealed by the scientific study of nature. The idea of final cause was described by Whewell as one of the fundamental ideas of biology, necessary to biological reasoning.

Malthus used the notion of designed law, which allowed one to discover and understand God's intentions without postulating divine intervention in the world. The discrepancy between the arithmetic increase of resources and the exponential increase of population was a lawlike mechanism introduced by God to ensure that human beings would progress intellectually. Darwin took from Malthus not only the mechanism for evolution, but also the natural theological framework that made sense of evolution. In this light, evolution by natural selection consists in a process of sorting out good biological structures, whose final effect is the improvement of the species. According to Darwin, evolution by natural selection is the noblest way to see God's intentions in nature. Later in his life, Darwin became dubious about the existence of this norm or final cause of natural selection. He raised his doubts when discussing the matter with friends, in contexts where natural selection itself was not at issue. But he kept the theological reasoning in all editions of his work, despite many opportunities to take it out. The natural theological reasoning adopted by Darwin was clearly constitutive of 19th Century British biology; at the same time, it was clearly contextual, and it was dropped soon after Darwin's theory was accepted.

On the grounds of this example, Beatty turns back to Longino's distinction between constitutive and contextual values. Does the distinction break down? Longino considers objectivity the super-constitutive value of science, which is realized by checking contextual values. Moreover, Longino defends her view on democratic grounds, arguing that the existence of different points of view ensures that contextual values are detected and criticized. However, Beatty notes that the roots of Longino's notion of objectivity derive from 18th Century moral theology, and can therefore be said to be contextual. Is, then, the distinction useless? Not necessarily. For example, the distinction between basic and applied research has to do with the way in which the research is justified. The use that is made of certain notions (like simplicity in Hankinson Nelson and Wylie's example) makes them contextual or constitutive.

DISCUSSION

Fischhoff takes up the notion that science needs democratic back-up. Since we are in the post-Cold War era, we need a new characterization of US science. One desirable thing is input from the public on matters of public policy. Beatty thinks Fischhoff's suggestion fits under the more general rubric of the public accountability of science. But Machamer challenges the view that science needs a democratic justification. He traces the influence of the idea that science has to do with democracy back to the rise of bourgeois democracy and modern capitalism in the 18th Century. He adds that justifying science on democratic grounds has nothing to do with the constitutive values of science, suggesting that this sort of justification is itself nothing but a contextual factor. Why would anybody want public input on matters of public policy? What does this have to do with public health or the public interest?

Okruhlik finds Machamer's story too simple. Science is far from being founded on a democratic ideal. Many non-democratic values have been deeply embedded in the history of science. More generally, science is hardly a democratic enterprise. Machamer replies that he is talking about how science has been justified in the context of Western governments. The justification is based on (market-based) individualism with a democratic overlay. This is the general form of social justification given for science since the 18th Century.

Then Machamer asks: is there a viable distinction between constitutive and contextual values? Some people, like Beatty, think so, even if constitutive values change over time. Douglas agrees, adding that it is not very interesting that we cannot make the distinction perfectly clear. If what hangs on the distinction is objectivity, we should keep the distinction even if vague, and keep checking for contextual values.

 