In Technology Studies, 3(2):263-267, 1996

Decision Support Systems in Genetic Counseling


(Comments on Information, Knowledge and Values in Genetic Decision Making by Dr. Sue P. Stafford)

Marek J. Druzdzel
University of Pittsburgh
Department of Information Science
and Intelligent Systems Program
e-mail: marek@sis.pitt.edu

Marek J. Druzdzel is an Assistant Professor in the Department of Information Science, Intelligent Systems Program, and Medical Informatics Training Program at the University of Pittsburgh. He earned his M.S. degrees (Computer Science and Electrical Engineering) from Delft University of Technology, Delft, The Netherlands and his Ph.D. degree (Engineering and Public Policy) in 1992 from Carnegie Mellon University, Pittsburgh, USA. His research interests include decision-theoretic methods in decision support systems, user interfaces to decision support systems, and intelligent tutoring systems.


Abstract

Dr. Stafford's article addresses issues of increasing importance and urgency: Are we prepared to deal with the advances in genetic information and, in particular, the results of prenatal tests? How can we assist parents confronted with test results indicating possible genetic disorders in their babies? Can computer programs be of help to human decision makers? I agree with Dr. Stafford that computer aids to decision making have great potential in genetic counseling, and I describe one class of decision support systems (DSSs) that seems particularly well suited for this task: systems based on decision-theoretic principles.

Human Judgment and Decision-making under Uncertainty

Dr. Stafford describes a realistic scenario in which a pregnant woman, Amy, consults a practitioner concerning her pregnancy and consents to a battery of genetic tests on her baby. She then convincingly argues that Amy is not well equipped to interpret the results of these tests and is in dire need of support in her decision making. Finally, she suggests that genetic counseling services are currently inadequate and unprepared for the increasing incidence of genetic testing. One solution that she proposes is the employment of computer-based decision support systems.

While the potential severity of problems associated with unfavorable results of genetic tests is exceptionally high, from Amy's and her counselors' point of view these problems can be viewed simply as instances of complex decision making. The complexity originates from uncertainties (uncertainties related to the results of genetic tests are large; see for example Hubbard and Lewontin, 1996), multiple conflicting objectives, and high stakes. Other members of this class include interpretation of medical test results or risks related to medical procedures in general, command and control of military combat units, control of nuclear power plants, management of complex industrial processes, corporate hiring, university admissions, and many others. There seems to be considerable agreement among behavioral psychologists that human judgment and decision making in complex, stressful situations is usually far from optimal. Cognitive heuristics that help us reduce the complexity of reasoning often lead to systematic errors, known as judgmental biases (Kahneman et al. 1982).

One might hope, and Dr. Stafford seems to support this view, that Amy's counselors will not be subject to judgmental biases and will do a good job of assisting her in her difficult decision. Here I seem to disagree with the author, and this disagreement has a profound impact on our respective views of the philosophical and technical foundations of DSSs. Dr. Stafford writes: "The ability to see and understand the relationships and what they mean apparently comes with experience" and later "[experienced underwriters] are able to see a complexity of interrelationships that is simply unnoticed by others. What experience provides is a knowledge of how pieces of information are related and a capacity to apply that knowledge efficiently." While empirical evidence suggests that experts indeed perform better than novices within their area of expertise, they also seem to be liable to the same judgmental biases as novices. In addition to laboratory evidence, there are several studies of expert performance in realistic settings showing that expert performance is inferior to even simple linear models (an informal review of the available evidence and pointers to the literature can be found in an excellent book by Dawes (1988)). And so, predictions of future violent behavior of psychiatric patients made by a panel of psychiatrists, who had access to patient records and had interviewed the patients, were found to be inferior to a simple model that included only the past incidence of violent behavior. Predictions of marriage counselors concerning marital happiness were shown to be inferior to a simple model that just subtracted the rate of fighting from the rate of sexual intercourse (again, the marriage counselors had access to all data, including interviews with the couples). Similar studies have been conducted with bank loan officers, physicians, university admission committees, etc.
Benjamin Franklin once wrote "experience keeps a dear school" adding later "yet fools will learn in no other [school]" (Franklin 1757/1773). Experience alone seems to be too costly to rely on.

There are two philosophically distinct approaches to computer-based decision support. The first, represented by so-called "expert systems," aims at building systems that imitate human experts. The danger of this approach, increasingly appreciated in the artificial intelligence community, is that along with imitating human thinking and its efficient heuristic principles we may imitate its undesirable flaws, empirically demonstrated in the case of judgment and decision making under uncertainty. The second approach is based on the assumption that the soundest way of dealing with complex decisions is through a small set of normatively sound principles of how decisions should be made. The goal of a DSS, according to this view, is to support unaided human intuition, just as the goal of using a calculator is to aid humans' limited capacity for mental arithmetic.

Decision-theoretic Decision-support Systems

A normative theory of decision making that has gained the strongest ground is based on the subjectivist Bayesian view of probability theory and is known simply as decision theory. While probability theory provides a formalism for the treatment of uncertainty, decision theory extends it with a set of principles for consistency among preferences and decisions. Preferences describe relative valuations of outcomes, while decisions are actions that are under the decision maker's control.

The art and science of applying statistical and decision-theoretic methods to aid decision making in the real world is known as decision analysis. Decision analysis is based on the paradigm that people are able to reliably store and retrieve their personal beliefs about uncertainty and preferences for different outcomes, but are much less reliable in aggregating these fragments into a global inference. This approach is often also called prescriptive (as opposed to descriptive), as it prescribes the action to be taken. Given the beliefs elicited from the decision maker, it tells what the decision maker should do if she is willing to act on her beliefs. Decision analysis includes a wealth of methods for model construction, such as methods for eliciting probability distributions that minimize human bias, methods for checking the sensitivity of a model to imprecision in the data, methods for computing the value of obtaining additional information, etc. (see, for example, von Winterfeldt & Edwards 1988 for a basic review of the available techniques). The goal of decision analysis is to provide insight into the decision, and this insight, consisting of the analysis of all relevant factors, their uncertainty, and the criticality of some assumptions, is even more important than the actual recommendation.
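The prescriptive step can be sketched in a few lines. The following is a minimal illustration, with entirely hypothetical action names, probabilities, and utilities standing in for the beliefs and preferences that would be elicited from the decision maker:

```python
# A minimal sketch of the prescriptive step of decision analysis:
# given elicited probabilities over states of the world and utilities
# for each (action, state) pair, recommend the action with the highest
# expected utility. All names and numbers below are hypothetical.

def expected_utility(action, probabilities, utilities):
    """Sum over states of P(state) * U(action, state)."""
    return sum(probabilities[s] * utilities[action][s] for s in probabilities)

# Hypothetical elicited beliefs and preferences for two candidate actions.
probabilities = {"state_a": 0.3, "state_b": 0.7}
utilities = {
    "act_1": {"state_a": 100, "state_b": 20},
    "act_2": {"state_a": 40, "state_b": 60},
}

# The recommended action under these beliefs and preferences.
best = max(utilities, key=lambda a: expected_utility(a, probabilities, utilities))
print(best)  # prints "act_2": EU(act_1) = 44, EU(act_2) = 54
```

The recommendation is conditional on the inputs: change the elicited probabilities or utilities and the prescribed action may change with them, which is precisely why sensitivity analysis is part of the method.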

The theoretical foundations of decision theory and the tools and experience of decision analysis have in the last ten years or so given rise to so-called normative DSSs. These systems are built on the normative principles of decision theory and are capable of handling large decision models (on the order of hundreds of variables). I will argue that they are a good choice for genetic counseling.

Normative DSSs in Genetic Counseling

Probability theory, which is at the foundation of decision theory, provides a technically sound way of combining uncertain information. Few domains have as much uncertainty as medicine -- in fact, the first systems based on probabilistic and decision-theoretic methods were built by researchers working in medical informatics communities. Statistics capturing the prevalence of disorders are often collected and made widely available. The quality of each medical test, and this includes genetic tests, is described by two numbers: sensitivity and specificity. Sensitivity and specificity are in fact conditional probabilities of the test giving correct information (a positive or negative result given presence or absence of the disorder, respectively) and can be directly incorporated into a probabilistic model. The component that Dr. Stafford considered one of the most difficult in decision making, combining various risks and results of tests, comes almost for free in this approach.
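The combination of prevalence with sensitivity and specificity is a direct application of Bayes' rule. The sketch below uses hypothetical numbers, not statistics for any actual genetic test:

```python
# A minimal sketch of combining test quality with prior prevalence via
# Bayes' rule. All numbers are hypothetical illustrations.

def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disorder | positive test) by Bayes' rule."""
    p_pos_given_disorder = sensitivity        # true positive rate
    p_pos_given_healthy = 1.0 - specificity   # false positive rate
    p_pos = (p_pos_given_disorder * prevalence
             + p_pos_given_healthy * (1.0 - prevalence))
    return p_pos_given_disorder * prevalence / p_pos

# Hypothetical: a rare disorder (1 in 1,000) and a fairly accurate test.
p = posterior_given_positive(prevalence=0.001,
                             sensitivity=0.99,
                             specificity=0.98)
print(round(p, 3))  # prints 0.047
```

The result illustrates why unaided interpretation is so error-prone: even with a sensitive and specific test, a positive result for a rare disorder may still leave the probability of the disorder below five percent, because false positives among the healthy majority dominate.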

One of the problems that Dr. Stafford identified in her paper is the subjective character of patients' decisions. An important element of the normative decision support is the concept of a decision maker, the person (or a group of people such as a corporation or the society) whose decision is being modeled. As I mentioned earlier, decision theory is based on the subjectivist Bayesian approach to probability theory and it carefully spells out the subjective character of any decision. Theoretically, both the measure of uncertainty and the measure of preferences are subjective. In practice, probabilities can be based on available statistics (such as prevalence data or sensitivity and specificity of tests) enhanced with expert judgments. The ultimate model, however, captures the decision maker's view of the decision and should be compatible with the decision maker's system of values and beliefs. This is in agreement with the thorough discussion in Dr. Stafford's paper of ethical issues and reliance on the decision maker's system of values. Subjective information from the decision maker will not only determine the available decision options (e.g., whether she considers abortion a possible choice or not) but also their relative desirability.

The hardest part of building a decision model is determining its structure, i.e., what exactly is relevant to the decision and how the various relevant variables are connected to each other. Graphical modeling tools, such as influence diagrams (Shachter 1986) or Bayesian belief networks (Pearl 1988), allow for correct modeling of interactions without making the unsubstantiated independence assumptions for which early probabilistic systems have been criticized. These tools allow for the correct encoding of scientific and medical knowledge into a coherent and comprehensive model. In the case of genetic counseling, large parts of such a model, at least everything that relates to the medical data, will be similar or even identical for most decision makers. Parts of the model can be easily reused or, alternatively, the part most relevant to a given decision problem can be identified (Druzdzel & Suermondt 1994).
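The key idea behind these graphical tools is that the network structure factors the joint distribution into small conditional tables. Below is a minimal sketch with a hypothetical three-variable network (the structure and every number are illustrative assumptions, not data from the source): family history H influences disorder D, which influences the test result T, so the joint factors as P(H) * P(D | H) * P(T | D).

```python
# A minimal sketch of a Bayesian belief network: the joint distribution
# over family history (H), disorder (D), and test result (T) factors
# along the network structure H -> D -> T, so only three small tables
# are needed. The structure and all numbers are hypothetical.

p_h = {True: 0.05, False: 0.95}                  # P(H)
p_d = {True: {True: 0.02, False: 0.98},          # P(D | H=True)
       False: {True: 0.0005, False: 0.9995}}     # P(D | H=False)
p_t = {True: {True: 0.99, False: 0.01},          # P(T | D=True)
       False: {True: 0.02, False: 0.98}}         # P(T | D=False)

def joint(h, d, t):
    """Joint probability from the network factorization."""
    return p_h[h] * p_d[h][d] * p_t[d][t]

def query_disorder(h, t):
    """P(D=True | H=h, T=t) by enumerating the joint."""
    num = joint(h, True, t)
    return num / (num + joint(h, False, t))

# The same positive test means very different things given family history.
print(round(query_disorder(True, True), 3))   # prints 0.503
print(round(query_disorder(False, True), 3))  # prints 0.024
```

Note that only five independent numbers specify the full eight-cell joint distribution, and the same model answers any query over its variables; this economy is what makes large reusable medical models practical.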

A DSS will not be effective unless it can communicate with its users effectively. This includes model building and explanation of the results of its reasoning. While work in these areas is ongoing, methods have been proposed to deal with both (e.g., Druzdzel et al. 1995; Druzdzel 1996; Henrion & Druzdzel 1990; Suermondt 1992).

Normative DSSs offer a theoretically correct and appealing way of handling uncertainty and preferences in decision problems. They are based on carefully studied empirical principles underlying the discipline of decision analysis, and they have been applied in several practical systems. This approach seems to address several of the problems identified by Dr. Stafford, and I believe that, as far as technical developments are concerned, it is the most likely approach to prevail in the long run.

References

  • Robyn M. Dawes (1988). Rational Choice in an Uncertain World. Harcourt Brace Jovanovich, Publishers, San Diego.
  • Marek J. Druzdzel (1996). Qualitative Verbal Explanation in Bayesian Belief Networks. Artificial Intelligence and Simulation of Behaviour Quarterly, special issue on Bayesian belief networks, 94:43-54.
  • Marek J. Druzdzel, Linda C. van der Gaag, Max Henrion and Finn V. Jensen, eds. (1995). Working notes of the workshop Building Probabilistic Networks: Where Do the Numbers Come From? held in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, Canada.
  • Marek J. Druzdzel and Henri J. Suermondt (1994). Relevance in Probabilistic Models: 'Backyards' in a 'Small World'. Working notes of the AAAI-1994 Fall Symposium Series: Relevance, pages 60-63, New Orleans, LA (An extended version of this paper is currently under review)
  • Benjamin Franklin (1757/1773). Poor Richard's Almanac. David McKay, Inc., New York
  • Max Henrion and Marek J. Druzdzel (1990). Qualitative Propagation and Scenario-Based Approaches to Explanation of Probabilistic Reasoning. In P.P. Bonissone, M. Henrion, L.N. Kanal and J.F. Lemmer, editors, Uncertainty in Artificial Intelligence 6, pages 17-32. Elsevier Science Publishers B.V., Holland
  • Ruth Hubbard and R.C. Lewontin (1996). Pitfalls of Genetic Testing. The New England Journal of Medicine, 334(18):1192-1193
  • Daniel Kahneman, Paul Slovic and Amos Tversky, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, England
  • Judea Pearl (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Mateo, CA
  • Ross D. Shachter (1986). Evaluating Influence Diagrams. Operations Research, 34(6):871-882
  • Henri J. Suermondt (1992). Explanation in Bayesian Belief Networks. Ph.D. Dissertation, Department of Computer Science and Medicine, Stanford University
  • Detlof von Winterfeldt and Ward Edwards (1988). Decision Analysis and Behavioral Research. Cambridge University Press, Cambridge, England.
