Qualitative and linguistic explanations of probabilistic reasoning in belief networks


Authors:
Max Henrion
Rockwell International Science Center
Palo Alto Laboratory
e-mail: henrion@camis.stanford.edu
(currently with:
Lumina Decision Systems)

Marek J. Druzdzel
Carnegie Mellon University
Department of Engineering and Public Policy
(currently with:
University of Pittsburgh
Department of Information Science
and Intelligent Systems Program
e-mail: marek@sis.pitt.edu)

Introduction:
In debates about the merits of alternative approaches to reasoning under uncertainty, probabilistic schemes have often been criticized as incompatible with human reasoning [7]. Such criticisms focus on the mismatch between the apparently qualitative and linguistic representations characteristic of human cognition and the formal quantitative representations of probabilistic schemes. Even if one admits the appeal of the normative principles underlying probability and decision theory over heuristic schemes known to be subject to biases and inconsistencies, one may legitimately be concerned about how to translate between human beliefs and knowledge and formal quantitative representations. Such translation needs to be done in both directions: encoding uncertain human knowledge into probabilistic form, and explaining the results of probabilistic reasoning in a form amenable to human understanding.

Only recently has attention begun to be paid to automating the generation of comprehensible qualitative explanations of probabilistic reasoning. Elsaesser [4] provides some empirical evidence on the efficacy of explanations of simple Bayesian inference, with one variable and one observation. Sember and Zukerman [10] describe a scheme for generating micro explanations, that is, explanations of the local propagation of evidence between two or three variables in a belief network. In this paper, we present an approach to generating macro explanations, intended to explain probabilistic reasoning over much larger networks.
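To make the contrast concrete, the following minimal Python sketch (with purely illustrative numbers, not drawn from any of the cited systems) shows the kind of single evidence-propagation step that a micro explanation must verbalize, together with a crude template-based rendering of the result:

    # Illustrative two-variable update (numbers are made up): the kind of
    # local evidence propagation that a micro explanation must verbalize.

    prior_h = 0.01          # P(H): prior belief in the hypothesis
    p_e_given_h = 0.90      # P(E|H): probability of the evidence if H holds
    p_e_given_not_h = 0.10  # P(E|~H): probability of the evidence otherwise

    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    posterior_h = p_e_given_h * prior_h / p_e

    # A simple template-based linguistic rendering of the update.
    print(f"Observing the evidence raises belief in the hypothesis from "
          f"{prior_h:.0%} to {posterior_h:.1%}: the evidence is "
          f"{p_e_given_h / p_e_given_not_h:.0f} times more likely when the "
          f"hypothesis holds than when it does not.")

A macro explanation, by contrast, must summarize the combined effect of many such steps across a large network.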

It is useful to distinguish explanation as communication of static knowledge, as represented in a Bayesian belief network, for example, from explanation of dynamic reasoning, that is, of how beliefs are updated in the light of new evidence. We believe that the development of effective explanations is likely to be greatly helped by a deeper psychological understanding of human reasoning under uncertainty, so we began our research with empirical studies of the cognitive processes involved in plausible reasoning. As we shall describe, this has led us to develop a novel approach to explaining reasoning based on quasi-deterministic scenarios. We wish to avoid dogmatism about which kinds of schemes will be most effective, and instead explore a variety of approaches, including qualitative and numerical, graphical and linguistic representations. We illustrate some of these with fragments of explanations from our prototype explanation system.
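As a rough picture of the scenario-based idea, the following sketch (the network, numbers, and ranking scheme are illustrative assumptions, not the machinery of our prototype) enumerates the complete instantiations of a small chain network that are consistent with the evidence, computes each scenario's probability by the chain rule, and ranks the scenarios so that the most probable ones can be presented verbally:

    from itertools import product

    # Illustrative chain network Flu -> Fever -> HighTemp (made-up numbers).
    p_flu = {True: 0.05, False: 0.95}
    p_fever = {True: {True: 0.90, False: 0.10},   # P(Fever | Flu)
               False: {True: 0.02, False: 0.98}}
    p_temp = {True: {True: 0.95, False: 0.05},    # P(HighTemp | Fever)
              False: {True: 0.01, False: 0.99}}

    def scenario_prob(flu, fever, temp):
        # Chain rule over the network's topological order:
        # P(Flu, Fever, HighTemp) = P(Flu) P(Fever|Flu) P(HighTemp|Fever)
        return p_flu[flu] * p_fever[flu][fever] * p_temp[fever][temp]

    # Evidence: a high temperature was observed. Enumerate every complete
    # scenario consistent with it and rank the scenarios by probability.
    scenarios = [(scenario_prob(flu, fever, True), flu, fever)
                 for flu, fever in product((True, False), repeat=2)]
    scenarios.sort(reverse=True)
    total = sum(p for p, *_ in scenarios)

    # An explanation would present the top few scenarios in words.
    for p, flu, fever in scenarios[:2]:
        print(f"Flu={flu}, Fever={fever}, HighTemp=True: "
              f"{p / total:.1%} of the posterior probability")

Ranking complete scenarios in this way replaces a numerical posterior with a small set of quasi-deterministic stories, which is the form of output the explanations discussed below aim to convey.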

