
Pitt-Tsinghua Summer School for Philosophy of Science

Institute of Science, Technology and Society, Tsinghua University • Center for Philosophy of Science, University of Pittsburgh


2014: Kevin Zollman, Introduction to Social Epistemology


Suppose two equally intelligent individuals come to discover that they disagree about some statement of interest. Perhaps one believes that China will win the world championship in badminton while the other believes that Thailand will win it. They share their evidence with one another, and both believe that the other is honestly revealing what she knows. Is it possible that these two individuals could persist in their disagreement, even after they have shared all of their private evidence?

Discussion of this question began with the issue of religion. Could two equally rational people who have all the same evidence reasonably disagree about religious beliefs? But there is nothing really particular to religion about this question. We will read two slightly different perspectives on this problem from the philosophy and economics literatures.


Richard Feldman (2004) “Reasonable religious disagreement” Manuscript

David Christensen (2009) “Disagreement as evidence: the epistemology of controversy” Philosophy Compass

Robert Aumann (1976) “Agreeing to disagree” Annals of Statistics

Giacomo Bonanno and Klaus Nehring (1997) “Agreeing to disagree: A survey” Manuscript
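Aumann's (1976) answer to the opening question can be made concrete with a toy Bayesian calculation. The four states, the two information partitions, and the event of interest below are all invented for illustration; the point they dramatize is that two agents with a common prior may disagree before sharing evidence, but cannot once all private evidence is pooled.

```python
from fractions import Fraction

states = [1, 2, 3, 4]
prior = {s: Fraction(1, 4) for s in states}   # common prior (assumption)
E = {1, 2}                                    # the event both agents care about

def posterior(event, info):
    """P(event | info) under the common prior."""
    p_info = sum(prior[s] for s in info)
    return sum(prior[s] for s in event & info) / p_info

# Private evidence: each agent learns which cell of her partition obtains.
true_state = 1
info_a = {1, 2} if true_state in {1, 2} else {3, 4}   # A's partition
info_b = {1, 3} if true_state in {1, 3} else {2, 4}   # B's partition

# Before sharing, the two posteriors differ:
assert posterior(E, info_a) == 1
assert posterior(E, info_b) == Fraction(1, 2)

# After pooling all private evidence, a common prior forces agreement:
shared = info_a & info_b
print(posterior(E, shared))  # → 1 for both agents
```

On Aumann's view, then, persistent disagreement after full disclosure signals either different priors or a failure of common knowledge, which is what the survey by Bonanno and Nehring examines.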



Groups must make decisions all the time. We must decide where to go to dinner or what policy to enact. Often we might simply vote and adopt the option preferred by the majority. But other times we also want the group to arrive at something like a “group opinion.” Does the IPCC believe that climate change is caused by humans? Does the UN believe that torture is wrong? Does the US Supreme Court believe that the Affordable Care Act violates the US Constitution?

First, we might ask whether it makes sense to talk about group opinions at all. If it does, we might then ask how a group opinion could be formed out of the diverse individual opinions of the group's members. If everyone agreed, this might be simple (although not always). But if people disagree, and we cannot resolve that disagreement, how should the group opinion be formed? This problem turns out to be fraught with difficulty.


Margaret Gilbert (1989) “Chapter 5: After Durkheim: Concerning collective belief” On Social Facts

Christian List and Philip Pettit (2004) “Aggregating sets of judgments: Two impossibility results compared” Synthese

Morris DeGroot (1974) “Reaching a consensus” Journal of the American Statistical Association

Teddy Seidenfeld, Mark Schervish, and Jay Kadane (1989) “On the shared preferences of two Bayesian decision makers” Journal of Philosophy
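DeGroot's (1974) paper proposes one concrete answer to how a group opinion might form: each member repeatedly revises her opinion to a weighted average of everyone's current opinions, and under mild conditions on the weights the whole group converges to a single consensus value. A minimal sketch, with made-up weights and starting opinions:

```python
# Three members' initial probabilities for some claim (invented numbers).
opinions = [0.9, 0.4, 0.1]

# trust[i][j]: the weight member i places on member j (each row sums to 1).
trust = [
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

# Repeated averaging: every round, each member adopts her weighted
# average of the whole group's current opinions.
for _ in range(100):
    opinions = [sum(trust[i][j] * opinions[j] for j in range(3))
                for i in range(3)]

print([round(x, 4) for x in opinions])  # all three members now agree
```

The consensus value is a weighted blend of the initial opinions, with each member's influence determined by how much the others trust her, so the model also makes vivid why the choice of weights is itself a contested epistemic question.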



As humans, we mostly learn from others. This is one of the major differences between humans and most other animals – we come to know things through other people. Oddly, much of traditional epistemology has ignored this fact, or at least has treated testimonial knowledge as no different in kind from any other kind of knowledge (like that gained from measurement instruments). Some philosophers, however, have begun to ask whether there might be something different about knowledge gained through testimony.

Much of the debate has focused on the question of what justifies (or “grounds”) testimonial knowledge. But related questions arise as well: Who should I trust when two individuals disagree? Who should I seek out when I want to learn about some domain of interest? All of these questions are connected, and each has been discussed to varying degrees.


Jennifer Lackey (2006) “Knowing from testimony” Philosophy Compass

Alvin Goldman (2001) “Experts: Which ones should you trust?” Philosophy and Phenomenological Research

Conor Mayo-Wilson (2013) “Reliability of testimonial norms in scientific communities” Synthese

Kevin Zollman (2014) “A systems oriented approach to the problem of testimony” Manuscript
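One broadly Bayesian way to handle the question of whom to trust when two testifiers disagree, in the spirit of Goldman (2001), is to treat each testifier as a noisy channel with a known reliability. The reliabilities, the prior, and the independence assumption below are ours, invented for illustration:

```python
def trust_update(prior, reports):
    """Posterior probability of H after independent testimonial reports.
    reports: list of (reliability, says_H) pairs, where reliability is the
    chance the testifier reports correctly."""
    like_h, like_not_h = prior, 1 - prior
    for reliability, says_h in reports:
        like_h *= reliability if says_h else 1 - reliability
        like_not_h *= 1 - reliability if says_h else reliability
    return like_h / (like_h + like_not_h)

# A highly reliable testifier asserts H; a less reliable one denies it.
p = trust_update(0.5, [(0.9, True), (0.7, False)])
print(round(p, 4))  # → 0.7941: the more reliable voice carries more weight
```

Of course, this pushes the problem back a step: estimating a testifier's reliability is itself largely a matter of further testimony, which is one reason Goldman treats the novice/expert problem as hard.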



Diversity is valuable for many different reasons. We enjoy learning about different people and different cultures, it keeps our daily lives from becoming boring, and it helps to prevent the unjust domination of one group of people by another. But, if we set all that aside for a moment, we can ask: does diversity make groups better or worse at learning about the world? A quick reaction might be to say that it makes them worse: if there is an optimal way to learn about the world, wouldn't the best group be a collection of individuals who all employ that optimal strategy?

It turns out that things are not so simple. A number of scholars have argued, from different perspectives, that diverse groups can actually outperform homogeneous groups, even when the latter are made up of individually very good learners. Here we will read several papers that argue for this conclusion.


Miriam Solomon (1992) “Scientific rationality and human reasoning” Philosophy of Science

Lu Hong and Scott Page (2004) “Groups of diverse problem solvers can outperform groups of high-ability problem solvers” Proceedings of the National Academy of Sciences

Conor Mayo-Wilson, Kevin Zollman, and David Danks (2011) “The independence thesis: When individual and social epistemology diverge” Philosophy of Science

Kevin Zollman (2010) “The epistemic benefit of transient diversity” Erkenntnis
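Hong and Page (2004) make their argument with a model of heuristic search, which the following much-simplified sketch imitates (the ring size, heuristic pool, and group sizes are arbitrary choices of ours). Agents climb a random landscape using personal repertoires of step sizes, and a group searches by relay, each member continuing from wherever the last one got stuck. The paper's claim is statistical: over many landscapes, a randomly composed (more diverse) group tends to match or beat the group of individually best climbers, though any single run may go either way.

```python
import random

N = 200          # circular landscape of N positions
random.seed(0)   # fixed seed so the sketch is reproducible

def make_landscape():
    return [random.uniform(0, 100) for _ in range(N)]

def climb(heuristic, landscape, start):
    """Greedy search: take the first step in the heuristic that improves
    the current position; stop when no step improves."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            if landscape[(pos + step) % N] > landscape[pos]:
                pos, improved = (pos + step) % N, True
                break
    return pos

def ability(heuristic, landscape):
    """An agent's average final value over all starting points."""
    return sum(landscape[climb(heuristic, landscape, s)] for s in range(N)) / N

def group_score(group, landscape):
    """Relay search: members take turns improving the shared solution."""
    total = 0.0
    for start in range(N):
        pos, improved = start, True
        while improved:
            improved = False
            for h in group:
                new = climb(h, landscape, pos)
                if landscape[new] > landscape[pos]:
                    pos, improved = new, True
        total += landscape[pos]
    return total / N

pool = [tuple(random.sample(range(1, 13), 3)) for _ in range(40)]
landscape = make_landscape()
ranked = sorted(pool, key=lambda h: ability(h, landscape), reverse=True)
best_group = ranked[:10]                   # the ten individually best solvers
diverse_group = random.sample(pool, 10)    # ten randomly chosen solvers

bs = round(group_score(best_group, landscape), 2)
ds = round(group_score(diverse_group, landscape), 2)
print(bs, ds)
```

The mechanism is that individually excellent climbers tend to share step sizes and so get stuck at the same local optima, while a motley group's differing repertoires let each member rescue the others.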



The traditional model of science is that a collection of inquirers is motivated solely by an interest in the unknown. They work to learn about the world, with only the cessation of ignorance as their reward. But science does not work that way. Scientists are rewarded financially, with recognition, with prizes, and with grants.

Some have argued that these “non-epistemic” rewards serve to derail science – to make it less effective or, worse still, irrational. Others have suggested that these rewards have helped to make science more effective by solving certain types of problems inherent in the scientific enterprise. Finally, some have taken a middle ground, arguing that carefully crafted reward structures help while others might hurt.


Partha Dasgupta and Paul David (1994) “Toward a new economics of science” Research Policy 

Justin Bruner (2013) “Policing epistemic communities” Episteme

Philip Kitcher (1990) “The division of cognitive labor” Journal of Philosophy

Kevin Zollman (2014) “The credit economy and the economic rationality of science” Manuscript
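Kitcher (1990) makes the optimistic case with a model of the division of cognitive labor. A sketch of that kind of model, with an invented success function and invented numbers: two rival research programs show diminishing returns to extra workers, so the community is better served by a split than by everyone joining the single most promising program, which is what purely truth-directed individual choice might produce. Credit-seeking, on Kitcher's telling, can push scientists toward the socially better split.

```python
TOTAL = 100   # size of the scientific community (invented)

def p_success(weight, n, k=20):
    """Chance a program succeeds with n workers; concave in n, so
    each additional worker helps less than the last (an assumption)."""
    return weight * n / (n + k)

def community_success(n1):
    """Chance that at least one program succeeds when n1 scientists
    work on program 1 and the rest on program 2."""
    p1 = p_success(0.9, n1)           # program 1 is the more promising one
    p2 = p_success(0.6, TOTAL - n1)
    return 1 - (1 - p1) * (1 - p2)

# If every scientist joins the more promising program:
everyone_on_best = community_success(TOTAL)

# The community's best division of labor, found by brute force:
best_split = max(range(TOTAL + 1), key=community_success)

print(best_split, round(community_success(best_split), 4),
      round(everyone_on_best, 4))
```

With these numbers the optimal split leaves a substantial minority on the less promising program and yields a higher chance of community success than unanimity does, which is the structural point the Dasgupta-David and Bruner readings also probe from economic angles.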



Revised 9/11/18 - Copyright 2006-2009