Schedule & talks


Catrin Campbell-Moore – Cambridge/Bristol
Kenny Easwaran – Texas A&M University
Branden Fitelson – Northeastern University
Pavel Janda – University of Bristol
James M. Joyce – University of Michigan
Jason Konek – Kansas State University
Ben Levinstein – University of Oxford
Richard Pettigrew – University of Bristol
Patricia Rich – University of Bristol
Miriam Schoenfield – University of Texas


Monday 13th June

  • 11am – 11.15am: Tea & coffee
  • 11.15am – 1pm: Branden Fitelson – Two Approaches to Belief Revision
  • 1pm – 2.30pm: Lunch
  • 2.30pm – 4.15pm: Kenny Easwaran – The Tripartite Role of Belief
  • 4.15pm – 4.30pm: Tea & coffee
  • 4.30pm – 6.15pm: Patricia Rich – Accuracy and Strategic Belief Choice

Tuesday 14th June

  • 9.15am – 11am: Miriam Schoenfield – Permissivism, Disagreement, and the Value of Rationality
  • 11am – 11.15am: Tea & coffee
  • 11.15am – 1pm: Ben Levinstein – Higher-Order Evidence, Accuracy, and Information Loss
  • 1pm – 2.30pm: Lunch
  • 2.30pm – 4.15pm: Pavel Janda – Intertemporal Decisions in Epistemic Utility Theory
  • 4.15pm – 4.30pm: Tea & coffee
  • 4.30pm – 6.15pm: Richard Pettigrew – What is conditionalization, and why should we do it?

Wednesday 15th June

  • 9.15am – 11am: Catrin Campbell-Moore – Risk aversion, epistemic rationality, and evidence-gathering
  • 11am – 11.15am: Tea & coffee
  • 11.15am – 1pm: Jason Konek – Reliability-based Jeffrey conditioning and expected accuracy
  • 1pm – 2.30pm: Lunch
  • 2.30pm – 4.15pm: James M. Joyce – Accuracy and Updating


Catrin Campbell-Moore – Risk Aversion, Epistemic Rationality, and Evidence Gathering (with Bernard Salow)

Lara Buchak has recently developed a risk-aware decision theory that determines the rational course of action for risk-averse agents. In “Instrumental Rationality, Epistemic Rationality, and Evidence Gathering”, she shows that a consequence of this theory is that risk-avoidant agents should sometimes decline to look at freely available evidence, because doing so might lower their risk-weighted expected utility. We will show that, from the point of view of epistemic rationality too, an agent should sometimes not look at freely available evidence. This involves modifying the usual ways of measuring the epistemic utility of credal states. We will also show that, although such agents should sometimes not look at free evidence, once they have seen it they should nonetheless update by conditionalisation.

Kenny Easwaran – The Tripartite Role of Belief

Belief and credence are often characterized in three different ways – they ought to govern our actions, they ought to be governed by our evidence, and they ought to aim at the truth. If one of these roles is to be central, we need to explain why the others should be features of the same mental state rather than separate ones. If multiple roles are equally central, then this may cause problems for some traditional arguments about what belief and credence must be like.

Branden Fitelson – Two Approaches to Belief Revision (with Ted Shear and Jonathan Weisberg)

In this paper, we compare and contrast two methods for revising qualitative (viz., “full”) beliefs. The first method is a naïve Bayesian one, which operates via conditionalization (and, more generally, via mechanical/minimum distance updating) and the minimization of expected inaccuracy. The second method is the AGM approach to belief revision (which can also be understood in terms of mechanical/minimum distance updating). Our aim here is to provide the most straightforward explanation of the ways in which these two methods agree and disagree with each other when it comes to imposing diachronic constraints on agents with deductively cogent beliefs. Some novel (and surprising) convergences and divergences between the two approaches are uncovered.

Pavel Janda – Intertemporal decisions in epistemic utility theory

James M. Joyce – Accuracy and Updating

Proponents of “accuracy-first” epistemology have argued that proper scoring rules can be used to assess the accuracy of credences, and have suggested that certain core epistemic norms for credences can be understood and justified with the help of such rules. Many who go this route are attracted to the idea that revising credences in light of new evidence should proceed by a process of “divergence minimization”. The idea is that, upon receiving a new item of data, an agent should move to the credal state, among those consistent with that data, which maximizes expected accuracy. This provides a rationale for Bayesian conditioning in contexts where the new data involves learning some proposition with certainty, but for less conclusive experiences it can be shown that each proper score has its own characteristic divergence-minimizing update rule. For example, the so-called logarithmic score has Jeffrey conditioning as its associated update, while H. Leitgeb and R. Pettigrew have shown that the Brier (quadratic loss) score has a completely different update. Since there are overwhelming epistemic reasons to prefer Jeffrey conditioning to any other update rule, it looks as if we must either jettison “divergence minimization” or embrace the log score as the one true measure of epistemic accuracy. Neither option is appealing: the accuracy-centered approach seems committed to divergence minimization at some level, but its appeal diminishes if it must single out some particular score as uniquely correct. Moreover, as I will show, the log score has some serious drawbacks when it comes to updating (in certain contexts); specifically, it fails to recognize that certain sorts of belief changes, even those with a high degree of reliability, can decrease accuracy. Fortunately, there is no real dilemma here.
By adapting to the accuracy context a trick due originally to Brian Skyrms, I will show that a proper application of the “divergence updating” process mandates Jeffrey conditioning as the uniquely correct updating rule in all the contexts where it can be applied. In the course of making the case for this conclusion, some suggestions will be made about how to understand the various “value of learning” results in this context.

Jason Konek – Reliability-Based Jeffrey Conditioning and Expected Accuracy

Ben Levinstein – Accuracy, Discrimination, and Higher-Order Evidence

Richard Pettigrew – What is conditionalization, and why should we do it?

Patricia Rich – Accuracy and Strategic Belief Choice

Miriam Schoenfield – Permissivism, Disagreement, and the Value of Rationality

In the past few years permissivism has enjoyed something of a revival, but it is once again being threatened, this time by a host of new and interesting arguments that, at their core, challenge the permissivist to explain why rationality matters. A version of the challenge that I am especially interested in is this: if permissivism is true, why should we (in some sense) expect the rational credences to be more accurate than the irrational ones? It has also been argued that permissivism, even if true, doesn’t have the implications for the debate about peer disagreement that it’s frequently taken to have. In this paper I’ll argue that, in fact, it is those who deny permissivism who will have trouble explaining why rationality matters and why it conduces to accuracy. I then use these considerations to explain why permissivism does indeed have important implications for the debate about peer disagreement.

The conference is supported by the European Research Council.
