My current research focuses on better understanding psychological states of confidence and uncertainty. Intuitively, we assign varying levels of confidence to different propositions (e.g. high confidence that it will rain, medium confidence that the Democrats will win the midterms). A variety of recent debates, especially in philosophy of psychology and perception, invite reflection about the fundamental nature of such states. Must levels of confidence have the structure of probabilities? What, fundamentally, distinguishes different possible levels of confidence from each other, yet unifies them as levels of confidence and not some other kind of degreed mental state?
In my current research, I defend an explication of states of confidence, one that requires them to have only minimal structure and places few demands on the psychological capacities needed to possess them. My explication centers on two conditions. The first connects levels of confidence to levels of evidential support: a level of confidence in p ought to match the level of evidential support that p's truth has, given available information. The second connects levels of confidence to levels of psychological reliance: a level of confidence in p should typically cause downstream psychological processes to use or rely on the truth of p to a matching degree.
My explication enables progress in several areas in the philosophy of psychology and perception. First, it can be used to demonstrate how perceptual systems can assign levels of confidence to perceptual estimates (e.g. that surface is red), despite having relatively unsophisticated computational resources. Second, it can be used to help organize recent disputes between realists and instrumentalists about Bayesian models in cognitive science, and to defend a minimal Bayesian realism. Third, it buttresses recent views about the phenomenology of indistinct or blurry perception, according to which such indistinctness reflects the assignment of confidence.
“Representation of Pure Magnitudes in ANS” (with Tyler Burge and Steven Gross). Behavioral and Brain Sciences, 44. 2021.
Works in Progress
E-mail me if you'd like a draft.
[Title redacted for blind review]
In cognition, we can hold different levels of confidence in a proposition. Do perceptual systems similarly attach levels of confidence to their perceptual estimates of environmental features? Recent philosophical discussion invites skepticism that perceptual systems could even assign levels of confidence. The skepticism stems from the assumption that levels of confidence must be probabilistically structured, and from unclarity about what would make a set of graded perceptual states assignments of confidence rather than some other kind of graded state. In this paper, I attempt to rebut such skepticism. I sketch a minimal conception of confidence, one that is structurally spare and that does not require sophisticated psychological capacities. I give two conditions on being a level of confidence: one connecting to epistemic support, the other to roles within subsequent psychological processing. Both conditions are reflected in the notion of confidence used in common sense and in disciplines like formal epistemology and the decision sciences. I then describe an explanatory model of some seminal results in the psychophysics of multimodal sensory cue integration. The model explains these results by associating perceptual estimates with simple sensitivities to objective environmental probabilities. I show how these sensitivities satisfy my minimal conditions on confidence, and thus demonstrate, in an empirically grounded way, how perceptual systems could assign levels of confidence.
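For readers unfamiliar with the psychophysics literature alluded to above: the classic multimodal cue-integration results are standardly modeled by reliability-weighted averaging of cues. The sketch below is a minimal illustration of that standard model, not an implementation of the paper's own account; it assumes independent Gaussian noise on each cue, and all numbers and function names are illustrative.

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Combine cue estimates, weighting each by its reliability (1/variance).

    This is the standard optimal-integration model behind classic
    visual-haptic experiments; it is illustrative, not taken from the paper.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances          # reliability = inverse variance
    weights = weights / weights.sum()  # normalize weights
    combined = float(np.dot(weights, estimates))
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# A reliable visual cue (variance 1.0) and a noisier haptic cue (variance 4.0):
est, var = integrate_cues([10.0, 14.0], [1.0, 4.0])
# est = 10.8 (pulled toward the reliable cue); var = 0.8, lower than
# either single cue's variance — the signature pattern in the data.
```

The point of the illustration: the weights themselves behave like graded sensitivities to how probable each cue's deliverances are, which is the kind of state the paper's two conditions are meant to diagnose.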
A Minimal Bayesian Realism for Perceptual Systems
In recent decades, scientific modeling of perception has made extensive and fruitful use of tools adopted from Bayesian decision theory. The success of such “Bayesian models” has led philosophers and psychologists to ask about their realism: how much psychologically real structure do they capture? I propose, first, a reframing of the question. The relevant questions of realism pertain to the causal-computational structure of perceptual processes. Typically, Bayesian models in perception science are deliberately given at a higher level of abstraction and are compatible with a great variety of processing models. Some of these processing models will “look” more Bayesian than others, so there are many ways to demarcate those that count as Bayesian. Consequently, there are different grades of Bayesian realism, corresponding to how much is required for a processing model to count as Bayesian. I focus on a minimal constraint on a processing model’s being Bayesian: that it involve the formation of credal states, or levels of confidence. Every kind of Bayesian realist should accept this constraint. The minimal constraint is not probative, however, unless we know what counts as a credal state. To that end, I defend two conditions on being a credal state, one tied to a notion of evidential support and the other tied to a notion of effect on downstream processing. I then describe a popular processing model in perception science: probabilistic population codes. I illustrate how such models embed credal states, understood according to my two conditions. A minimal Bayesian realism is thus at least as plausible as population code models are.
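For readers unfamiliar with probabilistic population codes, the following is a minimal sketch of the textbook version of the model the abstract names: independent Poisson neurons with Gaussian tuning curves, whose joint activity determines a posterior distribution over the stimulus. All parameters and names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def tuning(s, centers, width=1.0, gain=10.0):
    """Gaussian tuning curves: each neuron's expected firing rate at stimulus s."""
    return gain * np.exp(-(s - centers) ** 2 / (2 * width ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(-10, 10, 41)   # preferred stimuli tiling the range
true_s = 2.0
spikes = rng.poisson(tuning(true_s, centers))  # one noisy population response

# With independent Poisson noise, the log posterior (flat prior) over s is
#   sum_i [ r_i * log f_i(s) - f_i(s) ]  (up to a constant).
s_grid = np.linspace(-10, 10, 401)
log_post = np.array([
    np.sum(spikes * np.log(tuning(s, centers) + 1e-12) - tuning(s, centers))
    for s in s_grid
])
post = np.exp(log_post - log_post.max())
post = post / post.sum()             # normalized posterior over the grid

decoded = s_grid[np.argmax(post)]    # posterior mode, close to true_s
```

The sketch shows why such models are a natural test case for the minimal constraint: the population response does not merely fix a point estimate but a whole graded distribution, and the paper's two conditions are what license reading that distribution as a set of credal states.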