Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental (within-category) variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.
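The normative rule referenced above has a standard closed form: for continuous dimensions, each cue is weighted by its inverse sensory variance. The Python sketch below illustrates the extension argued for here, in which each cue's effective variance also includes within-category (environmental) variability; the function name, the additive variance decomposition, and the specific numbers are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def cue_weights(sensory_vars, category_vars):
    """Normalized cue weights when each cue's effective variance sums
    sensory noise and within-category (environmental) variability."""
    effective_var = np.asarray(sensory_vars, dtype=float) + np.asarray(category_vars, dtype=float)
    precisions = 1.0 / effective_var          # reliability = inverse variance
    return precisions / precisions.sum()      # classic rule: w_i proportional to precision_i

# Illustrative numbers: the auditory cue is noisier at the sensory level,
# but its task-relevant category is tighter than the visual one.
w_aud, w_vis = cue_weights(sensory_vars=[4.0, 1.0], category_vars=[0.5, 3.0])
combined_estimate = w_aud * 0.2 + w_vis * 0.8   # reliability-weighted combination of two cue values
print(w_aud, w_vis, combined_estimate)
```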
Computing in the face of uncertainty : from neurons to behavior
Thesis (Ph.D.)--University of Rochester. Dept. of Brain and Cognitive Sciences, 2010.

What are the computational mechanisms that underlie perceptual and cognitive behavior? Any answer to this question must start with the observation that the brain has to work with uncertain information at every level of analysis. The presence of uncertainty means that the problem of computation in the brain becomes one of probabilistic inference. Indeed, we can recast all cognitive processing as comprising sequential stages of probabilistic inference, performed over data of varying abstraction. In this framework, the goal of processing at a particular level is to infer the variable of interest given the input information, and the goal of learning at a particular level is to improve the quality of the inference being carried out.
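As a concrete instance of this framing (illustrative, not taken from the thesis), a single stage of inference can be written as a Bayesian posterior update whose output serves as the input to the next stage:

```python
import numpy as np

# Illustrative sketch: one stage of probabilistic inference over a binary
# variable of interest. The numbers are arbitrary, not from the thesis.
prior = np.array([0.7, 0.3])         # regularities learned at this level: p(state)
likelihood = np.array([0.2, 0.9])    # p(observed input | state) for each state

posterior = likelihood * prior       # Bayes' rule, unnormalized
posterior /= posterior.sum()         # p(state | input)
print(posterior)                     # this posterior is the "data" for the next stage
```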
In this thesis, we explore and computationally characterize the inference that underlies cognitive processing at multiple levels, using multiple research methodologies. At the neural level, we derive a simple analytic expression that relates network properties to the quality of the inference carried out during neural representation and transmission. This derivation provides an important tool for elucidating the mechanisms that lead to efficient inference. We then use this expression to explore the neural mechanisms underlying the improvements in behavioral performance observed during perceptual learning. We report that perceptual learning can be neurally mediated through an improvement in the inference process in early sensory areas. Importantly, this model, in addition to accounting for the training-induced changes in behavioral performance, also captures the training-induced changes in neural properties.
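The thesis's actual analytic expression is not reproduced in this abstract. One standard quantity of this kind, relating network properties (tuning-curve slopes and noise covariance) to achievable inference quality, is linear Fisher information, J(s) = f'(s)^T Sigma^{-1} f'(s); the sketch below computes it for a toy population and should be read as an assumption about the flavor of the result, not the actual derivation.

```python
import numpy as np

# Illustrative only: linear Fisher information for a toy neural population.
# f'(s) holds tuning-curve slopes at stimulus s; Sigma is the noise covariance.
rng = np.random.default_rng(0)
n_neurons = 50
f_prime = rng.normal(size=n_neurons)      # tuning-curve slopes at stimulus s
noise_cov = 2.0 * np.eye(n_neurons)       # toy covariance: independent neurons

# J(s) = f'(s)^T Sigma^{-1} f'(s); by the Cramer-Rao bound, any unbiased
# decoder's variance is at least 1/J, so higher J means better inference.
J = f_prime @ np.linalg.solve(noise_cov, f_prime)
print(J)
```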
Finally, at the behavioral level, we show that human multi-sensory integration during categorical speech perception is well described by a normative model of optimal inference, thereby providing behavioral evidence for efficient inference in the brain. Unlike previous studies, the work described here computationally and experimentally probes cue integration in categorical tasks, an important extension of earlier work since most real-world perceptual tasks involve judgments over categorical dimensions.
Learning and Information Use in an Intergroup Context
When faced with uncertainty, human observers maximize performance by integrating sensory information with learned task-relevant regularities. Does this behavior similarly occur in social settings? In this paper, we explore how reward-seeking behavior in an intergroup context is affected by readily available but task-irrelevant social information (in the form of group membership) when task-relevant reward information can be learned over time. Across two experiments, we show that participants learned and utilized task-relevant regularities to inform their choices. We also show that human observers are not universally biased towards utilizing social information in all settings: participants learned to disregard social information when it was not relevant to the task at hand. However, learning about the utility of social information (Experiment 2) had a long-term influence on observers' ability to subsequently learn and utilize available sources of information. Real-world intergroup contexts typically encompass situations and stimuli that the observer has previously experienced. Our findings highlight the powerful influence of learning in such contexts.
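As a rough illustration of the kind of incremental reward learning described (a hedged sketch; the paper's actual model is not specified in this abstract), a simple delta-rule learner that tracks each option's value converges to equal values when group membership does not predict reward, effectively learning to disregard the social cue:

```python
import numpy as np

# Hedged sketch, not the paper's model: a delta-rule learner tracking reward
# value per option. When the group label does not predict reward, the learned
# values converge together and the social cue carries no decision weight.
rng = np.random.default_rng(1)
alpha = 0.1                                    # learning rate
values = {"ingroup": 0.5, "outgroup": 0.5}     # initial value estimates
p_reward = {"ingroup": 0.6, "outgroup": 0.6}   # group label is task-irrelevant here

for _ in range(1000):
    if rng.random() < 0.2:                     # epsilon-greedy exploration
        choice = str(rng.choice(list(values)))
    else:
        choice = max(values, key=values.get)   # exploit current estimates
    reward = float(rng.random() < p_reward[choice])
    values[choice] += alpha * (reward - values[choice])   # delta-rule update

print(values)   # both values approach 0.6: the cue is learned to be uninformative
```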
Explicit strategies for sensorimotor learning depend on task complexity
Explicit strategies, drawing on working memory and executive function, play an important role in motor learning and adaptation. Here, using a visuomotor rotation task in which participants explicitly reported their aim angle, we examined the influence of task complexity on explicit learning, with an emphasis on capacity limitations that constrain the number of unique solutions observers are able to keep in memory. We found that increasing the target set size (from 1 to 4 targets) resulted in slower learning and slower reaction times (RTs), likely due to a combination of algorithmic simulation and memory-retrieval strategies. However, when participants were required to learn four unique target-rotation pairs simultaneously, we observed constant RTs and a similar rate of learning across rotation magnitudes, in line with participants explicitly memorizing and retrieving a unique solution for each target. These findings suggest that participants may adopt different explicit strategies depending on the complexity of the sensorimotor task.
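A hedged sketch of the two strategies contrasted above (illustrative code, not the paper's analysis; the target angles and rotations are invented): with a visuomotor rotation, the explicit solution is to re-aim opposite the imposed rotation, and unique target-rotation pairs force a memorized lookup per target.

```python
# Hedged sketch, not the paper's analysis code. With a visuomotor rotation,
# the explicit strategy is to re-aim opposite the imposed rotation; unique
# target-rotation pairs force a separate memorized solution per target.

rotations = {0: 45, 90: -30, 180: 15, 270: -60}   # illustrative target-rotation pairs (degrees)

def explicit_aim(target_deg, rotation_deg):
    """Aim angle that counters the rotation so the cursor lands on the target."""
    return (target_deg - rotation_deg) % 360

# Algorithmic simulation: compute the aim on the fly (slower, RT grows with load).
# Memory retrieval: cache one solution per target (faster, roughly constant RT).
memorized = {t: explicit_aim(t, r) for t, r in rotations.items()}
print(memorized)   # e.g., target 0 -> aim 315 to counter a +45 degree rotation
```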