
    Optimal measurement of visual motion across spatial and temporal scales

    Sensory systems use limited resources to mediate the perception of a great variety of objects and events. Here a normative framework is presented for exploring how the problem of efficient allocation of resources can be solved in visual perception. Starting with a basic property of every measurement, captured by Gabor's uncertainty relation about the location and frequency content of signals, prescriptions are developed for optimal allocation of sensors for reliable perception of visual motion. This study reveals that a large-scale characteristic of human vision (the spatiotemporal contrast sensitivity function) is similar to the optimal prescription, and it suggests that some previously puzzling phenomena of visual sensitivity, adaptation, and perceptual organization have simple principled explanations. Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional and Intelligent Paradigms, Intelligent Systems Reference Library, Springer-Verlag, Berlin
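    For reference, Gabor's uncertainty relation invoked in this abstract bounds how precisely any measurement can simultaneously localize a signal in time and in frequency. In its standard one-dimensional form, with spreads defined as root-mean-square widths (the chapter may use a different normalization), it reads:

    ```latex
    \Delta t \, \Delta f \;\ge\; \frac{1}{4\pi}
    ```

    A sensor with a narrow temporal window therefore necessarily has broad frequency tuning, and vice versa; the allocation problem described in the abstract is how to distribute sensors along this trade-off.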

    Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and of the event structure, which modulates uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. Each stream was presented for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. The perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs, one short (75 ms) and one long (325 ms), were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect; instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed, independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.

    Continuous Evolution of Statistical Estimators for Optimal Decision-Making

    In many everyday situations, humans must make precise decisions in the presence of uncertain sensory information. For example, when asked to combine information from multiple sources, we often assign greater weight to the more reliable information. It has been proposed that the statistical optimality often observed in human perception and decision-making requires that humans have access to the uncertainty of both their senses and their decisions. However, the mechanisms underlying uncertainty estimation remain largely unexplored. In this paper we introduce a novel visual tracking experiment that requires subjects to continuously report their evolving perception of the mean and uncertainty of noisy visual cues over time. We show that subjects accumulate sensory information over the course of a trial to form a continuous estimate of the mean, hindered only by natural kinematic constraints (e.g., sensorimotor latency). Furthermore, subjects have access to a measure of their continuous objective uncertainty, rapidly acquired from sensory information available within a trial, but limited by natural kinematic constraints and a conservative margin for error. Our results provide the first direct evidence of continuous mean- and uncertainty-estimation mechanisms in humans that may underlie optimal decision-making.
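    As an illustrative sketch of the kind of computation such reports are consistent with (not the authors' model; the class and parameter values below are hypothetical), a running estimate of the mean of noisy cues, together with its shrinking uncertainty, can be maintained online with Welford's algorithm:

    ```python
    import math
    import random

    class RunningEstimator:
        """Online estimate of the mean of noisy cues and of its uncertainty.

        Welford's algorithm: numerically stable running mean and variance.
        """

        def __init__(self):
            self.n = 0        # number of cues seen so far
            self.mean = 0.0   # running sample mean
            self.m2 = 0.0     # running sum of squared deviations from the mean

        def update(self, cue: float) -> None:
            self.n += 1
            delta = cue - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (cue - self.mean)

        @property
        def sem(self) -> float:
            """Standard error of the mean: falls roughly as 1/sqrt(n)."""
            if self.n < 2:
                return math.inf
            return math.sqrt(self.m2 / (self.n - 1) / self.n)

    # Hypothetical trial: noisy cues around a true mean of 5.0 (noise SD 2.0).
    est = RunningEstimator()
    for _ in range(100):
        est.update(random.gauss(5.0, 2.0))
    print(f"mean = {est.mean:.2f}, uncertainty = {est.sem:.2f}")
    ```

    The conservative margin for error reported in the abstract would correspond to subjects reporting somewhat more uncertainty than this standard error.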

    Multisensory Oddity Detection as Bayesian Inference

    A key goal for the perceptual system is to optimally combine information from all the senses that may be available, in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal-observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference, or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm, oddity detection, and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure-inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity-detection experiments, involving cues across and within modalities, for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure-inference models to the new 'oddity detection' paradigm, and the resulting unified explanation of across- and within-modality cases, provide further evidence that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
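    The core ingredient of such structure-inference observers can be sketched numerically. The snippet below implements the standard two-cue causal-inference posterior (in the style of Koerding et al., 2007), not the paper's exact oddity-detection model; all parameter values are hypothetical:

    ```python
    import numpy as np
    from scipy.stats import norm

    def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common=0.5):
        """Posterior probability that auditory cue x_a and visual cue x_v share
        one source, given sensory noise SDs (sigma_a, sigma_v), a zero-mean
        Gaussian prior over source location (SD sigma_p), and a prior
        probability p_common of a common cause."""
        # Likelihood under one common source: the source location is
        # integrated out analytically (all terms are Gaussian).
        var_c = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
                 + sigma_v**2 * sigma_p**2)
        like_common = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                                     + x_a**2 * sigma_v**2
                                     + x_v**2 * sigma_a**2) / var_c)
        like_common /= 2 * np.pi * np.sqrt(var_c)
        # Likelihood under two independent sources.
        like_indep = (norm.pdf(x_a, 0, np.sqrt(sigma_a**2 + sigma_p**2))
                      * norm.pdf(x_v, 0, np.sqrt(sigma_v**2 + sigma_p**2)))
        joint_c = like_common * p_common
        return joint_c / (joint_c + like_indep * (1 - p_common))

    # Nearly coincident cues favour a common source; discrepant cues do not.
    print(posterior_common_cause(0.2, 0.3, sigma_a=1.0, sigma_v=1.0, sigma_p=10.0))
    print(posterior_common_cause(0.2, 6.0, sigma_a=1.0, sigma_v=1.0, sigma_p=10.0))
    ```

    An oddity-detection observer extends this idea by scoring, for each interval, the probability that its cues arose from a different structure than the others'; mandatory fusion (MLI) corresponds to fixing p_common = 1.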

    Audiovisual time perception is spatially specific

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to a consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in the visual and auditory sensory pathways.

    Duration of Coherence Intervals in Electrical Brain Activity in Perceptual Organization

    We investigated the relationship between visual experience and temporal intervals of synchronized brain activity. Using high-density scalp electroencephalography, we examined how synchronized activity depends on visual stimulus information and on individual observer sensitivity. In a perceptual grouping task, we varied the ambiguity of visual stimuli and estimated observer sensitivity to this variation. We found that durations of synchronized activity in the beta frequency band were associated with both stimulus ambiguity and sensitivity: the lower the stimulus ambiguity and the higher the individual observer's sensitivity, the longer the episodes of synchronized activity. Durations of synchronized-activity intervals followed an extreme value distribution, indicating that they were limited by the slowest of the multiple neural mechanisms engaged in the perceptual task. Because the degree of stimulus ambiguity is (inversely) related to the amount of stimulus information, the durations of synchronous episodes reflect the amount of stimulus information processed in the task. We therefore interpret our results as evidence that the alternating episodes of desynchronized and synchronized electrical brain activity reflect, respectively, the processing of information within local regions and the transfer of information across regions.
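    The "slowest mechanism" reading can be illustrated with a small simulation (the parameters are hypothetical, not the authors' data): when an episode ends only after every one of several parallel processes completes, its duration is the maximum of their completion times, and such maxima follow extreme value statistics:

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(0)

    # Hypothetical completion times (ms) of 8 parallel neural mechanisms.
    n_mechanisms, n_episodes = 8, 10_000
    component_times = rng.exponential(scale=50.0, size=(n_episodes, n_mechanisms))

    # Each episode lasts as long as its slowest mechanism.
    episode_durations = component_times.max(axis=1)

    # Maxima of i.i.d. light-tailed samples approach a Gumbel (extreme value) law.
    loc, scale = gumbel_r.fit(episode_durations)
    print(f"fitted Gumbel: location = {loc:.1f} ms, scale = {scale:.1f} ms")
    ```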

    The Mere Exposure Effect in the Domain of Haptics

    Background: Zajonc showed that the attitude towards stimuli one has previously been exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli; research on the MEE is sparse, however, for other sensory modalities. Methodology/Principal Findings: We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only, and haptics & vision). Effects of exposure frequency were found for highly complex stimuli, with liking increasing significantly from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with a high need for touch, which suggests different sensitivity or saturation levels of the MEE. Conclusions/Significance: These different sensitivity or saturation levels might also reflect the effects of expertise on the haptic evaluation of objects. It seems that haptic and crossmodal MEEs are influenced by factors similar to those in the visual domain, indicating a common cognitive basis.

    Compensation for Changing Motor Uncertainty

    When movement outcome differs consistently from the intended movement, errors are used to correct subsequent movements (e.g., adaptation to displacing prisms or force fields) by updating an internal model of the motor and/or sensory systems. Here, we examine how an internal model of the motor system is updated when the variance structure of movement errors changes without any overall bias. We introduced a horizontal visuomotor perturbation to change the statistical distribution of movement errors anisotropically, while monetary gains/losses were awarded based on movement outcomes. We derive predictions for simulated movement planners, each differing in its internal model of the motor system. We find that humans respond optimally to the overall change in error magnitude but ignore the anisotropy of the error distribution. Through comparison with simulated movement planners, we found that aimpoints corresponded quantitatively to those of an ideal movement planner that updates a strictly isotropic (circular) internal model of the error distribution. Aimpoints were planned in a manner that ignored the direction-dependence of error magnitudes, despite the continuous availability of unambiguous information regarding the anisotropic distribution of actual motor errors.
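    The model comparison can be sketched with a Monte Carlo movement planner. The target/penalty geometry and payoffs below are hypothetical stand-ins for the paper's task, chosen only to show how an isotropic internal model and the true anisotropic error model pick different aimpoints:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def expected_gain(aim, sigma_x, sigma_y, n=100_000):
        """Monte Carlo expected gain for a 2-D aimpoint under Gaussian motor noise.
        Hits inside the target disc (radius 1, at the origin) score +1; hits
        inside an overlapping penalty disc (radius 1, centred at (-1, 0)) score -2."""
        endpoints = np.asarray(aim) + rng.normal(size=(n, 2)) * [sigma_x, sigma_y]
        gain = (np.linalg.norm(endpoints, axis=1) < 1.0).astype(float)
        gain -= 2.0 * (np.linalg.norm(endpoints - [-1.0, 0.0], axis=1) < 1.0)
        return gain.mean()

    # True errors are anisotropic; the isotropic model assumes one average SD.
    sigma_x, sigma_y = 0.8, 0.3
    sigma_iso = np.sqrt((sigma_x**2 + sigma_y**2) / 2)

    candidate_aims = [(x, 0.0) for x in np.linspace(0.0, 0.9, 10)]
    best_aniso = max(candidate_aims, key=lambda a: expected_gain(a, sigma_x, sigma_y))
    best_iso = max(candidate_aims, key=lambda a: expected_gain(a, sigma_iso, sigma_iso))
    print("aim under the true anisotropic model:", best_aniso)
    print("aim under an isotropic internal model:", best_iso)
    ```

    The paper's finding corresponds to human aimpoints matching the isotropic planner's choice even though information about the anisotropy was continuously available.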

    The effects of size changes on haptic object recognition

    Two experiments examined the effects of size changes on haptic object recognition. In Experiment 1, participants named one of three exemplars (a standard-size-and-shape, different-size, or different-shape exemplar) of 36 categories of real, familiar objects. They then performed an old/new recognition task, on the basis of object identity, for the standard exemplars of all 36 objects. Half of the participants performed both blocks visually; the other half performed both blocks haptically. Participants were able to efficiently name unusually sized objects haptically, consistent with previous findings of good recognition of small-scale models of stimuli (Lawson, in press). However, performance was impaired for both visual and haptic old/new recognition when objects changed size or shape between blocks. In Experiment 2, participants performed a short-term haptic shape-matching task using 3-D plastic models of familiar objects and, as in Experiment 1, a cost emerged for ignoring the irrelevant size change. Like its visual counterpart, haptic object recognition incurs a significant but modest cost for generalizing across size changes.