
    Integration and Segregation in Audition and Vision

    Perceptual systems can improve their performance by integrating relevant perceptual information and segregating away irrelevant information. Three studies exploring perceptual integration and segregation in audition and vision are reported in this thesis. In Chapter 1, we explore the role of similarity in informational masking. In informational masking tasks, listeners detect the presence of a signal tone presented simultaneously with a random-frequency multitone masker. Detection thresholds are high in the presence of an informational masker, even though listeners should be able to ignore the masker frequencies. The informational masker's effect may be due to the similarity between signal and masker components. We used a behavioral measure to demonstrate that the amount of frequency change over time could be the stimulus dimension underlying the similarity effect. In Chapter 2, we report a set of experiments on the visual system's ability to discriminate distributions of luminances. The distribution of luminances can serve as a cue to the presence of multiple illuminants in a scene. We presented observers with simple achromatic scenes with patches drawn from one or two luminance distributions. Performance depended on the number of patches from the second luminance distribution, as well as knowledge of the location of these patches. Irrelevant geometric cues, which we expected to negatively affect performance, did not have an effect. An ideal observer model and a classification analysis showed that observers successfully integrated information provided by the image photometric cues. In Chapter 3, we investigated the role of photometric and geometric cues in lightness perception. We rendered achromatic scenes that were consistent with two oriented background context surfaces illuminated by a light source with a directional component. Observers made lightness matches to tabs rendered at different orientations in the scene. We manipulated the photometric cues by changing the intensity of the illumination, and the geometric cues by changing the orientation of the context surfaces. Observers' matches varied with both manipulations, demonstrating that observers used both types of cues to account for the illumination in the scene. The two types of cues were found to have independent effects on the lightness matches.
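The ideal observer analysis mentioned for Chapter 2 can be illustrated with a minimal likelihood-ratio sketch. The Gaussian luminance distributions, the shared standard deviation, and the function names below are illustrative assumptions, not the thesis's actual model.

```python
import math

def loggauss(x, mu, sigma):
    """Log density of a Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def ideal_observer_llr(patches, second_idx, mu1, mu2, sigma):
    """Log-likelihood ratio: two-illuminant vs one-illuminant scene.

    patches    : list of patch luminances
    second_idx : set of patch indices that would be drawn from the
                 second luminance distribution if it were present
    mu1, mu2   : assumed means of the two luminance distributions

    Only the candidate patches are diagnostic, so the ratio sums over
    those; this is where knowledge of the patch locations enters.
    """
    llr = 0.0
    for i, x in enumerate(patches):
        if i in second_idx:
            llr += loggauss(x, mu2, sigma) - loggauss(x, mu1, sigma)
    return llr  # > 0 favours the two-distribution hypothesis
```

Under this kind of model, performance improves with the number of candidate patches, qualitatively consistent with the dependence on patch count reported in the abstract.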

    Pre-stimulus influences on auditory perception arising from sensory representations and decision processes

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
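The prestimulus power analyses described above start from a band-power estimate per trial. Below is a generic sketch of that step, assuming a Hann-windowed FFT power estimate; the windowing, band edges, and function name are assumptions, not the authors' pipeline.

```python
import numpy as np

def band_power(prestim, fs, lo, hi):
    """Power in the [lo, hi] Hz band of a 1-D prestimulus window.

    prestim : samples of one trial's prestimulus EEG window
    fs      : sampling rate in Hz
    """
    window = prestim * np.hanning(len(prestim))
    spec = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(prestim), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum()

# One common use: median-split trials by alpha (8-12 Hz) power, then
# compare how well sensory evidence is encoded in low- vs high-power
# trials, analogous to the power effect reported in the abstract.
```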

    Resonant Neural Dynamics of Speech Perception

    What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information across hundreds of milliseconds, even backwards in time, into coherent representations of syllables and words? What sorts of brain mechanisms encode the correct temporal order, despite such backwards effects, during speech perception? How does the brain extract rate-invariant properties of variable-rate speech? This article describes an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about audition, speech and word recognition. This model includes bottom-up filtering, horizontal competitive, and top-down attentional interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave of activation across such a network, and a percept of silence is proposed to be a temporal discontinuity in the rate with which such a resonant wave evolves. Properties of these resonant waves can be traced to the brain mechanisms whereby auditory, speech, and language representations are learned in a stable way through time. Because resonances are proposed to control stable learning, the model is called an Adaptive Resonance Theory, or ART, model. Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-01-1-0624).

    Sequence effects in categorization of simple perceptual stimuli

    Categorization research typically assumes that the cognitive system has access to a (more or less noisy) representation of the absolute magnitudes of the properties of stimuli and that this information is used in reaching a categorization decision. However, research on identification of simple perceptual stimuli suggests that people have very poor representations of absolute magnitude information and that judgments about absolute magnitude are strongly influenced by preceding material. The experiments presented here investigate such sequence effects in categorization tasks. Strong sequence effects were found. Classification of a borderline stimulus was more accurate when preceded by a distant member of the opposite category than by a distant member of the same category. It is argued that this category contrast effect cannot be accounted for by extant exemplar or decision-bound models of categorization. The effect suggests the use of relative magnitude information in categorization. A memory and contrast model illustrates how relative magnitude information may be used in categorization.
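The idea of categorizing from relative rather than absolute magnitude can be made concrete with a toy decision rule: classify the current stimulus from the change since the previous trial and the previous trial's category label. This is a simple illustrative instantiation of a relative-judgment strategy, not the memory and contrast model reported in the abstract.

```python
def relative_judgment(prev_stim, prev_category, cur_stim, threshold=0.0):
    """Classify using only the change from the previous trial.

    prev_category : 0 = the 'small' category, 1 = the 'large' category
    threshold     : changes at or below this size repeat the old label

    A large upward jump suggests the 'large' category, a large downward
    jump the 'small' one; no absolute magnitude is ever consulted.
    """
    diff = cur_stim - prev_stim
    if abs(diff) <= threshold:
        return prev_category
    return 1 if diff > 0 else 0
```

A rule like this naturally produces sequence effects: the response to an identical borderline stimulus depends on which stimulus preceded it.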

    An Investigation of the Effects of Categorization and Discrimination Training on Auditory Perceptual Space

    Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these experiments are used to evaluate two neural network models of the perceptual magnet effect. These models, in combination with our experimental results, are used to generate an experimentally testable hypothesis concerning changes in the brain's auditory maps under different training conditions. Alfred P. Sloan Foundation and the National Institutes of Deafness and Other Communication Disorders (R29 02852); Air Force Office of Scientific Research (F49620-98-1-0108).

    Use of Linear Perspective Scene Cues in a Simulated Height Regulation Task

    As part of a long-term effort to quantify the effects of visual scene cuing and non-visual motion cuing in flight simulators, an experimental study of the pilot's use of linear perspective cues in a simulated height-regulation task was conducted. Six test subjects performed a fixed-base tracking task with a visual display consisting of a simulated horizon and a perspective view of a straight, infinitely-long roadway of constant width. Experimental parameters were (1) the central angle formed by the roadway perspective and (2) the display gain. The subject controlled only the pitch/height axis; airspeed, bank angle, and lateral track were fixed in the simulation. The average RMS height error score for the least effective display configuration was about 25% greater than the score for the most effective configuration. Overall, larger and more highly significant effects were observed for the pitch and control scores. Model analysis was performed with the optimal control pilot model to characterize the pilot's use of visual scene cues, with the goal of obtaining a consistent set of independent model parameters to account for display effects.
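The RMS error scoring and the 25% comparison above are mechanical computations; here is a generic sketch of both steps, not the study's actual scoring code.

```python
import math

def rms(samples):
    """Root-mean-square of a tracking-error time series."""
    return math.sqrt(sum(e * e for e in samples) / len(samples))

def percent_increase(worst, best):
    """Relative degradation of the worst display vs the best, in percent."""
    return 100.0 * (worst - best) / best
```

For example, RMS height error scores of 2.5 and 2.0 (in arbitrary units) for the least and most effective configurations would correspond to the roughly 25% difference reported.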

    Perceptual thresholds for the effects of room modes as a function of modal decay

    Room modes cause audible artefacts in listening environments. Modal control approaches have emerged in scientific literature over the years and, often, their performance is measured by criteria that may be perceptually unfounded. Previous research has shown modal decay as a key perceptual factor in detecting modal effects. In this work, perceptual thresholds for the effects of modes as a function of modal decay have been measured in the region between 32 Hz and 250 Hz. A test methodology has been developed to include modal interaction and temporal masking from musical events, which are important aspects in recreating an ecologically valid test regime. This method has been deployed in addition to artificial test stimuli traditionally used in psychometric studies, which provide unmasked, absolute thresholds. For artificial stimuli, thresholds decrease monotonically from 0.9 seconds at 32 Hz to 0.17 seconds at 200 Hz, with a knee at 63 Hz. For music stimuli, thresholds decrease monotonically from 0.51 seconds at 63 Hz to 0.12 seconds at 250 Hz. Perceptual thresholds are shown to be dependent on frequency and to a much lesser extent on level. Results presented here define absolute and practical thresholds, which are useful as perceptually relevant optimization targets for modal control methods.
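To compare a real room against thresholds like these, one needs the decay time of each mode. For a single resonance the decay time follows from its bandwidth: an envelope exp(-t/tau) with tau = 1/(pi*B) takes about 2.2/B seconds to fall 60 dB. The sketch below assumes this standard single-resonance relation; it is illustrative and not part of the study's method.

```python
import math

def modal_decay_time(bandwidth_hz, drop_db=60.0):
    """Decay time of a resonant mode with -3 dB bandwidth `bandwidth_hz`.

    The mode's envelope decays as exp(-t/tau) with tau = 1/(pi * B);
    this returns the time for the level to fall by `drop_db` dB.
    """
    tau = 1.0 / (math.pi * bandwidth_hz)
    return tau * math.log(10 ** (drop_db / 20.0))

# A 1 Hz-wide mode decays for ~2.2 s to -60 dB, well above the ~0.9 s
# threshold reported at 32 Hz, so such a mode would be clearly audible.
```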

    Visual Aftereffect Of Texture Density Contingent On Color Of Frame

    An aftereffect of perceived texture density contingent on the color of a surrounding region is reported. In a series of experiments, participants were adapted, with fixation, to stimuli in which the relative density of two achromatic texture regions was perfectly correlated with the color presented in a surrounding region. Following adaptation, the perceived relative density of the two regions was contingent on the color of the surrounding region or of the texture elements themselves. For example, if high density on the left was correlated with a blue surround during adaptation (and high density on the right with a yellow surround), then in order for the left and right textures to appear equal in the assessment phase, denser texture was required on the left in the presence of a blue surround (and denser texture on the right in the context of a yellow surround). Contingent aftereffects were found (1) with black-and-white scatter-dot textures, (2) with luminance-balanced textures, and (3) when the texture elements, rather than the surrounds, were colored during assessment. Effect size was decreased when the elements themselves were colored, but also when spatial subportions of the surround were used for the presentation of color. The effect may be mediated by retinal color spreading (Pöppel, 1986) and appears consistent with a local associative account of contingent aftereffects, such as Barlow's (1990) model of modifiable inhibition.