Searching for a talking face: the effect of degrading the auditory signal
Previous research (e.g. McGurk and MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius and Soto-Faraco, 2011). We hypothesised that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated the effect of degrading the auditory stimulus on the time taken to locate a talking face. Twenty participants were presented with between two and four faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. In the least demanding auditory condition (clear speech in quiet), search times did not increase significantly as the number of faces increased. However, when the speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased with the number of faces. Thus, the amount of attentional resources required appears to vary with the processing demands of the auditory stimuli: when processing load is increased, faces must be attended to individually in order to complete the task. Based on these results, we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners do.
Relating approach-to-target and detection tasks in animal psychoacoustics
Psychophysical experiments seek to measure the limits of perception. While such experiments are straightforward in humans, in animals they are time-consuming, and choosing an appropriate task and interpreting the measurements can be challenging. We investigated the localization of high-frequency auditory signals in noise using an “approach-to-target” task in ferrets, asking how task performance should be interpreted in terms of perception and how the measurements relate to other types of task. To establish their general ability to localize, animals were first trained to discriminate broadband noise from 12 locations. Subsequently, we tested their ability to discriminate between band-limited targets at 2 or 3 more widely spaced locations in a continuous background noise. The ability to discriminate between 3 possible locations (−90°, 0°, 90°) of a 10-kHz pure tone decreased gradually over a wide range (>30 dB) of signal-to-noise ratios (SNRs). Location discrimination was better for wideband noise targets (0.5 and 2 octaves). These results were consistent with localization ability limiting performance for pure tones. Discrimination of pure tones at 2 locations (−90°/left, 90°/right) was robust at positive SNRs, yielding psychometric functions that fell steeply at negative SNRs. Discrimination thresholds were similar to previous tone-in-noise thresholds measured in ferrets using a yes/no task. Thus, in an approach-to-target task, sound “localization” in noise can reflect either detectability or the ability to localize, depending on the stimulus configuration. Signal-detection-theory-based models were able to account for the results when discriminating between pure tones from 2- and 3-source locations.
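The psychometric functions described above, rising from chance toward perfect performance with SNR, can be sketched with a simple logistic model. The threshold and slope values below are purely illustrative, not those fitted to the ferret data.

```python
import math

def psychometric(snr_db, threshold, slope, n_alternatives=2):
    """Logistic psychometric function for an m-alternative location
    discrimination task: performance rises from chance (1/m) toward 1.0
    as SNR increases. `threshold` and `slope` are free parameters."""
    chance = 1.0 / n_alternatives
    p = 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold)))
    return chance + (1.0 - chance) * p

# Two-location (left/right) discrimination: robust at positive SNRs,
# falling steeply toward chance (0.5) at negative SNRs.
for snr in (-20, -10, 0, 10):
    print(snr, round(psychometric(snr, threshold=-8.0, slope=0.5), 3))
```

Changing `n_alternatives` to 3 lowers the chance floor to 1/3, which is one way the 2- and 3-location conditions can be compared on a common footing.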
Revisiting models of concurrent vowel identification: the critical case of no pitch differences
When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly when the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels. The model predicts listeners’ systematic perceptual decisions with a high degree of accuracy.
3D Analysis of chromosome architecture: advantages and limitations with SEM
At present, scanning electron microscopy (SEM) offers the highest resolution of any microscopic technique for investigating three-dimensional mitotic plant chromosome architecture. Specific chromatin staining techniques making use of simultaneous detection of back-scattered and secondary electrons have provided conclusive information on the distribution of DNA and protein in barley chromosomes through mitosis. Applied to investigate the structural effects of different preparative procedures, these techniques were the groundwork for the “dynamic matrix model” of chromosome condensation, which postulates an energy-dependent process of looping and bunching of chromatin coupled with attachment to a dynamic matrix of associated protein fibers. Data from SEM analysis show basic higher-order chromatin structures: chromomeres and matrix fibers. High-resolution visualization of nanogold-labeled phosphorylated histone H3 (ser10) on chromomeres shows that functional modifications of chromatin can be located on structural elements in a 3D context. Copyright (C) 2005 S. Karger AG, Basel.
Decision criterion dynamics in animals performing an auditory detection task
Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding “yes” during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets, we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory).
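A minimal simulation of the criterion-update policy described above (fixed shifts after trial outcomes, with decay toward a resting value) can illustrate how block-level bias emerges from single-trial updates. All parameter values are illustrative, not those fitted to the ferret data, and shifting the criterion only after misses and false alarms is a simplifying assumption.

```python
import random

def run_session(d_prime, n_trials, shift=0.2, decay=0.9, seed=7):
    """Yes-no detection with a dynamic criterion: after a miss the
    criterion drops by `shift` (more liberal), after a false alarm it
    rises (more conservative), and on every trial it decays toward a
    resting value of 0. Parameter values are illustrative only."""
    rng = random.Random(seed)
    c, trace = 0.0, []
    for _ in range(n_trials):
        signal = rng.random() < 0.5              # 50% signal trials
        x = rng.gauss(d_prime if signal else 0.0, 1.0)
        said_yes = x > c
        if signal and not said_yes:              # miss -> be more liberal
            c -= shift
        elif not signal and said_yes:            # false alarm -> conservative
            c += shift
        c *= decay                               # decay toward resting value
        trace.append(c)
    return trace

# Harder blocks (low d') yield more misses, pulling the equilibrium
# criterion down -- a relative bias toward "yes", as in the ferrets.
easy = run_session(d_prime=3.0, n_trials=2000)
hard = run_session(d_prime=0.5, n_trials=2000)
print(sum(easy[500:]) / 1500, sum(hard[500:]) / 1500)
```

The equilibrium arises exactly as the abstract describes: the outcome probabilities depend on the criterion, and the shifts they trigger push the criterion back toward the point where the expected shift balances the decay.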
Olivocochlear efferent control in sound localization and experience-dependent learning
Efferent auditory pathways have been implicated in sound localization and its plasticity. We examined the role of the olivocochlear (OC) system in horizontal sound localization by the ferret and in localization learning following unilateral earplugging. Under anesthesia, adult ferrets underwent olivocochlear bundle section at the floor of the fourth ventricle, either at the midline or laterally (left). Lesioned and control animals were trained to localize 1 s and 40 ms amplitude-roved broadband noise stimuli from one of 12 loudspeakers. Neither type of lesion affected normal localization accuracy. All ferrets then received a left earplug and were tested and trained over 10 d. The plug profoundly disrupted localization. Ferrets in the control and lateral lesion groups improved significantly during subsequent training on the 1 s stimulus. No improvement (learning) occurred in the midline lesion group. Markedly poorer performance and failure to learn were observed with the 40 ms stimulus in all groups. Plug removal resulted in a rapid resumption of normal localization in all animals. Insertion of a subsequent plug in the right ear produced results similar to those of left earplugging. Learning in the lateral lesion group was independent of the side of the lesion relative to the earplug. Lesions in all reported cases were verified histologically. The results suggest that the OC system is not needed for accurate localization, but that it is involved in relearning localization during unilateral conductive hearing loss.
Mode-locked spike trains in responses of ventral cochlear nucleus chopper and onset neurons to periodic stimuli
We report evidence of mode-locking to the envelope of a periodic stimulus in chopper units of the ventral cochlear nucleus (VCN). Mode-locking is a generalized description of how responses in periodically forced nonlinear systems can be closely linked to the input envelope, while showing temporal patterns of higher order than seen during pure phase-locking. Re-analyzing a previously unpublished dataset of responses to amplitude-modulated tones, we find that 55% of cells (6/11) demonstrated stochastic mode-locking in response to sinusoidally amplitude-modulated (SAM) pure tones at 50% modulation depth. At 100% modulation depth, most units (3/4) showed mode-locking. We use interspike interval (ISI) scattergrams to unravel the temporal structure present in chopper mode-locked responses. These responses compared well to a leaky integrate-and-fire (LIF) model of chopper units. Thus the timing of spikes in chopper unit responses to periodic stimuli can be understood in terms of the complex dynamics of periodically forced nonlinear systems. A larger set of onset (33) and chopper (24) units of the VCN also shows mode-locked responses to steady-state vowels and cosine-phase harmonic complexes. However, while 80% of chopper responses to complex stimuli meet our criterion for the presence of mode-locking, only 40% of onset cells show similar complex modes of spike patterns. We found a correlation between a unit’s regularity and its tendency to display mode-locked spike trains, as well as a correlation between the number of spikes per cycle and the presence of complex modes of spike patterns. These spiking patterns are sensitive to the envelope as well as the fundamental frequency of complex sounds, suggesting that complex cell dynamics may play a role in encoding periodic stimuli and envelopes in the VCN.
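A leaky integrate-and-fire neuron driven by a SAM envelope is straightforward to sketch. In a deterministic, periodically forced LIF like the one below, interspike intervals typically settle onto a small discrete set of values locked to the modulation period, the signature of mode-locking. Parameter values are illustrative, not fitted to the chopper-unit data.

```python
import math

def lif_sam(fm=100.0, depth=1.0, drive=2.5, dt=1e-5, dur=0.5,
            tau=0.005, v_th=1.0):
    """Leaky integrate-and-fire neuron driven by the envelope of a
    sinusoidally amplitude-modulated (SAM) tone at modulation
    frequency `fm`. Returns spike times (illustrative parameters)."""
    v, spikes = 0.0, []
    for i in range(int(dur / dt)):
        t = i * dt
        # Raised-sinusoid SAM envelope, peak amplitude = drive
        env = drive * (1.0 + depth * math.sin(2.0 * math.pi * fm * t)) / 2.0
        v += (dt / tau) * (env - v)          # leaky integration
        if v >= v_th:
            spikes.append(t)
            v = 0.0                          # reset after spike
    return spikes

spikes = lif_sam()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
# Mode-locking: ISIs cluster on a few values tied to the 10-ms
# modulation period rather than varying continuously.
print(len(spikes), sorted(set(round(isi * 1000, 2) for isi in isis))[:5])
```

Plotting consecutive ISI pairs from this model against each other reproduces the kind of ISI scattergram structure the abstract uses to diagnose mode-locked responses.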
Vascular tissue contractility changes following late gestational exposure to multi-walled carbon nanotubes or their dispersing vehicle in Sprague Dawley rats
Multi-walled carbon nanotubes (MWCNTs) are increasingly used in industry and in nanomedicine, raising safety concerns, especially during unique life stages such as pregnancy. We hypothesized that MWCNT exposure during pregnancy would increase vascular tissue contractile responses by increasing Rho kinase signaling. Pregnant (17-19 gestational days) and non-pregnant Sprague Dawley rats were exposed to 100 μg/kg of MWCNTs by intratracheal instillation or intravenous administration. Vasoactive responses of uterine, mesenteric, aortic and umbilical vessels were studied 24 hours post-exposure by wire myography. The contractile responses of the vessel segments differed between the pregnant and non-pregnant rats following MWCNT exposure. Maximum stress generation in the uterine artery segments from the pregnant rats following pulmonary MWCNT exposure was increased in response to angiotensin II by 4.9 mN/mm2 (+118%) compared with the naïve response, and by 2.6 mN/mm2 (+40.7%) compared with the vehicle-exposed group. Following MWCNT exposure, serotonin induced an increase in stress generation of approximately 4 mN/mm2 in the mesenteric artery from both pregnant and non-pregnant rats compared with the vehicle response. A significant contribution of the dispersion medium to the changes in contractile properties was identified following both pulmonary and intravenous exposure to MWCNTs. Wire myographic responses in the presence of a Rho kinase inhibitor, and RhoA and Rho kinase mRNA/protein expression in rat aortic endothelial cells, were unaltered following exposure to MWCNTs, suggesting an absent or minimal contribution of Rho kinase to the enhanced contractile responses. The reactivity of the umbilical vein was not changed; however, mean fetal weight gain was reduced by both the dispersion media and MWCNT exposure via both routes. These results suggest a susceptibility of the vasculature during gestation to MWCNT- and dispersion-media-induced vasoconstriction, predisposing to reduced fetal growth during pregnancy.
Visual speech benefit in clear and degraded speech depends on the auditory intelligibility of the talker and the number of background talkers
Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility; how these factors affect the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that the visual speech benefit depended on the auditory intelligibility of the talker, increasing as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions, but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, as with audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
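One common form of an “independent noise” signal detection model combines independent auditory and visual cues optimally, so that sensitivities add in quadrature. The sketch below uses that form as an assumption; it is not necessarily the exact model fitted in the study, and the 2AFC mapping from d' to proportion correct is likewise illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dprime_av(d_a, d_v):
    """Optimal combination of independent, equal-variance Gaussian
    cues: audio-visual sensitivity is the quadrature sum. This is one
    common 'independent noise' formulation, used here as an assumption."""
    return math.sqrt(d_a ** 2 + d_v ** 2)

def percent_correct_2afc(d):
    """Map d' to 2AFC proportion correct (illustrative task mapping)."""
    return norm_cdf(d / math.sqrt(2.0))

# The same visual sensitivity (d'_V = 1) contributes more benefit as
# the auditory signal becomes less intelligible (lower d'_A).
for d_a in (2.5, 1.0, 0.3):
    gain = percent_correct_2afc(dprime_av(d_a, 1.0)) - percent_correct_2afc(d_a)
    print(d_a, round(100.0 * gain, 1))
```

Because the visual cue's contribution in this model depends only on the auditory d', it predicts one benefit per auditory intelligibility level, which is why such a model cannot, on its own, capture different benefits across background- or target-talker manipulations that hold intelligibility constant.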