11 research outputs found

    The Spatial Release of Cognitive Load in Cocktail Party Is Determined by the Relative Levels of the Talkers

    In a multi-talker situation, spatial separation between talkers reduces cognitive processing load: this is the “spatial release of cognitive load”. The present study investigated the role played by the relative levels of the talkers in this spatial release of cognitive load. During the experiment, participants had to report the speech emitted by a target talker in the presence of a concurrent masker talker. The spatial separation (0° and 120° angular distance in azimuth) and the relative levels of the talkers (adverse, intermediate, and favorable target-to-masker ratio) were manipulated. Cognitive load was assessed with prefrontal functional near-infrared spectroscopy. Data from 14 young normal-hearing listeners revealed that the target-to-masker ratio had a direct impact on the spatial release of cognitive load. Spatial separation significantly reduced prefrontal activity only at the intermediate target-to-masker ratio and had no effect on prefrontal activity at the favorable and adverse target-to-masker ratios. Therefore, the relative levels of the talkers might be a key determinant of the spatial release of cognitive load and, more specifically, of the prefrontal activity induced by spatial cues in multi-talker environments.
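The target-to-masker ratio (TMR) manipulated above is the level difference, in dB, between the target and masker signals. A minimal sketch of how such a ratio can be computed and imposed on a mixture (the signal names, the gain-adjustment step, and the example value of −6 dB are illustrative, not taken from the study):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def tmr_db(target, masker):
    """Target-to-masker ratio in dB: positive = favorable, negative = adverse."""
    return 20.0 * np.log10(rms(target) / rms(masker))

def set_tmr(target, masker, desired_db):
    """Scale the masker so that the mixture has the desired TMR (in dB)."""
    gain = rms(target) / (rms(masker) * 10.0 ** (desired_db / 20.0))
    return target + gain * masker

# Example: two noise tokens mixed at -6 dB TMR (an adverse ratio)
rng = np.random.default_rng(0)
t, m = rng.standard_normal(48000), rng.standard_normal(48000)
mix = set_tmr(t, m, -6.0)
```

The same gain logic applies whether the desired ratio is adverse (negative dB), intermediate, or favorable (positive dB).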

    Perceptual factors contribute more than acoustic factors to sound localization abilities with virtual sources

    Human sound localization abilities rely on binaural and spectral cues. Spectral cues arise from interactions between the sound wave and the listener’s body (head-related transfer function, HRTF). Large individual differences in localization abilities have been reported, even in young normal-hearing adults. Several studies have attempted to determine whether localization abilities depend mostly on acoustic cues or on the perceptual processes involved in analyzing these cues. These studies have yielded inconsistent findings, which could result from methodological issues. In this study, we measured sound localization performance with normal and modified acoustic cues (i.e., with individual and non-individual HRTFs, respectively) in 20 naïve listeners. Test conditions were chosen to address most methodological issues from past studies. Procedural training was provided prior to the sound localization tests. The results showed no direct relationship between behavioral results and an acoustical metric (the spectral-shape prominence of individual HRTFs). Despite uncertainties due to technical issues, large acoustic differences between individual and non-individual HRTFs seemed to be needed to produce behavioral effects. A subset of 15 listeners then trained in the sound localization task with individual HRTFs. Training included either visual correct-answer feedback (for the test group) or no feedback (for the control group), and was assumed to elicit perceptual learning in the test group only. Few listeners from the control group, but most listeners from the test group, showed significant training-induced learning. For the test group, learning was related to pre-training performance (i.e., the poorer the pre-training performance, the greater the amount of learning) and was retained after one month. The results are interpreted as favoring a larger contribution of perceptual factors than of acoustic factors to sound localization abilities with virtual sources.
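Virtual sources of the kind tested above are typically produced by convolving a mono signal with a pair of head-related impulse responses (HRIRs, the time-domain counterparts of the HRTFs). A minimal sketch, using toy delta-like HRIRs rather than measured ones (all names and values are illustrative):

```python
import numpy as np

def render_virtual_source(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with a left/right HRIR pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, len(mono) + len(hrir) - 1)

# Toy HRIRs (real ones are measured per listener and per direction):
# a 20-sample interaural time difference plus an interaural level difference.
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[20] = 0.5

mono = np.random.default_rng(1).standard_normal(1000)
binaural = render_virtual_source(mono, hrir_l, hrir_r)
```

Replacing a listener's individual HRIRs with another person's (non-individual) set is what alters the spectral cues while leaving the signal otherwise intact.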

    Sound localization and the listener's spectral-analysis ability

    The ability to localize a sound source in space varies greatly from one listener to another. Several authors have proposed that this variability could be linked to individual differences in the perceptual processing of acoustic localization cues, and more specifically in the processing of the so-called "spectral" cues involved in perceiving the elevation of a sound source. These spectral cues arise from the diffraction of the incident sound wave by the listener's anatomy. They mainly concern the high-frequency part of the audible spectrum (>3-4 kHz). Yet inter-individual variability in spectral-analysis ability in this frequency region is high. We propose to test the hypothesis that individual differences in spectral analysis could partially explain those observed in sound localization. We measured the spectral-analysis ability of 18 participants with two tests commonly used in the literature: "ripple reversal" and "spectral modulation detection". These participants then completed sessions of detection and of localization of sound stimuli in quiet and in noise. The data collected show a significant correlation between spectral-analysis ability and sound localization ability.
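The "ripple" stimuli used in such spectral-analysis tests are noises whose amplitude spectrum is modulated sinusoidally along a logarithmic frequency axis; reversing the ripple phase gives the comparison stimulus in a ripple-reversal task. A minimal sketch of generating one (all parameter values are illustrative, not those of the study):

```python
import numpy as np

def ripple_noise(fs=44100, dur=0.5, f_lo=400.0, f_hi=8000.0,
                 density=2.0, depth_db=20.0, phase=0.0, seed=0):
    """Noise with a sinusoidal spectral envelope on a log-frequency axis.

    density  : ripples per octave
    depth_db : peak-to-trough modulation depth in dB
    phase    : ripple phase (a 'reversed' ripple shifts this by pi)
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)            # position in octaves above f_lo
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * density * octaves + phase)
    spectrum = np.zeros(n // 2 + 1, dtype=complex)
    phases = rng.uniform(0, 2 * np.pi, band.sum())   # random component phases
    spectrum[band] = 10.0 ** (env_db / 20.0) * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))                     # normalize to +/-1

signal = ripple_noise()
```

The listener's task is then to discriminate, e.g., `phase=0.0` from `phase=np.pi` at increasing ripple densities, the highest discriminable density being the measure of spectral-analysis ability.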

    Sound localization in noise: why the inter-individual variability?

    Previous studies of sound localization in noise in humans have revealed substantial individual differences. This variability could originate in the activity of the medial olivocochlear efferent system. Indeed, this system may be involved in the detection and localization of sound sources in noise; moreover, its activity varies considerably from one individual to another. Here we examine the hypothesis that inter-individual differences in the activity of the medial olivocochlear efferent system could account for inter-individual differences in sound localization in noise. More specifically, we looked for a correlation between the detection and localization performance for sound sources in noise (diffuse field) and the activity of the medial olivocochlear efferent system, measured as the contralateral suppression of evoked otoacoustic emissions. Our results suggest a possible role of the medial olivocochlear efferent system in the origin of the inter-individual variability observed in sound localization in noise.

    How to detect and track chronic neurologic sequelae of COVID-19? Use of auditory brainstem responses and neuroimaging for long-term patient follow-up

    This review intends to provide an overview of the current knowledge on neurologic sequelae of COVID-19 and their possible etiology, and, based on available data, proposes possible improvements in current medical care procedures. We conducted a thorough review of the scientific literature on neurologic manifestations of COVID-19, the neuroinvasive propensity of known coronaviruses (CoV), and their possible effects on brain structural and functional integrity. It appears that around one third of COVID-19 patients admitted to intensive care units (ICU) for respiratory difficulties exhibit neurologic symptoms. This may be due to progressive brain damage and dysfunction triggered by severe hypoxia and hypoxemia, heightened inflammation, and SARS-CoV-2 dissemination into brain parenchyma, as suggested by current reports and analyses of previous CoV outbreaks. Viral invasion of the brain may particularly target and alter brainstem and thalamic functions and, consequently, result in sensorimotor dysfunctions and psychiatric disorders. Moreover, data collected from other structurally homologous CoV suggest that SARS-CoV-2 infection may lead to brain cell degeneration and demyelination similar to multiple sclerosis (MS). Hence, current evidence warrants further evaluation and long-term follow-up of possible neurologic sequelae in COVID-19 patients. It may be particularly relevant to evaluate brainstem integrity in recovered patients, as it is suspected that this cerebral area may be particularly dysfunctional following SARS-CoV-2 infection. Because CoV infection can potentially lead to chronic neuroinflammation and progressive demyelination, neuroimaging features and signs of MS may also be evaluated in the long term in recovered COVID-19 patients.

    Auditory efferents facilitate sound localization in noise in humans

    The mammalian auditory system contains descending neural pathways, some of which project onto the cochlea via the medial olivocochlear (MOC) system. The function of this efferent auditory system is not entirely clear. Behavioral studies in animals with OC lesions suggest that the MOC serves to facilitate sound localization in noise. In the current work, noise-induced OC activity (the “OC reflex”) and sound-localization performance in noise were measured in normal-hearing humans. Consistent with earlier studies, both measures were found to vary substantially across individuals. Importantly, significant correlations were observed between OC reflex strength and the effect of noise on sound-localization performance; the stronger the OC reflex, the less marked the effect of noise. These results suggest that MOC activation by noise helps to counteract the detrimental effects of background noise on neural representations of direction-dependent spectral features, which are especially important for accurate localization in the up/down and front/back dimensions.