
    Attention, Awareness, and the Perception of Auditory Scenes

    Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study the neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. This research has also yielded a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to distinguish precisely among the effects of different higher-level influences.

    Intuitive Control of Scraping and Rubbing Through Audio-tactile Synthesis

    Intuitive control of synthesis processes is an ongoing challenge within the domain of auditory perception and cognition. Previous work on sound modelling combined with psychophysical tests has enabled our team to develop a synthesizer that provides intuitive control of actions and objects based on semantic descriptions of sound sources. In this demo we present an augmented version of the synthesizer in which we added tactile stimulation to increase the sensation of true continuous friction interactions (rubbing and scratching) with the simulated objects. This is of interest for several reasons. Firstly, it enables us to evaluate the realism of our sound model in the presence of stimulation from other modalities. Secondly, it enables us to compare tactile and auditory signal structures linked to the same evocation. Thirdly, it provides a tool to investigate multimodal perception and how stimulations from different modalities should be combined to provide realistic user interfaces.
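
    The abstract does not detail the underlying sound model, but noise-based source-filter synthesis driven by gesture velocity is a common way to evoke rubbing and scratching. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' synthesizer: the `friction_sound` function and its parameters are assumptions, and the same velocity envelope is the kind of signal one might share with a tactile actuator.

```python
import numpy as np

def friction_sound(velocity, noise_gain=0.3, seed=0):
    """Illustrative rubbing/scratching synthesis: filtered noise whose
    loudness and brightness follow a gesture-velocity profile sampled
    at the audio rate (all parameters are assumed, not from the paper)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(velocity))
    out = np.zeros_like(velocity)
    lp = 0.0
    for n, v in enumerate(velocity):
        # One-pole low-pass: faster rubbing -> higher cutoff -> brighter sound.
        alpha = min(0.99, 0.05 + 0.9 * abs(v))
        lp = (1.0 - alpha) * lp + alpha * noise[n]
        # Louder with faster movement, as in typical friction models.
        out[n] = noise_gain * abs(v) * lp
    return out

# Hypothetical gesture: back-and-forth rubbing for one second at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
velocity = np.sin(2 * np.pi * 2 * t)        # 2 Hz rubbing gesture
audio = friction_sound(velocity)            # audio-rate friction signal
# A matching tactile channel could reuse the same envelope at lower bandwidth.
```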

    A specific relationship between musical sophistication and auditory working memory

    Previous studies have found conflicting results regarding the relationship between individual measures related to music and fundamental aspects of auditory perception and cognition. The results have been difficult to compare because different musical measures were used and the auditory perceptual and cognitive measures lacked uniformity. In this study we used a general construct of musicianship, musical sophistication, that can be applied to populations with widely different backgrounds. We investigated the relationship between musical sophistication and measures of perception and working memory for sound using a task suitable for measuring both. We related scores from the Goldsmiths Musical Sophistication Index to performance on tests of perception and working memory for two acoustic features: frequency and amplitude modulation. The data show that musical sophistication scores are most strongly related to working memory for frequency in an analysis that accounts for age and non-verbal intelligence. Musical sophistication was not significantly associated with working memory for amplitude-modulation rate or with the perception of either acoustic feature. The work supports a specific association between musical sophistication and working memory for sound frequency.
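
    The analysis described above relates Gold-MSI scores to working-memory performance while accounting for age and non-verbal intelligence. A minimal sketch of that kind of covariate-adjusted regression is shown below; the column names and simulated values are placeholders for illustration, not the study's data or exact statistical model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real study used Gold-MSI scores and
# working-memory measures for frequency and amplitude-modulation rate.
rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "gold_msi": rng.normal(80, 15, n),       # musical sophistication score
    "age": rng.uniform(18, 60, n),
    "nonverbal_iq": rng.normal(100, 15, n),
})
# Hypothetical link: working memory for frequency improves with sophistication.
df["wm_frequency"] = (0.02 * df["gold_msi"] + 0.01 * df["nonverbal_iq"]
                      + rng.normal(0, 1, n))

# Covariate-adjusted regression: does Gold-MSI predict frequency working
# memory once age and non-verbal intelligence are taken into account?
model = smf.ols("wm_frequency ~ gold_msi + age + nonverbal_iq", data=df).fit()
print(model.summary())
```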

    Auditory pitch perception in autism spectrum disorder is associated with superior non-verbal abilities

    Autism Spectrum Disorder (ASD) is often characterized by atypical sensory perception and cognitive profiles, yet previous studies of auditory processing in ASD have produced mixed findings, and discrepant results have also been reported for cognitive abilities in ASD. Accordingly, auditory perception and its relation to verbal and non-verbal cognitive abilities in ASD remains poorly understood. The objective of the present research was to examine the association between auditory pitch processing and verbal and non-verbal cognitive abilities in children with ASD compared with age- and IQ-matched typically developing (TD) children. Participants were 17 children with ASD and 19 TD children, matched on age and IQ. Participants performed a low-level pitch-direction task and a higher-level melodic global-local pitch task. Verbal and non-verbal cognitive abilities were measured using the Verbal IQ and Performance IQ components of the Wechsler Abbreviated Scale of Intelligence (WASI). No group differences in performance were found on either auditory task or on the IQ measures. Furthermore, verbal abilities did not predict performance on the auditory tasks in either group, whereas non-verbal abilities predicted performance on both auditory tasks in ASD and TD. This work contributes to a better understanding of sensory processing and cognitive reasoning in children with ASD and typically developing children. Specifically, these results indicate that tonal pitch-based auditory processing is preserved in individuals with ASD with average IQ. The findings also suggest that auditory perception is related to non-verbal reasoning rather than verbal abilities in both ASD and TD, implying that there may be common perceptual-cognitive profiles in these subgroups of children with ASD that are similar to typical development. Accordingly, this work supports the idea that some individuals with ASD have ‘islets of ability’ amidst their sensory and cognitive difficulties. These results motivate future studies to examine whether similar perceptual-cognitive associations are observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.
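
    The analysis described above amounts to asking, within each group, whether Verbal IQ and Performance IQ predict auditory task scores. Below is a minimal sketch of that per-group check using simple linear regressions; the data frame, column names, and simulated scores are hypothetical placeholders, not the study's data or statistical model.

```python
import numpy as np
import pandas as pd
from scipy.stats import linregress

# Hypothetical stand-in data for the two matched groups.
rng = np.random.default_rng(2)
rows = []
for group, n in [("ASD", 17), ("TD", 19)]:
    piq = rng.normal(105, 12, n)                     # Performance (non-verbal) IQ
    viq = rng.normal(105, 12, n)                     # Verbal IQ
    pitch = 0.4 * (piq - 100) + rng.normal(0, 5, n)  # pitch-direction task score
    for p, v, s in zip(piq, viq, pitch):
        rows.append({"group": group, "piq": p, "viq": v, "pitch_score": s})
df = pd.DataFrame(rows)

# Within each group, regress the auditory score on each IQ measure separately.
for group, sub in df.groupby("group"):
    for predictor in ("viq", "piq"):
        res = linregress(sub[predictor], sub["pitch_score"])
        print(f"{group}: {predictor} -> slope={res.slope:.2f}, p={res.pvalue:.3f}")
```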

    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concept of UAM/UAS operations in the National Airspace System (NAS) remains an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test the various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace-operations research for UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing, and its parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.
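
    The testbed described above exposes customizable scenario parameters and records flight data for post-processing. The sketch below shows, under stated assumptions, what such a scenario definition and flight logger could look like; the `FlightScenario` fields, the `log_flight_state` helper, and the file layout are illustrative inventions, not the testbed's actual API.

```python
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class FlightScenario:
    """Hypothetical scenario parameters of the kind a UAM testbed might expose."""
    origin: str = "Downtown San Francisco vertiport"
    destination: str = "San Francisco International Airport"
    autonomous: bool = True            # autonomous vs. single-pilot manual control
    cruise_speed_mps: float = 60.0
    cruise_altitude_m: float = 450.0

def log_flight_state(writer, t, lat, lon, alt_m, heading_deg, speed_mps):
    """Append one simulation sample for post-processing."""
    writer.writerow({"t": t, "lat": lat, "lon": lon, "alt_m": alt_m,
                     "heading_deg": heading_deg, "speed_mps": speed_mps})

scenario = FlightScenario(autonomous=False)    # manual piloting with the joystick
print("Scenario:", asdict(scenario))

with open("flight_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["t", "lat", "lon", "alt_m",
                                           "heading_deg", "speed_mps"])
    writer.writeheader()
    # In a real run these samples would come from the simulated aircraft state.
    log_flight_state(writer, time.time(), 37.7749, -122.4194, 450.0, 135.0, 60.0)
```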

    The Use of the Gaps-In-Noise Test as an Index of the Enhanced Left Temporal Cortical Thinning Associated with the Transition between Mild Cognitive Impairment and Alzheimer's Disease

    Background: The known link between auditory perception and cognition is often overlooked when testing for cognition. Purpose: To evaluate auditory perception in a group of older adults diagnosed with mild cognitive impairment (MCI). Research Design: A cross-sectional study of auditory perception. Study Sample: Adults with MCI and adults with no documented cognitive issues, matched for hearing sensitivity and age. Data Collection: Auditory perception was evaluated in both groups by assessing hearing sensitivity, speech in babble (SinB), and temporal resolution. Results: The Mann-Whitney test revealed significantly poorer SinB and temporal resolution scores for the MCI group than for the normal controls in both ears. The right-ear gap-detection thresholds on the Gaps-In-Noise (GIN) test clearly differentiated the two groups (p < 0.001), with no overlap of values. The left-ear results also differentiated the two groups (p < 0.01); however, there was a small degree of overlap around the ∼8-msec threshold value. With the exception of the left-ear inattentiveness index, which showed a similar distribution in both groups, both the impulsivity and inattentiveness indices were higher for the MCI group than for the control group. Conclusions: The results support central auditory processing evaluation in the elderly population as a promising tool for achieving earlier diagnosis of dementia, while also identifying central auditory processing deficits that can contribute to communication difficulties in the MCI patient population. A measure of temporal resolution (GIN) may offer an early, albeit indirect, measure reflecting the left temporal cortical thinning associated with the transition between MCI and Alzheimer's disease.
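
    As a rough illustration of the non-parametric group comparison described above, the sketch below runs a Mann-Whitney U test on simulated gap-detection thresholds for an MCI group and a control group; the sample sizes and threshold values are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Invented right-ear GIN thresholds (msec); higher = poorer temporal resolution.
rng = np.random.default_rng(3)
mci_thresholds = rng.normal(10.0, 1.5, 15)      # hypothetical MCI group
control_thresholds = rng.normal(5.5, 1.0, 15)   # hypothetical matched controls

# Non-parametric comparison, as used in the study for SinB and GIN scores.
stat, p = mannwhitneyu(mci_thresholds, control_thresholds,
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4g}")
```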

    Neuromorphic Detection of Vowel Representation Spaces

    In this paper, a layered architecture to spot and characterize vowel segments in running speech is presented. The detection process is based on neuromorphic principles: Hebbian units are arranged in layers to implement lateral inhibition, band probability estimation, and mutual exclusion. Results are presented showing how the association between the acoustic set of patterns and the phonological set of symbols may be created. Possible applications of this methodology include speech event spotting, the study of pathological voice, and speaker biometric characterization, among others.
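
    The abstract gives no implementation details, but a lateral-inhibition layer with a Hebbian update can be sketched in a few lines. The layer sizes, competition function, learning rate, and update rule below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bands, n_units = 16, 5                        # e.g., spectral bands -> vowel-class units
W = rng.uniform(0.0, 0.1, (n_units, n_bands))   # feed-forward weights
eta = 0.05                                      # Hebbian learning rate

def lateral_inhibition(x, W, steepness=10.0):
    """Softmax-style competition: strongly driven units suppress the others,
    approximating mutual exclusion among vowel-class units."""
    a = W @ x
    e = np.exp(steepness * (a - a.max()))
    return e / e.sum()

def hebbian_step(x, W):
    """One Hebbian update: strengthen weights between the active input bands
    and the winning unit, then renormalize rows to keep weights bounded."""
    y = lateral_inhibition(x, W)
    W += eta * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return y

# Toy 'spectral band' pattern standing in for one analysis frame of speech.
frame = np.abs(rng.standard_normal(n_bands))
activation = hebbian_step(frame, W)
print("unit activations:", np.round(activation, 3))
```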