
    Hemispheric Differences in the Recognition of Environmental Sounds

    In the visual domain, Marsolek and colleagues have found support for two dissociable and parallel neural subsystems underlying object and shape recognition: an abstract category subsystem that operates more effectively in the left cerebral hemisphere (LH), and a specific-exemplar subsystem that operates more effectively in the right cerebral hemisphere (RH). Evidence of this asymmetry has been observed for linguistic (words, pseudo-word forms) and non-linguistic (objects) stimuli. In the auditory domain, the authors previously found hemispheric asymmetries in priming effects when linguistic stimuli (spoken words) were used. In the present study, hemispheric asymmetries were investigated for non-linguistic stimuli (environmental sounds) by means of four long-term repetition-priming experiments. The results support the dissociable-subsystems theory, showing specificity effects when sounds were presented to the left ear (RH), but not when presented to the right ear (LH). Theoretical implications are discussed.

    Neural plasticity associated with recently versus often heard objects.

    In natural settings the same sound source is often heard repeatedly, with variations in its spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and, in particular, how the auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs), comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated-measures ANOVA (hemisphere × presentation × section) applied to the neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, the spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. More slowly developing and longer-lasting plastic changes, occurring predominantly within right-hemispheric networks known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures.
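    The 3-way repeated-measures ANOVA mentioned above can be reproduced in outline with standard statistical tooling. The sketch below is illustrative only, not the authors' analysis pipeline: the input file, column names, and the assumption of one mean source-activity value per subject and condition cell are invented for illustration.

```python
# Hypothetical sketch of a hemisphere x presentation x section repeated-measures ANOVA.
# Assumes one mean source-activity value per subject and condition cell has been
# exported to a long-format CSV; file and column names are invented for illustration.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.read_csv("cluster_activity_166_213ms.csv")
# Expected columns: subject, hemisphere, presentation, section, activity
result = AnovaRM(
    data,
    depvar="activity",
    subject="subject",
    within=["hemisphere", "presentation", "section"],
).fit()
print(result)
```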

    Automatic Environmental Sound Recognition: Performance versus Computational Cost

    In the context of the Internet of Things (IoT), sound sensing applications are required to run on embedded platforms where product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with limited consideration for computational cost, this article investigates which AESR algorithm can make the most of a limited amount of computing power by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best ratio of classification accuracy to computational cost across a range of costs, while Gaussian Mixture Models offer reasonable accuracy at a consistently small cost, and Support Vector Machines stand between the two in terms of the compromise between accuracy and computational cost.
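    A comparison of this kind can be sketched with off-the-shelf tools. The snippet below is a minimal illustration of contrasting the three model families named above (DNN, GMM, SVM) on pre-computed audio features; the feature files, model sizes, and the use of scikit-learn are assumptions, and it does not reproduce the article's cost accounting or evaluation protocol.

```python
# Minimal sketch: compare a small DNN (MLP), an SVM, and per-class GMMs on
# hypothetical pre-computed feature vectors (e.g. MFCC statistics) with labels.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

X, y = np.load("features.npy"), np.load("labels.npy")  # hypothetical files
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DNN (small MLP)": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500),
    "SVM (RBF)": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    acc = model.score(X_te, y_te)
    print(f"{name}: accuracy={acc:.3f}, inference time={time.perf_counter() - t0:.3f}s")

# GMMs are generative: fit one model per class, classify by maximum log-likelihood.
classes = np.unique(y_tr)
gmms = {c: GaussianMixture(n_components=4).fit(X_tr[y_tr == c]) for c in classes}
scores = np.column_stack([gmms[c].score_samples(X_te) for c in classes])
acc = np.mean(classes[np.argmax(scores, axis=1)] == y_te)
print(f"GMM (4 components/class): accuracy={acc:.3f}")
```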

    Non-Verbal Auditory Cognition in Patients with Temporal Epilepsy Before and After Anterior Temporal Lobectomy

    For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) – i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri – is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Compared to healthy subjects, patients both before and after ATL showed similar deficits in pitch retention and in the identification and short-term memorisation of environmental sounds, while basic acoustic processing remained unimpaired. Most likely, the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

    Development of a Sound Recognition System Using Deep Neural Networks

    Thesis: 81 pp., 21 figures, 6 tables, 2 appendices, 23 sources. The object of research is the task of recognizing environmental sounds. The subject of research is methods for recognizing environmental sounds using deep neural networks. The purpose of this work is to develop an algorithm, create a program, and evaluate it as a sound recognition system based on neural networks and the created dataset. The importance of sound recognition systems lies in their potential to improve automation, user interaction, security, and intelligent decision-making. The result of the thesis is a sound recognition software product, implemented in the Python programming language.
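    The thesis code itself is not reproduced here; the following is a minimal sketch of the general approach it describes: a small convolutional network classifying environmental sounds from log-mel spectrograms in Python. The framework (PyTorch), layer sizes, input shape, and class count are assumptions, not details taken from the thesis.

```python
# Minimal sketch (not the thesis implementation): a small CNN over log-mel spectrograms
# for environmental sound classification. Shapes and class count are assumptions.
import torch
import torch.nn as nn

class SoundCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 1, n_mels, time_frames)
        return self.classifier(self.features(x))

model = SoundCNN(n_classes=10)
dummy = torch.randn(8, 1, 64, 128)   # a batch of log-mel spectrograms
print(model(dummy).shape)            # -> torch.Size([8, 10])
```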

    Bilateral Cochlear Implants, Minimizing Auditory Rehabilitation

    Bilateral cochlear implantation has increased in recent years due to benefits such as sound localization, a reduced head shadow effect, and binaural summation. The aim of this chapter is to discuss the costs and benefits of bilateral cochlear implantation and their implications for auditory rehabilitation, giving the reader a scientific and theoretical foundation from which to better guide and advise patients.

    Classification of Known and Unknown Environmental Sounds Based on Self-Organized Space Using a Recurrent Neural Network

    Our goal is to develop a system that learns and classifies environmental sounds for robots working in the real world. In the real world, two main restrictions apply to learning. (i) Robots have to learn using only a small amount of data in a limited time because of hardware restrictions. (ii) The system has to adapt to unknown data, since it is virtually impossible to collect samples of all environmental sounds. We used a neuro-dynamical model to build a prediction and classification system. This neuro-dynamical model can self-organize sound classes into parameters by learning samples. The sound classification space constructed from these parameters is structured according to the sound generation dynamics and forms clusters not only for known classes but also for unknown ones. The proposed system classifies sounds by searching this classification space. In the experiment, we evaluated the classification accuracy for both known and unknown sound classes.
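    The general idea of classifying in a learned space while flagging unknowns can be sketched as follows. This is not the authors' neuro-dynamical (parametric-bias) model: the RNN encoder, the class centres, the distance threshold, and the use of PyTorch are all assumptions made for illustration.

```python
# Minimal sketch: an RNN encoder maps a sound feature sequence to a point in a learned
# space; a sample is flagged "unknown" when it lies too far from every known-class centre.
import torch
import torch.nn as nn

class RNNEncoder(nn.Module):
    def __init__(self, n_features: int = 40, dim: int = 16):
        super().__init__()
        self.rnn = nn.GRU(n_features, dim, batch_first=True)

    def forward(self, x):        # x: (batch, time, n_features)
        _, h = self.rnn(x)
        return h.squeeze(0)      # (batch, dim) point in the classification space

def classify(point, known_centres, threshold=2.0):
    # known_centres: (n_known_classes, dim), e.g. the mean embedding of each learned class
    dists = torch.cdist(point, known_centres).squeeze(0)
    if dists.min() > threshold:
        return "unknown"
    return int(dists.argmin())

encoder = RNNEncoder()
centres = torch.randn(5, 16)            # placeholder centres for 5 known classes
sample = torch.randn(1, 100, 40)        # one sound: 100 frames of 40 features
print(classify(encoder(sample), centres))
```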

    Real-time aircraft noise likeness detector

    One of the most difficult tasks involved in noise monitoring near airports is the automatic detection and classification of aircraft noise events. These tasks can be addressed by applying pattern recognition techniques to the audio signal captured by a microphone, but background noise present in real environments makes them harder. This paper proposes a real-time method for continuously tracking the similarity between the input sound and aircraft sounds. With this capability, the monitoring unit can mark aircraft events, or take measurements only when the aircraft sound is louder than the background noise. A one-class approach has been applied to this detection-by-classification method. Using the default setup, 93% of aircraft events with an SNR of 6–8 dB were detected, across 30 different locations with diverse soundscapes.
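    A detection-by-classification scheme with a one-class model, in the spirit of the approach above, can be sketched as follows. This is not the authors' implementation: the file names, MFCC feature extraction with librosa, the OneClassSVM parameters, and the frame-voting step are assumptions.

```python
# Minimal sketch: train a one-class model on aircraft-only audio, then score incoming
# frames; a high fraction of inlier frames marks an aircraft-like event.
import numpy as np
import librosa
from sklearn.svm import OneClassSVM

def mfcc_frames(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

# Train only on aircraft sound: the model learns the "aircraft likeness" region.
detector = OneClassSVM(kernel="rbf", nu=0.1).fit(mfcc_frames("aircraft_train.wav"))

# At monitoring time, score incoming frames from the microphone signal.
frames = mfcc_frames("monitoring_segment.wav")
aircraft_likeness = np.mean(detector.predict(frames) == 1)
print(f"fraction of aircraft-like frames: {aircraft_likeness:.2f}")
```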