1,522 research outputs found

    Listening for Sirens: Locating and Classifying Acoustic Alarms in City Scenes

    This paper is about alerting acoustic event detection and sound source localisation in an urban scenario. Specifically, we are interested in spotting the presence of horns and sirens of emergency vehicles. In order to obtain a reliable system able to operate robustly despite the presence of traffic noise, which can be copious, unstructured and unpredictable, we propose to treat the spectrograms of incoming stereo signals as images and apply semantic segmentation, based on a U-Net architecture, to extract the target sound from the background noise. In a multi-task learning scheme, together with signal denoising, we perform acoustic event classification to identify the nature of the alerting sound. Lastly, we use the denoised signals to localise the acoustic source on the horizon plane, by regressing the direction of arrival of the sound through a CNN architecture. Our experimental evaluation shows an average classification rate of 94%, and a median absolute error on the localisation of 7.5° when operating on audio frames of 0.5 s, and of 2.5° when operating on frames of 2.5 s. The system offers excellent performance in particularly challenging scenarios, where the noise level is remarkably high. Comment: 6 pages, 9 figures
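The paper regresses the direction of arrival with a CNN; as an illustration of the underlying two-microphone geometry only, here is a classical cross-correlation (TDOA) baseline, not the paper's method. The function name, microphone spacing and speed of sound are illustrative assumptions.

```python
import numpy as np

def doa_from_stereo(left, right, fs, mic_distance=0.2, c=343.0):
    """Estimate the direction of arrival on the horizon plane from a stereo
    pair via the time difference of arrival between the two channels.
    Classical cross-correlation baseline (a sketch, not the paper's CNN)."""
    corr = np.correlate(left, right, mode="full")
    # positive lag: the left channel is a delayed copy of the right channel
    lag = np.argmax(corr) - (len(right) - 1)
    tdoa = lag / fs
    # clamp to the physically possible range before taking the arcsine
    sin_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```

With a 0.2 m baseline, a delay of 4 samples at 16 kHz corresponds to roughly a 25° bearing; real traffic noise would of course require the denoising stage the paper describes before such a peak is reliable.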

    Poor neural and perceptual phoneme discrimination during acoustic variation in dyslexia

    Whereas natural acoustic variation in speech does not compromise phoneme discrimination in healthy adults, it was hypothesized to be a challenge for developmental dyslexics. We investigated dyslexics’ neural and perceptual discrimination of native-language phonemes during acoustic variation. Dyslexics and non-dyslexics heard /æ/ and /i/ phonemes in a context with f0 variation and then in a context without it. Mismatch negativity (MMN) and P3a responses to phoneme changes were recorded with electroencephalography to compare groups during ignore and attentive listening. Perceptual phoneme discrimination in the variable context was evaluated with hit ratios and reaction times. MMN/N2bs were diminished in dyslexics in the variable context. Hit ratios were smaller in dyslexics than in controls. MMNs did not differ between groups in the context without variation. These results suggest that even distinctive vowels are challenging for dyslexics to discriminate when the context resembles the natural variability of speech. This most likely reflects poor categorical perception of phonemes in dyslexics. Difficulties in detecting linguistically relevant invariant information during acoustic variation in speech may contribute to dyslexics’ deficits in forming native-language phoneme representations during infancy. Future studies should acknowledge that simple experimental paradigms with repetitive stimuli can be insensitive to dyslexics’ speech processing deficits. Peer reviewed
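The MMN referred to above is conventionally computed as a difference wave between averaged event-related responses; a minimal sketch, assuming epoched single-trial data (the function name is illustrative, not from the study):

```python
import numpy as np

def mismatch_negativity(standard_trials, deviant_trials):
    """Difference wave: average deviant ERP minus average standard ERP.
    Inputs are (n_trials, n_samples) arrays of epoched EEG; the MMN is the
    negative deflection in the returned wave, typically ~100-250 ms
    after change onset."""
    return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
```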

    Self-Supervised Contrastive Learning for Unsupervised Phoneme Segmentation

    We propose a self-supervised representation learning model for the task of unsupervised phoneme boundary detection. The model is a convolutional neural network that operates directly on the raw waveform. It is optimized to identify spectral changes in the signal using the Noise-Contrastive Estimation principle. At test time, a peak detection algorithm is applied over the model outputs to produce the final boundaries. As such, the proposed model is trained in a fully unsupervised manner, with no manual annotations in the form of target boundaries or phonetic transcriptions. We compare the proposed approach to several unsupervised baselines using both the TIMIT and Buckeye corpora. Results suggest that our approach surpasses the baseline models and reaches state-of-the-art performance on both data sets. Furthermore, we experimented with expanding the training set with additional examples from the Librispeech corpus. We evaluated the resulting model on distributions and languages that were not seen during the training phase (English, Hebrew and German) and showed that utilizing additional untranscribed data is beneficial for model performance. Comment: Interspeech 2020 paper
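The test-time step can be sketched as follows, assuming the trained network emits one embedding per frame: score adjacent-frame dissimilarity (a stand-in for the learned spectral-change score) and pick local maxima. The function name and threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def boundary_peaks(frames, threshold=0.1):
    """Given per-frame embeddings of shape (T, D), score the dissimilarity
    of adjacent frames and return indices of local maxima above a threshold.
    A sketch of the peak-detection step applied to model outputs at test time."""
    # cosine similarity between each frame and its successor
    f = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + 1e-8)
    sim = np.sum(f[:-1] * f[1:], axis=1)
    score = 1.0 - sim  # high score = strong spectral change
    peaks = [t for t in range(1, len(score) - 1)
             if score[t] > score[t - 1]
             and score[t] >= score[t + 1]
             and score[t] > threshold]
    return np.array(peaks, dtype=int), score
```

Frame index t in the returned array marks a candidate boundary between frames t and t+1; mapping indices back to seconds requires the model's frame rate.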

    Getting back on the beat: links between auditory-motor integration and precise auditory processing at fast time scales

    The auditory system is unique in its ability to precisely detect the timing of perceptual events and use this information to update motor plans, a skill crucial for language. The characteristics of the auditory system which enable this temporal precision, however, are only beginning to be understood. Previous work has shown that participants who can tap consistently to a metronome have neural responses to sound with greater phase coherence from trial to trial. We hypothesized that this relationship is driven by a link between the updating of motor output by auditory feedback and neural precision. Moreover, we hypothesized that neural phase coherence at both fast time scales (reflecting subcortical processing) and slow time scales (reflecting cortical processing) would be linked to auditory-motor timing integration. To test these hypotheses, we asked participants to synchronize to a pacing stimulus and then changed either the tempo or the timing of the stimulus to assess whether they could rapidly adapt. Participants who could rapidly and accurately resume synchronization had neural responses to sound with greater phase coherence. However, this precise timing was limited to the time scale of 10 ms (100 Hz) or faster; neural phase coherence at slower time scales was unrelated to performance on this task. Auditory-motor adaptation, therefore, specifically depends upon consistent auditory processing at fast, but not slow, time scales.
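Trial-to-trial phase coherence of the kind described above is commonly quantified as inter-trial phase coherence (ITPC): the magnitude of the mean unit phasor across trials at a frequency of interest. A minimal FFT-based sketch, assuming epoched single-trial responses; the function name and single-bin phase estimate are illustrative simplifications of what the study would actually compute.

```python
import numpy as np

def intertrial_phase_coherence(trials, fs, freq):
    """ITPC at one frequency: magnitude of the mean unit phasor across
    trials (1 = perfectly consistent phase, ~0 = random phase).
    `trials` is an (n_trials, n_samples) array of epoched responses."""
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    bin_idx = int(round(freq * n / fs))   # FFT bin nearest the target frequency
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))
```

Evaluating this at, say, 100 Hz versus lower frequencies is one way to separate the fast and slow time scales the abstract contrasts.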

    The eyes have it


    An empirical study of embodied music listening, and its applications in mediation technology
