59 research outputs found

    A computational framework for sound segregation in music signals

    Doctoral thesis in Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

    otorhinolaryngology; neurosciences; hearing

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

    The International Symposium on Hearing is a prestigious, triennial gathering where world-class scientists present and discuss the most recent advances in the field of human and animal hearing research. The 2015 edition will particularly focus on integrative approaches linking physiological, psychophysical and cognitive aspects of normal and impaired hearing. Like previous editions, the proceedings will contain about 50 chapters ranging from basic to applied research, and of interest to neuroscientists, psychologists, audiologists, engineers, otolaryngologists, and artificial intelligence researchers.


    Auditory Streaming: Behavior, Physiology, and Modeling

    Auditory streaming is a fundamental aspect of auditory perception. It refers to the ability to parse mixed acoustic events into meaningful streams, where each stream is assumed to originate from a separate source. Despite wide interest and increasing scientific investigation over the last decade, the neural mechanisms underlying streaming remain largely unknown. A simple example of this mystery concerns the streaming of simple tone sequences, and the general assumption that separation along the tonotopic axis is sufficient for stream segregation. This dissertation research casts doubt on the validity of that assumption. First, behavioral measures of auditory streaming in ferrets show that they can serve as an animal model for studying auditory streaming. Second, responses from neurons in the primary auditory cortex (A1) of ferrets show that spectral components well separated in frequency produce comparably segregated responses along the tonotopic axis, whether presented synchronously or consecutively, despite the substantial differences in their streaming percepts when measured psychoacoustically in humans. These results argue against the notion that tonotopic separation per se is a sufficient neural correlate of stream segregation. Third, comparing responses during behavior with those in the passive condition, spiking activity becomes more temporally correlated between neurons belonging to the same stream and less correlated between neurons belonging to different streams. Rapid task-related plasticity of neural receptive fields shows a pattern consistent with these changes in correlation. Taken together, these results indicate that temporal coherence is a plausible neural correlate of auditory streaming.
    Finally, inspired by the above biological findings, we propose a computational model of auditory scene analysis that uses temporal coherence as the primary criterion for predicting stream formation. The promising results of this dissertation research significantly advance our understanding of auditory streaming and perception.
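
    The temporal-coherence criterion described in this abstract can be sketched as follows: channels whose temporal envelopes fluctuate together are grouped into one stream, while anti-correlated channels are segregated. This is a minimal illustration, not the dissertation's actual model; the envelope data, correlation threshold, and greedy grouping rule are all assumptions made for the example.

    ```python
    import numpy as np

    def coherence_matrix(envelopes):
        """Pairwise temporal correlation of channel envelopes (channels x time)."""
        return np.corrcoef(envelopes)

    def group_channels(envelopes, thresh=0.5):
        """Greedy grouping: channels whose envelope correlation exceeds
        `thresh` are assigned to the same stream label (illustrative rule)."""
        C = coherence_matrix(envelopes)
        n = C.shape[0]
        labels = -np.ones(n, dtype=int)
        next_label = 0
        for i in range(n):
            if labels[i] == -1:          # start a new stream
                labels[i] = next_label
                next_label += 1
            for j in range(i + 1, n):    # attach coherent, unlabeled channels
                if C[i, j] > thresh and labels[j] == -1:
                    labels[j] = labels[i]
        return labels

    # Alternating tones: anti-correlated envelopes -> two streams
    alternating = np.array([[1., 0., 1., 0., 1., 0.],
                            [0., 1., 0., 1., 0., 1.]])
    # Synchronous tones: coherent envelopes -> one stream
    synchronous = np.array([[1., 0., 1., 0., 1., 0.],
                            [1., 0., 1., 0., 1., 0.]])

    print(group_channels(alternating))  # two distinct labels
    print(group_channels(synchronous))  # one shared label
    ```

    The toy example mirrors the psychoacoustic finding quoted above: identical tonotopic separation yields one stream or two depending only on the temporal relationship between the channels.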

    Neural Basis and Computational Strategies for Auditory Processing

    Our senses are our window to the world, and hearing is the window through which we perceive the world of sound. While seemingly effortless, the process of hearing involves complex transformations by which the auditory system consolidates acoustic information from the environment into perceptual and cognitive experiences. Studies of auditory processing try to elucidate the mechanisms underlying the function of the auditory system, and to infer computational strategies that are valuable both clinically and intellectually, hence contributing to our understanding of the function of the brain. In this thesis, we adopt both an experimental and a computational approach to various aspects of auditory processing. We first investigate the neural basis underlying the function of the auditory cortex, and explore the dynamics and computational mechanisms of cortical processing. Our findings offer physiological evidence for a role of primary cortical neurons in the integration of sound features at different time constants, and possibly in the formation of auditory objects. Based on physiological principles of sound processing, we explore computational implementations addressing specific perceptual questions. We exploit our knowledge of the neural mechanisms of cortical auditory processing to formulate models addressing the problems of speech intelligibility and auditory scene analysis. The intelligibility model focuses on a computational approach for evaluating loss of intelligibility, inspired by mammalian physiology and human perception. It is based on a multi-resolution filter-bank implementation of cortical response patterns, which extends into a robust metric for assessing loss of intelligibility in communication channels and speech recordings. This same cortical representation is extended further to develop a computational scheme for auditory scene analysis.
    The model maps perceptual principles of auditory grouping and stream formation into a computational system that combines aspects of bottom-up, primitive sound processing with an internal representation of the world. It is based on a framework of unsupervised adaptive learning with Kalman estimation. The model is extremely valuable in exploring various aspects of sound organization in the brain, allowing us to gain insight into the neural basis of auditory scene analysis, as well as practical implementations for sound separation in "cocktail-party" situations.
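
    The multi-resolution intelligibility idea described in this abstract can be sketched as a template comparison: decompose the clean and degraded speech envelopes with a bank of modulation-rate filters, then score how well the degraded representation matches the clean one. Everything here is an illustrative simplification under stated assumptions; the Gaussian FFT-domain filters, rate values, and correlation score are not the thesis's actual metric.

    ```python
    import numpy as np

    def modulation_filterbank(envelope, rates, fs):
        """Band-pass the temporal envelope at several modulation rates
        using FFT-domain Gaussian filters (a crude stand-in for the
        multi-resolution cortical filter bank)."""
        spec = np.fft.rfft(envelope)
        freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
        bands = []
        for r in rates:
            gain = np.exp(-0.5 * ((freqs - r) / (0.5 * r)) ** 2)
            bands.append(np.fft.irfft(spec * gain, n=len(envelope)))
        return np.array(bands)

    def intelligibility_index(clean, degraded, rates=(2, 4, 8, 16), fs=100):
        """Score in [0, 1]: correlation between the multi-rate
        representations of the clean and degraded envelopes."""
        T = modulation_filterbank(clean, rates, fs).ravel()
        N = modulation_filterbank(degraded, rates, fs).ravel()
        return float(np.clip(np.corrcoef(T, N)[0, 1], 0.0, 1.0))

    rng = np.random.default_rng(0)
    t = np.arange(400) / 100.0
    clean = 1.0 + np.sin(2 * np.pi * 4 * t)            # 4 Hz speech-like envelope
    noisy = clean + 0.5 * rng.standard_normal(t.size)  # simulated channel degradation

    print(round(intelligibility_index(clean, clean), 2))  # perfect match -> 1.0
    print(round(intelligibility_index(clean, noisy), 2))  # degraded -> below 1.0
    ```

    The design choice mirrored here is that intelligibility is judged in the modulation domain rather than on the raw waveform, so distortions that leave slow envelope modulations intact are penalized less than those that destroy them.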