
    A physiologically inspired model for solving the cocktail party problem.

    At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (an analog of the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single- and multi-talker speech to demonstrate that the physiologically inspired algorithm achieves intelligible reconstruction of an "attended" target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.
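    As a rough illustration of the front end of such a pipeline, the sketch below implements a gammatone filter bank followed by a crude inner-hair-cell stage (half-wave rectification plus compression). The channel count, frequency range, and compression exponent are illustrative assumptions, not the paper's parameters, and SciPy's gammatone designer (available in recent SciPy releases) stands in for a full cochlear model.

```python
import numpy as np
from scipy.signal import gammatone, lfilter

def cochlear_filterbank(x, fs, n_channels=32, fmin=100.0, fmax=7000.0):
    """Return centre frequencies and (n_channels, len(x)) 'nerve' envelopes."""
    # Log-spaced centre frequencies; an ERB spacing would be more faithful.
    cfs = np.geomspace(fmin, fmax, n_channels)
    out = np.empty((n_channels, len(x)))
    for i, cf in enumerate(cfs):
        b, a = gammatone(cf, 'iir', fs=fs)      # 4th-order gammatone channel
        y = lfilter(b, a, x)                    # basilar-membrane motion
        out[i] = np.maximum(y, 0.0) ** 0.3      # rectify + compress (hair cell)
    return cfs, out

fs = 16000
t = np.arange(fs) / fs
cfs, env = cochlear_filterbank(np.sin(2 * np.pi * 440 * t), fs)
```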

    Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception

    Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus, the prime example of auditory phantom perception, we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (the brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
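    The precision-weighting argument can be made concrete with Gaussian fusion: the posterior is a precision-weighted average of prior and likelihood. The sketch below uses purely illustrative numbers to show how raising the likelihood's mean and precision drags the posterior percept away from a "silence" prior.

```python
def posterior(mu_prior, var_prior, mu_like, var_like):
    """Fuse a Gaussian prior and likelihood; return (mean, variance)."""
    w_prior, w_like = 1.0 / var_prior, 1.0 / var_like   # precisions
    var_post = 1.0 / (w_prior + w_like)
    mu_post = var_post * (w_prior * mu_prior + w_like * mu_like)
    return mu_post, var_post

# Perceived 'loudness' given a silence prior (mean 0). Amplified neural
# noise raises the likelihood mean; enhanced precision (low variance)
# lets it dominate, shifting the percept away from silence -> phantom sound.
print(posterior(0.0, 1.0, 1.0, 4.0))    # weak, imprecise evidence:  (0.2, 0.8)
print(posterior(0.0, 1.0, 1.0, 0.25))   # strong, precise evidence:  (0.8, 0.2)
```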

    Perceptual models in speech quality assessment and coding

    The ever-increasing demand for good communications/toll-quality speech has created renewed interest in the perceptual impact of rate compression. Two general areas are investigated in this work, namely speech quality assessment and speech coding. In the field of speech quality assessment, a model is developed which simulates the processing stages of the peripheral auditory system. At the output of the model a "running" auditory spectrum is obtained. This represents the auditory (spectral) equivalent of any acoustic sound such as speech. Auditory spectra from coded speech segments serve as inputs to a second model, which simulates the information centre in the brain that performs the speech quality assessment. [Continues.]
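    A crude stand-in for such a running auditory spectrum is sketched below: a short-time power spectrum pooled into Bark-scale bands using Traunmüller's approximation. A faithful peripheral model would add outer/middle-ear filtering, masking, and temporal integration; the band count and frame length here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def bark(f):
    """Traunmueller's approximation of the Bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def running_auditory_spectrum(x, fs, n_bands=18):
    """Short-time power spectrum pooled into Bark bands, in dB per frame."""
    f, t, Z = stft(x, fs=fs, nperseg=512)
    power = np.abs(Z) ** 2
    edges = np.linspace(bark(f[1]), bark(f[-1]), n_bands + 1)
    band = np.clip(np.digitize(bark(f), edges) - 1, 0, n_bands - 1)
    pooled = np.array([power[band == b].sum(axis=0) for b in range(n_bands)])
    return t, 10.0 * np.log10(pooled + 1e-12)

fs = 16000
x = np.random.randn(fs)   # 1 s of noise as a stand-in for a coded speech segment
t, spec = running_auditory_spectrum(x, fs)
```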

    Waveguide physical modeling of vocal tract acoustics: flexible formant bandwidth control from increased model dimensionality

    Digital waveguide physical modeling is often used as an efficient representation of acoustical resonators such as the human vocal tract. Building on the basic one-dimensional (1-D) Kelly-Lochbaum tract model, various speech synthesis techniques demonstrate improvements to the wave scattering mechanisms in order to better approximate wave propagation in the complex vocal system. Some of these techniques are discussed in this paper, with particular reference to an alternative approach in the form of a two-dimensional (2-D) waveguide mesh model. Emphasis is placed on its ability to produce vowel spectra similar to those present in natural speech, and on how it improves upon the 1-D model. The tract area function is accommodated as model width, rather than translated into acoustic impedance, and as such offers extra control as an additional bounding limit to the model. Results show that the 2-D model introduces approximately linear control over formant bandwidths, leading to attainable realistic values across a range of vowels. Similarly, the 2-D model allows for the application of theoretical reflection values within the tract which, when applied to the 1-D model, result in unrealistically small formant bandwidths and, hence, unnatural-sounding synthesized vowels.
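    The 1-D scattering mechanism referred to above has a compact textbook form: each junction's reflection coefficient follows from the adjacent section areas, k_i = (A_i - A_{i+1}) / (A_i + A_{i+1}). Below is a minimal Kelly-Lochbaum sketch; the glottis/lip reflection values and the area function are illustrative, loss-free choices rather than fitted ones.

```python
import numpy as np

def kelly_lochbaum(areas, excitation, r_glottis=0.98, r_lips=-0.9):
    """1-D Kelly-Lochbaum tract: returns the pressure radiated at the lips."""
    A = np.asarray(areas, float)
    k = (A[:-1] - A[1:]) / (A[:-1] + A[1:])   # junction reflection coefficients
    n = len(A)
    fwd = np.zeros(n)                          # right-going wave per section
    bwd = np.zeros(n)                          # left-going wave per section
    out = np.empty(len(excitation))
    for t, e in enumerate(excitation):
        fwd_in = np.empty(n)
        bwd_out = np.empty(n)
        # Glottal end: inject source, partially reflect the returning wave.
        fwd_in[0] = e + r_glottis * bwd[0]
        # Scattering at each junction between sections i and i+1.
        for i in range(n - 1):
            fwd_in[i + 1] = (1 + k[i]) * fwd[i] - k[i] * bwd[i + 1]
            bwd_out[i] = k[i] * fwd[i] + (1 - k[i]) * bwd[i + 1]
        # Lip end: partial reflection; the remainder radiates as output.
        bwd_out[n - 1] = r_lips * fwd[n - 1]
        out[t] = (1 + r_lips) * fwd[n - 1]
        fwd, bwd = fwd_in, bwd_out
    return out

# Crude /a/-like area function and a 200 Hz impulse-train source (fs ~ 16 kHz):
areas = [1.0, 0.8, 0.7, 0.6, 0.9, 1.5, 2.6, 3.2]
exc = np.zeros(4000)
exc[::80] = 1.0
speech = kelly_lochbaum(areas, exc)
```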

    Investigation of Auditory Encoding and the Use of Auditory Feedback During Speech Production

    Responses to altered auditory feedback during speech production are highly variable. The extent to which auditory encoding contributes to this variability is not well understood. Thirty-nine normal-hearing adults completed a first-formant (F1) manipulation paradigm in which F1 of the vowel /ɛ/ was shifted upwards in frequency towards an /æ/-like vowel in real time. Frequency following responses (FFRs) and envelope following responses (EFRs) were used to measure neuronal activity to the same vowels produced by the participant and by a prototypical talker. Cochlear tuning, measured by SFOAEs and a psychophysical method, was also recorded. Results showed that average F1 production changed to oppose the manipulation. Three metrics of EFR and FFR encoding were evaluated. No reliable relationship was found between speech compensation and evoked-response measures or measures of cochlear tuning. Differences in brainstem encoding of vowels and sharpness of cochlear tuning do not appear to explain the variability observed in speech production.
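    A building block behind such real-time F1 paradigms is tracking F1 itself. The sketch below estimates F1 from a vowel frame via LPC root-finding, assuming librosa is available for the LPC fit; the LPC order, search range, and test frame are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np
import librosa

def estimate_f1(frame, fs, order=12):
    """Return the lowest formant frequency (Hz) of a vowel frame, or None."""
    a = librosa.lpc(frame * np.hanning(len(frame)), order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]                   # upper half-plane poles
    freqs = np.angle(roots) * fs / (2 * np.pi)          # pole angles -> Hz
    freqs = np.sort(freqs[(freqs > 200) & (freqs < 1000)])  # plausible F1 range
    return freqs[0] if len(freqs) else None

fs = 16000
t = np.arange(1024) / fs
# Crude vowel-like test frame with spectral peaks near 600 and 1700 Hz:
frame = np.sin(2 * np.pi * 600 * t) + 0.5 * np.sin(2 * np.pi * 1700 * t)
print(estimate_f1(frame, fs))    # ~600 Hz

# A compensation analysis would compare F1 of tokens produced at baseline
# and during the upward shift: compensation = baseline_f1 - shifted_f1.
```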

    Neurocomputing systems for auditory processing

    This thesis studies neural computation models and neuromorphic implementations of the auditory pathway, with applications to cochlear implants and artificial auditory sensory and processing systems. Very low-power analogue computation is addressed through the design of micropower analogue building blocks and an auditory pre-processing module targeted at cochlear implants. The analogue building blocks have been fabricated and tested in a standard Complementary Metal Oxide Silicon (CMOS) process. The auditory pre-processing module design is based on the cochlea's signal processing mechanisms and low-power microelectronic design methodologies. Compared to existing pre-processing techniques used in cochlear implants, the proposed design has a wider dynamic range and lower power consumption. Furthermore, it provides the phase coding as well as the place coding information that is necessary for enhanced functionality in future cochlear implants. The thesis presents neural computation based approaches to a number of signal-processing problems encountered in cochlear implants, along with techniques that can improve the performance of existing devices. Neural network based models for loudness mapping and pattern recognition based channel selection strategies are described. Compared with state-of-the-art commercial cochlear implants, the results show that the proposed channel selection model produces superior speech sound quality, and the proposed loudness mapping model consumes a substantially smaller amount of memory. Aside from the applications in cochlear implants, this thesis describes a biologically plausible computational model of the auditory pathways to the superior colliculus, based on current neurophysiological findings. The model encapsulates interaural time difference, interaural spectral difference, the monaural pathway, and auditory space map tuning in the inferior colliculus. A biologically plausible Hebbian-like learning rule is proposed for auditory space neural map tuning, and a reinforcement learning method is used for map alignment with other sensory space maps through activity-independent cues. The validity of the proposed auditory pathway model has been verified by simulation using synthetic data. Further, a complete biologically inspired auditory simulation system is implemented in software. The system incorporates models of the external ear and the cochlea, as well as the proposed auditory pathway model. The proposed implementation can mimic the biological auditory sensory system to generate an auditory space map from 3-D sounds. A large number of real 3-D sound signals, including broadband white noise, click noise and speech, are used in the simulation experiments. The effect of auditory space map developmental plasticity is examined by simulating early auditory space map formation and auditory space map alignment with a distorted visual sensory map. Detailed simulation methods, procedures and results are presented.
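    The interaural-time-difference cue on which such midbrain/superior-colliculus models are built can be illustrated with a simple cross-correlation estimator, sketched below. A Jeffress-style delay-line network of coincidence detectors would compute the same quantity neurally; the lag range and test signal here are illustrative assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=0.7e-3):
    """Return the ITD (s) that maximises cross-correlation of the two ears."""
    max_lag = int(max_itd * fs)                 # ~0.7 ms: human head-width limit
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                 right[max(0, l):len(right) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(xc))] / fs

fs = 44100
sig = np.random.randn(fs // 10)
delay = 10                                      # right ear lags by 10 samples
left, right = sig[delay:], sig[:-delay]         # sound arrives left-ear first
print(estimate_itd(left, right, fs))            # ~ +0.23 ms -> source on the left
```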

    Digital neuromorphic auditory systems

    This dissertation presents several digital neuromorphic auditory systems. Neuromorphic systems are capable of running in real time at lower computational cost and power consumption than widely available general-purpose computers. These auditory systems are considered neuromorphic as they are modelled after computational models of the mammalian auditory pathway and are capable of running on digital hardware, or more specifically on a field-programmable gate array (FPGA). The models introduced are categorised into three parts: a cochlear model, an auditory pitch model, and a functional primary auditory cortical (A1) model. The cochlear model is the primary interface for an input sound signal and transmits the 2-D time-frequency representation of the sound to the pitch model as well as to the A1 model. In the pitch model, pitch information is extracted from the sound signal in the form of a fundamental frequency. From the A1 model, timbre information is extracted in the form of the time-frequency envelope of the sound signal. Since these computational auditory models must be implemented on FPGAs, which possess fewer computational resources than general-purpose computers, the algorithms in the models are optimised so that each fits on a single FPGA. The optimisation includes using simplified, hardware-implementable signal processing algorithms. Computational resource information for each model on the FPGA is extracted to understand the minimum resources required to run it, including the quantity of logic modules, the number of registers utilised, and power consumption. Similarity comparisons are also made between the output responses of the computational auditory models in software and in hardware, using pure tones, chirp signals, frequency-modulated signals, moving ripple signals, and musical signals as input. The limitation of the models' responses to musical signals at multiple intensity levels is also presented, along with the use of an automatic gain control algorithm to alleviate such limitations. With real-world musical signals as their inputs, the responses of the models are also tested using classifiers: the response of the auditory pitch model is used for the classification of monophonic musical notes, and the response of the A1 model is used for the classification of musical instruments from their respective monophonic signals. Classification accuracy results are shown for model output responses in both software and hardware. With the hardware-implementable auditory pitch model, classification accuracy stands at 100% for musical notes from the 4th and 5th octaves, covering 24 classes of notes. With the hardware-implementable auditory timbre model, classification accuracy is 92% for 12 classes of musical instruments. Also presented is the difference in memory requirements of the model output responses in software and hardware: the pitch and timbre responses used for the classification exercises occupy 24 and 2 times less memory space, respectively, in hardware than in software.
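    The kind of simplified, hardware-friendly pitch extraction described above can be illustrated with autocorrelation-based fundamental-frequency estimation, sketched below in floating point (an FPGA version would use fixed-point arithmetic); the search range and test tone are illustrative assumptions.

```python
import numpy as np

def estimate_f0(x, fs, fmin=60.0, fmax=1200.0):
    """Return the fundamental frequency (Hz) of a voiced or tonal frame."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation, lag >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)             # plausible lag range
    lag = lo + int(np.argmax(ac[lo:hi]))                # strongest periodicity
    return fs / lag

fs = 16000
t = np.arange(2048) / fs
note_a4 = np.sin(2 * np.pi * 440 * t)                   # monophonic note A4
print(round(estimate_f0(note_a4, fs), 1))               # ~440 Hz (integer-lag quantised)
```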