
    Communications Biophysics

    Contains research objectives and summary of research on nine research projects split into four sections.
    National Institutes of Health (Grant 5 R01 NS11000-03)
    National Institutes of Health (Grant 1 P01 NS13126-01)
    National Institutes of Health (Grant 1 R01 NS11153-01)
    National Institutes of Health (Grant 2 R01 NS10916-02)
    Harvard-M.I.T. Rehabilitation Engineering Center
    U. S. Department of Health, Education, and Welfare (Grant 23-P-55854)
    National Institutes of Health (Grant 1 R01 NS11680-01)
    National Institutes of Health (Grant 5 R01 NS11080-03)
    M.I.T. Health Sciences Fund (Grant 76-07)
    National Institutes of Health (Grant 5 T32 GM07301-02)
    National Institutes of Health (Grant 5 T01 GM01555-10)

    An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity

    Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and is crucial in target-tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., to extract information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards and uses only this directional information, obtained via specific motor behaviour, to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by objects in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz, in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.
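
    A minimal illustrative sketch (Python, not the authors' implementation) of the idea described above: the angular velocity of a moving sound source is learned from successive direction estimates with a simple delta rule, and the update is skipped while the stimulus is occluded. The direction estimates here are synthetic; in the article they come from the lizard peripheral auditory model and the robot's motor behaviour.

        def estimate_velocity(directions, learning_rate=0.1):
            """directions: per-time-step azimuth estimates in degrees, or None while occluded."""
            velocity = 0.0                       # learned angular velocity (deg/time step)
            previous = None
            for azimuth in directions:
                if azimuth is None:              # stimulus occluded: skip the update
                    previous = None
                    continue
                if previous is not None:
                    observed = azimuth - previous                       # observed displacement
                    velocity += learning_rate * (observed - velocity)   # delta-rule update
                previous = azimuth
            return velocity

        # Synthetic stimulus moving at 1.0 deg/time step, occluded between steps 20 and 30.
        true_path = [1.0 * step for step in range(60)]
        observed = [a if not (20 <= step < 30) else None for step, a in enumerate(true_path)]
        print(estimate_velocity(observed))       # converges towards 1.0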

    Communications Biophysics

    Contains research objectives and summary of research on thirteen research projects split into four sections.
    National Institutes of Health (Grant 1 R01 NS10737-01)
    National Institutes of Health (Grant 1 R01 NS10916-01)
    National Institutes of Health (Grant 5 R01 NS11000-02)
    National Institutes of Health (Grant 1 R01 NS11153-01)
    Harvard-M.I.T. Rehabilitation Engineering Center
    U. S. Department of Health, Education, and Welfare (Grant 23-P-55854)
    National Institutes of Health (Grant 1 R01 NS11680-01)
    Norlin Music, Inc.
    Clarence J. LeBel Fund
    National Institutes of Health (Grant 1 R01 NS11080-01A1)
    National Institutes of Health (Grant 5 T01 GM01555-08)
    M.I.T. Health Sciences Fund
    Boston City Hospital Purchase Order 1176-05-21335-C

    Communications Biophysics

    Contains research objectives, summary of research and reports on three research projects.
    National Institutes of Health (Grant 5 P01 GM14940-04)
    National Institutes of Health (Grant 5 T01 GM01555-04)
    National Aeronautics and Space Administration (Grant NGL 22-009-304)

    Three-dimensional point-cloud room model in room acoustics simulations


    Calibration of sound source localisation for robots using multiple adaptive filter models of the cerebellum

    The aim of this research was to investigate the calibration of Sound Source Localisation (SSL) for robots using the adaptive filter model of the cerebellum, and how this could be automatically adapted to multiple acoustic environments. The role of the cerebellum has mainly been identified in the context of motor control, and only in recent years has it been recognised to play a wider role in the senses and cognition. The adaptive filter model of the cerebellum has been successfully applied to a number of robotics applications, but so far none involving the auditory sense. Multiple-model frameworks such as MOdular Selection And Identification for Control (MOSAIC) have also been developed in the context of motor control, and these were the inspiration for adapting audio calibration to multiple acoustic environments; again, application of this approach to the auditory sense is completely new. The thesis showed that it was possible to calibrate the output of an SSL algorithm using the adaptive filter model of the cerebellum, improving performance compared to the uncalibrated SSL. Using an adaptation of the MOSAIC framework, and specifically responsibility estimation, a system was developed that was able to select an appropriate set of cerebellar calibration models and to combine their outputs in proportion to how well each was able to calibrate, improving the SSL estimate in multiple acoustic contexts, including novel ones. The thesis also developed a responsibility predictor, likewise part of the MOSAIC framework, which improved the robustness of the system to abrupt changes in context that could otherwise have resulted in a large performance error. Responsibility prediction also improved robustness to missing ground truth, which can occur in challenging environments where sensory feedback of ground truth becomes impaired; this case has not been addressed in the MOSAIC literature, adding to the novelty of the thesis. The utility of the so-called cerebellar chip was further demonstrated through the development of a responsibility predictor based on the adaptive filter model of the cerebellum, rather than the more conventional function-fitting neural network used in the literature. Lastly, it was demonstrated that the multiple cerebellar calibration architecture is capable of limited self-organisation from a de novo state with a predetermined number of models, and that the responsibility predictor could learn against its model after self-organisation and, to a limited extent, during it. The thesis addresses the important question of how a robot could improve its ability to listen in multiple, challenging acoustic environments, and recommends future work to develop this ability.
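
    The responsibility-estimation step described above can be illustrated with a small sketch (an assumption-laden Python example, not the thesis code): each calibration model proposes a corrected azimuth, responsibilities are computed as a normalised weighting of recent squared prediction errors, and the combined estimate is the responsibility-weighted sum, in the spirit of MOSAIC.

        import numpy as np

        def responsibilities(errors, sigma=5.0):
            """Soft credit assignment: a smaller recent error yields a larger responsibility."""
            likelihood = np.exp(-np.asarray(errors, dtype=float) ** 2 / (2.0 * sigma ** 2))
            return likelihood / likelihood.sum()

        def combine(model_outputs, errors):
            """Responsibility-weighted combination of the models' calibrated estimates."""
            weights = responsibilities(errors)
            return float(np.dot(weights, model_outputs)), weights

        # Three hypothetical calibration models correcting a raw SSL azimuth estimate;
        # the second model has the smallest recent error, so it dominates the output.
        outputs = np.array([38.0, 45.0, 50.0])        # calibrated azimuths (deg)
        recent_errors = np.array([12.0, 2.0, 9.0])    # recent prediction errors (deg)
        estimate, weights = combine(outputs, recent_errors)
        print(estimate, weights)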

    Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication

    A new theory of mammalian hearing is presented, which accounts for the auditory image in the midbrain (inferior colliculus) of objects in the acoustical environment of the listener. It is shown that the ear is a temporal imaging system comprising three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision: diffraction between the object and the eye, spatial lensing by the lens, and a second diffraction between the lens and the retina. Unlike the eye, the human auditory system is shown to be naturally defocused, so that coherent stimuli are unaffected by the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration of the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously introduce the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves the signal coherence. It provides an anchor for a dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications for hearing impairments are discussed as well.
    Comment: 603 pages, 131 figures, 13 tables, 1570 references
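
    The three temporal-imaging elements named in the abstract can be sketched numerically as follows (an illustrative Python example under simplifying assumptions, not the treatise's model): group-delay dispersion is applied as a quadratic spectral phase and the time lens as a quadratic temporal phase, in analogy with free-space diffraction and a thin lens in spatial optics. Parameter values are arbitrary and do not satisfy any particular imaging condition.

        import numpy as np

        def disperse(envelope, dt, phi2):
            """Apply group-delay dispersion phi2 (s^2/rad) as a quadratic spectral phase."""
            omega = 2 * np.pi * np.fft.fftfreq(envelope.size, dt)   # angular frequency grid
            spectrum = np.fft.fft(envelope)
            return np.fft.ifft(spectrum * np.exp(-0.5j * phi2 * omega ** 2))

        def time_lens(envelope, dt, focal_time):
            """Apply a time lens: a quadratic temporal phase with 'focal time' focal_time (s^2/rad)."""
            t = (np.arange(envelope.size) - envelope.size // 2) * dt
            return envelope * np.exp(0.5j * t ** 2 / focal_time)

        # Gaussian envelope passed through dispersion -> time lens -> dispersion,
        # mirroring the cochlear dispersion, time lensing and neural dispersion stages
        # named in the abstract (arbitrary illustration values).
        dt = 1e-5                                    # 10 us sampling of the envelope
        t = (np.arange(4096) - 2048) * dt
        envelope = np.exp(-t ** 2 / (2 * (5e-4) ** 2)).astype(complex)   # 0.5 ms Gaussian
        image = disperse(time_lens(disperse(envelope, dt, 2e-7), dt, 1e-7), dt, 2e-7)
        print(abs(image).max())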

    Postnatal development of the afferent innervation of the mammalian cochlea

    The adult mammalian cochlea receives dual afferent innervation: the inner hair cells (IHCs) are innervated exclusively by type I spiral ganglion neurons (SGNs), whereas the outer hair cells (OHCs) are innervated by type II SGNs. We have characterized the reorganization and morphology of this dual afferent innervation pattern as it is established in the developing rat cochlea. Before the cochlear afferent innervation reaches its mature configuration, there is an initial mismatch in which both populations of SGNs innervate both types of sensory hair cells: during the first postnatal week in the rat cochlea, type I SGN innervation is eliminated from the OHCs and type II SGN innervation is eliminated from the IHCs. This reorganization occurs during the first two postnatal weeks, just before the onset of hearing. Our data reveal distinct phases in the development of the afferent innervation of the organ of Corti: neurite refinement, with the formation of the outer spiral bundles innervating the OHCs; and neurite retraction and synaptic pruning, which eliminate type I SGN innervation of OHCs while retaining their supply to IHCs. Such a reorganization also makes the cochlea a model system for studying CNS synapse development, plasticity and elimination. The present article summarizes recent progress in our understanding of the afferent innervation of the cochlea.
    Biomedical Reviews 2012; 23: 37-52

    Comparison of electrophysiological auditory measures in fishes

    Sounds provide fishes with important information used to mediate behaviors such as predator avoidance, prey detection, and social communication. How we measure auditory capabilities in fishes therefore has crucial implications for interpreting how individual species use acoustic information in their natural habitat. Recent analyses have highlighted differences between behaviorally and electrophysiologically determined hearing thresholds, but less is known about how physiological measures at different auditory processing levels compare within a single species. Here we provide one of the first comparisons of auditory threshold curves determined by different recording methods in a single fish species, the soniferous Hawaiian sergeant fish Abudefduf abdominalis, and review past studies on representative fish species with tuning curves determined by different methods. The Hawaiian sergeant is a colonial benthic-spawning damselfish (Pomacentridae) that produces low-frequency, low-intensity sounds associated with reproductive and agonistic behaviors. We compared saccular potentials, auditory evoked potentials (AEPs), and single-neuron recordings from acoustic nuclei of the hindbrain and the midbrain torus semicircularis. We found that hearing thresholds were lowest at low frequencies (~75–300 Hz) for all methods, which matches the spectral components of sounds produced by this species. However, thresholds at best frequency determined via single-cell recordings were ~15–25 dB lower than those measured by AEP and saccular potential techniques. While none of these physiological techniques gives us a true measure of the auditory “perceptual” abilities of a naturally behaving fish, this study highlights that different methodologies can reveal a similar detectable range of frequencies for a given species, while absolute hearing sensitivity may vary considerably.
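
    As a purely illustrative sketch (hypothetical numbers in Python, not data from the study), the comparison amounts to locating each method's best frequency on its threshold curve and measuring the offset in absolute sensitivity between methods:

        frequencies = [75, 100, 150, 200, 300, 400, 600]            # Hz
        thresholds = {                                              # dB re 1 uPa, hypothetical values
            "saccular potential": [118, 115, 114, 116, 120, 127, 135],
            "AEP":                [117, 114, 113, 115, 119, 126, 134],
            "single neuron":      [100,  97,  95,  98, 103, 112, 121],
        }

        for method, curve in thresholds.items():
            best = curve.index(min(curve))                          # index of best (lowest) threshold
            print(f"{method:18s} best frequency {frequencies[best]} Hz, threshold {min(curve)} dB")

        best_thresholds = [min(curve) for curve in thresholds.values()]
        print(f"spread across methods at best frequency: {max(best_thresholds) - min(best_thresholds)} dB")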