
    Neural architecture for echo suppression during sound source localization based on spiking neural cell models

    This thesis investigates the biological background of the psycho-acoustic precedence effect, which enables humans to suppress echoes during the localization of sound sources. It provides a technically feasible and biologically plausible model for sound source localization under echoic conditions, ready for use in technical systems during human-machine interaction. The model is based on the author's own electro-physiological experiments in the Mongolian gerbil. These results, obtained for the first time in gerbils, reveal a special behavior of specific cells of the dorsal nucleus of the lateral lemniscus (DNLL), a distinct region in the auditory brainstem. The persistent inhibition observed in these cells appears to form the basis of echo suppression at higher auditory centers. The developed model proved capable of duplicating this behavior and suggests that a strong and temporally precise hyperpolarization is the underlying physiological mechanism. The neural architecture models the inner ear as well as five major nuclei of the auditory brainstem in their connectivity and intrinsic dynamics. It represents a new type of neural modeling referred to as Spike Interaction Models (SIM).
SIM use the precise spatio-temporal interaction of single spike events for coding and processing of neural information. Their basic elements are integrate-and-fire neurons and Hebbian synapses, which have been extended by specially designed dynamic transfer functions. The model is capable of detecting time differences as small as 10 microseconds and employs the principles of coincidence detection and precise local inhibition for auditory processing. It consists exclusively of elements of a specifically designed Neural Base Library (NBL), which has been developed for multi-purpose modeling of Spike Interaction Models. This library extends the commercially available dynamic simulation environment MATLAB/SIMULINK with different models of neurons and synapses simulating the intrinsic dynamic properties of neural cells. The use of this library enables engineers as well as biologists to design their own biologically plausible models of neural information processing without the need for detailed programming skills. Its graphical interface provides access to structural as well as parametric changes and can display the time course of microscopic cell parameters as well as macroscopic firing patterns during and after simulations. Two basic elements of the Neural Base Library have been prepared for implementation in specialized mixed analog-digital circuitry. First silicon implementations were realized by the team of the DFG Graduiertenkolleg GRK 164 and proved the possibility of fully parallel online processing of sounds. Using the automated layout processor under development in the Graduiertenkolleg, it will be possible to design specific processors in order to apply the principles of distributed biological information processing to technical systems. These processors differ from classical von Neumann processors by using spatio-temporal spike patterns instead of sequential binary values.
They will extend the digital coding principle by the dimensions of space (spatial neighborhood), time (frequency, phase and amplitude) as well as the dynamics of analog potentials, and introduce a new type of information processing. This thesis consists of seven chapters, dedicated to the different areas of computational neuroscience. Chapter 1 provides the motivation of this study, arising from the attempt to investigate the biological principles of sound processing and make them available to technical systems interacting with humans under real-world conditions. Furthermore, five reasons to use Spike Interaction Models are given and their novel characteristics are discussed. Chapter 2 introduces the biological principles of sound source localization and the precedence effect. Current hypotheses on echo suppression and the underlying principles of the precedence effect are discussed with reference to a small selection of physiological and psycho-acoustical experiments. Chapter 3 describes the developed Neural Base Library and introduces each of the designed neural simulation elements. It also explains the developed mathematical functions of the dynamic compartments and describes their general usage for dynamic simulation of spiking neural networks. Chapter 4 introduces the developed model of the auditory brainstem, starting from the filtering cascade in the inner ear, via more than 200 cells and 400 synapses in five auditory regions, up to the directional sensor at the level of the auditory midbrain. It presents the employed parameter sets and contains basic hints for the setup and configuration of the simulation environment. Chapter 5 consists of three sections, where the first describes the setup and results of the author's own electro-physiological experiments. The second describes the results of 104 model simulations, performed to test the model's ability to duplicate psycho-acoustical effects such as the precedence effect.
Finally, the last section of this chapter contains the results of 54 real-world experiments using natural sound signals, recorded under normal as well as highly reverberant conditions. Chapter 6 compares the achieved results to other biologically motivated and technical models for echo suppression and sound source localization and introduces the current status of the silicon implementation. Chapter 7 finally provides a short summary and an outlook toward future research subjects and areas of investigation. This thesis aims to contribute to the field of computational neuroscience by bridging the gap between biological investigation, computational modeling and silicon engineering in a specific field of application. It suggests a new spatio-temporal paradigm of information processing in order to make the capabilities of biological systems accessible to technical applications.
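
The coincidence-detection principle at the heart of this abstract's Spike Interaction Models can be sketched with a single leaky integrate-and-fire unit that fires only when spikes from the left and right channels arrive nearly together. All parameters below are illustrative assumptions, not values from the thesis.

```python
# Minimal coincidence-detector sketch: a leaky integrate-and-fire (LIF)
# unit whose short membrane time constant makes it fire only when the
# left and right input spikes arrive close together in time.
# Parameters are illustrative, not taken from the thesis.

def lif_coincidence(left_spikes, right_spikes, dt=1e-5, t_max=2e-3,
                    tau=2e-4, w=0.6, threshold=1.0):
    """Return the times (s) at which the coincidence unit fires.

    A single input (weight 0.6) stays below threshold (1.0) and decays
    with time constant tau, so only near-simultaneous inputs sum to a
    suprathreshold potential.
    """
    v = 0.0
    fires = []
    for i in range(int(t_max / dt)):
        t = i * dt
        v *= (1.0 - dt / tau)              # membrane leak
        for s in left_spikes + right_spikes:
            if abs(s - t) < dt / 2:        # a spike arrives this step
                v += w
        if v >= threshold:
            fires.append(t)
            v = 0.0                        # reset after firing
    return fires

# Coincident inputs drive the unit over threshold...
assert lif_coincidence([5e-4], [5e-4])
# ...while inputs 500 microseconds apart decay before they can sum.
assert not lif_coincidence([5e-4], [1e-3])
```

With a sufficiently short membrane time constant and fine time step, the same mechanism discriminates much smaller input offsets, which is the principle behind the microsecond-scale sensitivity described above.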

    Neuromorphic auditory computing: towards a digital, event-based implementation of the hearing sense for robotics

    This work aims to advance the development of neuromorphic audio processing systems in robots through the implementation of an open-source neuromorphic cochlea, event-based models of primary auditory nuclei, and their potential use in real-time robotics applications. First, the main gaps in working with neuromorphic cochleae were identified. Among them, the accessibility and usability of such sensors can be considered a critical aspect. Analog silicon cochleae may not be as flexible as desired for some applications; FPGA-based sensors, however, can be considered an alternative for fast prototyping and proof-of-concept applications. Therefore, a software tool was implemented for generating open-source, user-configurable Neuromorphic Auditory Sensor models that can be deployed on any FPGA, removing the aforementioned barriers for the neuromorphic research community. Next, the biological principles of the animal auditory system were studied with the aim of continuing the development of the Neuromorphic Auditory Sensor. More specifically, the principles of binaural hearing were studied in depth in order to implement event-based models that perform real-time sound source localization tasks. Two different approaches were followed to extract interaural time differences from event-based auditory signals. On the one hand, a digital, event-based design of the Jeffress model was implemented. On the other hand, a novel digital implementation of the Time Difference Encoder model was designed and implemented on FPGA. Finally, three different robotic platforms were used to evaluate the performance of the proposed real-time neuromorphic audio processing architectures. An audio-guided central pattern generator was used to control a hexapod robot in real time using spiking neural networks on SpiNNaker. Then, a sensory integration application was implemented, combining sound source localization and obstacle avoidance for autonomous robot navigation.
Lastly, the Neuromorphic Auditory Sensor was integrated within the iCub robotic platform, the first time an event-based cochlea has been used in a humanoid robot. Finally, the conclusions obtained are presented, and new features and improvements are proposed for future work.
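
The Jeffress place-coding idea used here for ITD extraction can be sketched compactly: each "place" pairs one ear's event stream with a different internal delay, and the place whose delay cancels the external interaural time difference sees the most coincidences. Event times, delays, and the coincidence window below are illustrative assumptions.

```python
# Sketch of the Jeffress model readout: pick the internal delay that
# maximizes left/right event coincidences. Values are illustrative.

def jeffress_best_delay(left_events, right_events, delays, window=20e-6):
    """Return the internal delay (s) yielding the most coincidences
    between the delayed left channel and the right channel."""
    def coincidences(d):
        return sum(1 for l in left_events for r in right_events
                   if abs((l + d) - r) <= window)
    return max(delays, key=coincidences)

# A sound from the right reaches the right ear ~300 microseconds
# earlier, so the left channel lags; an internal delay of -300 us on
# the left channel re-aligns the event trains.
left = [1.0e-3, 2.0e-3, 3.0e-3]
right = [t - 300e-6 for t in left]
delays = [d * 1e-6 for d in range(-500, 501, 100)]
best = jeffress_best_delay(left, right, delays)
assert abs(best + 300e-6) < 1e-9
```

In an event-based FPGA design, the delay line and coincidence counters map naturally onto shift registers and AND gates, which is what makes this model attractive for digital neuromorphic hardware.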

    Neuromorphic audio processing through real-time embedded spiking neural networks.

    In this work, novel speech recognition and audio processing systems based on a spiking artificial cochlea and neural networks are proposed and implemented. First, the biological behavior of the animal auditory system is analyzed and studied, along with the classical mechanisms of audio signal processing for sound classification, including Deep Learning techniques. Based on these studies, novel audio processing and automatic audio signal recognition systems are proposed, using a bio-inspired auditory sensor as input. A desktop software tool called NAVIS (Neuromorphic Auditory VIsualizer) for post-processing the information obtained from spiking cochleae was implemented, allowing researchers to analyze these data for further research. Next, using a 4-chip SpiNNaker hardware platform and Spiking Neural Networks, a system is proposed for classifying different time-independent audio signals, making use of a Neuromorphic Auditory Sensor and frequency studies obtained with NAVIS. To prove the robustness and analyze the limitations of the system, the input audio was disturbed, simulating extremely noisy environments. Deep Learning mechanisms, particularly Convolutional Neural Networks, are trained and used to differentiate between healthy persons and pathological patients by detecting murmurs from heart recordings after integrating the spike information from the signals using a neuromorphic auditory sensor. Finally, a similar approach is used to train Spiking Convolutional Neural Networks (SCNNs) for speech recognition tasks. A novel SCNN architecture for time-dependent signal classification is proposed, using a buffered layer that adapts the information from a real-time input domain to a static domain. The system was deployed on a 48-chip SpiNNaker platform. Finally, the performance and efficiency of these systems were evaluated, and conclusions were drawn and improvements proposed for future work.
Premio Extraordinario de Doctorado U
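
The buffered layer described above, which adapts a real-time input domain to a static domain, can be illustrated with a simple event-binning sketch: a stream of spike events is collected into fixed-duration frames, each of which is a static vector that a conventionally trained network can consume. Channel count and frame length below are illustrative placeholders, not the thesis's values.

```python
# Sketch of a buffering layer: bin (timestamp_ms, channel) spike events
# into per-frame spike-count vectors. Parameters are illustrative.

def buffer_events(events, n_channels=4, frame_ms=10.0):
    """Return a list of frames; each frame is a list of per-channel
    spike counts, i.e. a static vector for a downstream classifier."""
    if not events:
        return []
    n_frames = int(max(t for t, _ in events) // frame_ms) + 1
    frames = [[0] * n_channels for _ in range(n_frames)]
    for t, ch in events:
        frames[int(t // frame_ms)][ch] += 1
    return frames

stream = [(1.0, 0), (2.5, 1), (12.0, 0), (13.0, 0), (25.0, 3)]
frames = buffer_events(stream)
assert frames[0] == [1, 1, 0, 0]   # events at 1.0 and 2.5 ms
assert frames[1] == [2, 0, 0, 0]   # two channel-0 events in 10-20 ms
assert frames[2] == [0, 0, 0, 1]
```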

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. As a social robot, Probo is classified as a social interface supporting non-verbal communication; its social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, simulating all motions of the robot and providing visual feedback to the operator. Additionally, the model allows us to advance user-testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. The input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis and object identification. The stimuli influence the attention and homeostatic system, used to define the robot's point of attention, current emotional state and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator.
All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
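
The combine-then-smooth motion pipeline described above can be sketched as a weighted blend of operator-triggered and autonomous trajectories followed by a low-pass filter. The blend weight and smoothing factor below are generic assumptions, not values from the dissertation.

```python
# Illustrative motion pipeline: blend two joint-angle trajectories,
# then exponentially smooth the result before sending it to actuators.
# Weights and the smoothing factor are assumed, not from the thesis.

def blend_and_smooth(operator_traj, reactive_traj, w_op=0.7, alpha=0.3):
    """Blend two equal-length trajectories and low-pass filter them.

    alpha in (0, 1] is the exponential-smoothing factor; smaller
    values yield smoother, more sluggish motion.
    """
    blended = [w_op * o + (1.0 - w_op) * r
               for o, r in zip(operator_traj, reactive_traj)]
    smoothed = [blended[0]]
    for target in blended[1:]:
        smoothed.append(smoothed[-1] + alpha * (target - smoothed[-1]))
    return smoothed

# An abrupt operator step becomes a gradual actuator ramp.
out = blend_and_smooth([0, 10, 10, 10], [0, 0, 0, 0])
assert out[0] == 0
assert out[1] < out[2] < out[3] <= 7   # monotone approach to the blend
```

Smoothing at the final stage keeps the output continuous even when the operator and the reactive systems issue conflicting, abruptly changing targets.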

    A white paper: NASA virtual environment research, applications, and technology

    Research support for Virtual Environment technology development has been a part of NASA's human factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division, Ames Research Center, which resulted in the origination of this technology. Since 1985, other Centers have begun using and developing this technology. At each research and space flight center, NASA missions have been major drivers of the technology. This White Paper was the joint effort of all the Centers which have been involved in the development of the technology and its applications to their unique missions. Appendix A is the list of those who have worked to prepare the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. This White Paper describes the technology and its applications in NASA Centers (Chapters 1, 2 and 3), the potential roles it can take in NASA (Chapters 4 and 5), and a roadmap of the next 5 years (FY 1994-1998). The audience for this White Paper consists of managers, engineers, scientists and the general public with an interest in Virtual Environment technology. Those who read the paper will determine whether this roadmap, or others, is to be followed.

    Neurocomputing systems for auditory processing

    This thesis studies neural computation models and neuromorphic implementations of the auditory pathway, with applications to cochlear implants and artificial auditory sensory and processing systems. Very low-power analogue computation is addressed through the design of micropower analogue building blocks and an auditory pre-processing module targeted at cochlear implants. The analogue building blocks have been fabricated and tested in a standard Complementary Metal-Oxide-Semiconductor (CMOS) process. The auditory pre-processing module design is based on the signal processing mechanisms of the cochlea and low-power microelectronic design methodologies. Compared to existing pre-processing techniques used in cochlear implants, the proposed design has a wider dynamic range and lower power consumption. Furthermore, it provides the phase-coding as well as the place-coding information that is necessary for enhanced functionality in future cochlear implants. The thesis presents neural computation based approaches to a number of signal-processing problems encountered in cochlear implants. Techniques that can improve the performance of existing devices are also presented. Neural network based models for loudness mapping and pattern recognition based channel selection strategies are described. Compared with state-of-the-art commercial cochlear implants, the thesis results show that the proposed channel selection model produces superior speech sound quality, and the proposed loudness mapping model consumes a substantially smaller amount of memory. Aside from the applications in cochlear implants, this thesis describes a biologically plausible computational model of the auditory pathways to the superior colliculus, based on current neurophysiological findings. The model encapsulates interaural time difference, interaural spectral difference, the monaural pathway and auditory space map tuning in the inferior colliculus.
A biologically plausible Hebbian-like learning rule is proposed for auditory space neural map tuning, and a reinforcement learning method is used for map alignment with other sensory space maps through activity-independent cues. The validity of the proposed auditory pathway model has been verified by simulation using synthetic data. Further, a complete biologically inspired auditory simulation system has been implemented in software. The system incorporates models of the external ear and the cochlea, as well as the proposed auditory pathway model. The proposed implementation can mimic the biological auditory sensory system to generate an auditory space map from 3-D sounds. A large set of real 3-D sound signals, including broadband white noise, clicks and speech, is used in the simulation experiments. The effect of developmental plasticity in the auditory space map is examined by simulating early auditory space map formation and auditory space map alignment with a distorted visual sensory map. Detailed simulation methods, procedures and results are presented.
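
A Hebbian-like rule of the kind described above can be sketched in a few lines: weights from input cells to a map neuron grow with correlated pre- and postsynaptic activity, with normalization keeping the total weight bounded. The learning rate and the L1 normalization are generic assumptions, not the thesis's exact rule.

```python
# Minimal Hebbian-like tuning sketch: dw_i = lr * pre_i * post,
# followed by L1 normalization so weights compete for a fixed total.
# The rate and normalization scheme are illustrative assumptions.

def hebbian_step(weights, pre, post, lr=0.1):
    """One Hebbian update of input weights onto a map neuron."""
    updated = [w + lr * x * post for w, x in zip(weights, pre)]
    total = sum(updated)
    return [w / total for w in updated]

w = [0.25, 0.25, 0.25, 0.25]
# Repeatedly pair strong input on channel 1 with postsynaptic firing:
for _ in range(20):
    pre = [0.0, 1.0, 0.0, 0.0]
    post = sum(wi * xi for wi, xi in zip(w, pre))  # simple linear unit
    w = hebbian_step(w, pre, post)
# The map neuron's tuning shifts toward the correlated input channel.
assert w[1] == max(w)
assert w[1] > 0.4
```

Because normalization makes the channels compete, repeated correlated input sharpens the neuron's spatial tuning, which is the essence of map formation.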

    Experimental study of aural discrimination between speech and non-speech


    Towards understanding the role of central processing in release from masking

    People with normal hearing have the ability to listen to a desired target sound while filtering out unwanted sounds in the background. However, most patients with hearing impairment struggle in noisy environments, a perceptual deficit which current hearing aids and cochlear implants cannot resolve. Even though peripheral dysfunction of the ears undoubtedly contributes to this deficit, mounting evidence has implicated central processing in the inability to detect sounds in background noise. Therefore, it is essential to better understand the underlying neural mechanisms by which target sounds are dissociated from competing maskers. This research focuses on two phenomena that help suppress background sounds: 1) dip-listening, and 2) directional hearing. When background noise fluctuates slowly over time, both humans and animals can listen in the dips of the noise envelope to detect a target sound, a phenomenon referred to as dip-listening. Detection of the target sound is facilitated by a central neuronal mechanism called envelope locking suppression. At both positive and negative signal-to-noise ratios (SNRs), the presence of target energy can suppress the strength with which neurons in auditory cortex track background sound, at least in anesthetized animals. However, in humans and animals, most of the perceptual advantage gained by listening in the dips of fluctuating noise emerges when a target is softer than the background sound. This raises the possibility that SNR shapes the reliance on different processing strategies, a hypothesis tested here in awake, behaving animals. Neural activity of Mongolian gerbils is measured by chronic implantation of silicon probes in the core auditory cortex. Using appetitive conditioning, gerbils detect target tones in the presence of temporally fluctuating amplitude-modulated background noise, called the masker. Using rate- vs. timing-based decoding strategies, analysis of single-unit activity shows that both mechanisms can be used for detecting tones at positive SNRs. However, only temporal decoding provides an SNR-invariant readout strategy that is viable at both positive and negative SNRs. In addition to dip-listening, spatial cues can facilitate the dissociation of target sounds from background noise. Specifically, an important cue for computing sound direction is the difference in arrival time of acoustic energy reaching each ear, called the interaural time difference (ITD). ITDs allow localization of low-frequency sounds from left to right inside the listener's head, also called sound lateralization. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here, two prevalent theories of sound localization are observed to make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. In this research, the computation of sound location with ITDs is tested through behavioral experiments on sound lateralization. Four groups of normally hearing listeners lateralize sounds based on ITDs as a function of sound intensity, exposure hemisphere, and stimulus history. Stimuli consist of low-frequency band-limited white noise. Statistical analysis, which partials out overall differences between listeners, is inconsistent with the place-coding scheme of sound localization and supports the hypothesis that human sound localization is instead encoded through a population rate code.
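
The opposing prediction of the hemispheric-difference model can be sketched numerically: perceived laterality is read from the firing-rate difference of two broadly tuned channels, so if overall rates scale down with sound level, the decoded direction shrinks toward the midline. The sigmoid tuning shapes and gain factor below are illustrative assumptions, not fitted values from the study.

```python
# Sketch of a hemispheric-difference (rate-code) readout of ITD:
# two broadly tuned channels with opposite sigmoid tuning, decoded as
# their rate difference. Tuning slope and gain are illustrative.

import math

def decoded_laterality(itd_us, level_gain=1.0, slope=1 / 200):
    """Return (right - left) channel rate for a given ITD in
    microseconds; level_gain scales both rates with sound level."""
    rate_right = level_gain / (1 + math.exp(-slope * itd_us))
    rate_left = level_gain / (1 + math.exp(slope * itd_us))
    return rate_right - rate_left

loud = decoded_laterality(300, level_gain=1.0)
soft = decoded_laterality(300, level_gain=0.4)
assert loud > soft > 0      # same ITD decoded as less lateral when soft
assert decoded_laterality(0) == 0.0   # midline stays at midline
```

A labelled-line (place) code, by contrast, reads out which channel is most active rather than a rate difference, so uniform gain changes leave its decoded direction unchanged; that is the dissociation the behavioral experiments exploit.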