5 research outputs found

    Modeling Pitch Perception With an Active Auditory Model Extended by Octopus Cells

    Get PDF
    Pitch is an essential category of musical sensation. Models of pitch perception are still actively debated. Most rely on mathematical methods defined in the spectral or temporal domain. Our proposed pitch perception model consists of an active auditory model extended by octopus cells. The active auditory model is the one used in Stimulation based on Auditory Modeling (SAM), a successful cochlear implant sound processing strategy; it is extended here by modeling the functional behavior of the octopus cells in the ventral cochlear nucleus and their connections to the auditory nerve fibers (ANFs). The neurophysiological parameterization of the extended model is fully described in the time domain. The model is based on latency-phase encoding and decoding, since octopus cells act as latency-phase rectifiers in their local receptive fields. Pitch is represented throughout by cascaded firing sweeps of octopus cells. From the firing patterns of the octopus cells, inter-spike interval histograms can be aggregated, in which the location of the global maximum is assumed to encode the pitch.
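    The decoding rule stated in this abstract, pooling inter-spike intervals from the octopus-cell firing patterns and reading the pitch off the global maximum of the histogram, can be sketched in a few lines. The sketch below is not the published SAM-based model; the bin width, maximum interval, and the restriction to first-order ISIs are illustrative assumptions.

```python
import numpy as np

def pitch_from_isi_histogram(spike_trains, bin_width=0.1e-3, max_interval=20e-3):
    """Estimate pitch from a pooled inter-spike interval (ISI) histogram.

    spike_trains : list of 1-D arrays of spike times (seconds), one per cell.
    Returns a pitch estimate in Hz taken from the interval at the histogram's
    global maximum, or None if no usable intervals are found.
    """
    pooled = []
    for spikes in spike_trains:
        spikes = np.sort(np.asarray(spikes, dtype=float))
        pooled.append(np.diff(spikes))          # first-order ISIs per cell (assumption)
    intervals = np.concatenate(pooled) if pooled else np.array([])
    intervals = intervals[(intervals > 0) & (intervals <= max_interval)]
    if intervals.size == 0:
        return None

    edges = np.arange(0.0, max_interval + bin_width, bin_width)
    counts, edges = np.histogram(intervals, bins=edges)
    peak = np.argmax(counts)                    # place of the global maximum
    peak_interval = 0.5 * (edges[peak] + edges[peak + 1])
    return 1.0 / peak_interval                  # pitch in Hz
```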

    Modeling and MEG evidence of early consonance processing in auditory cortex

    Get PDF
    Pitch is a fundamental attribute of auditory perception. The interaction of concurrent pitches gives rise to a sensation that can be characterized by its degree of consonance or dissonance. In this work, we propose that human auditory cortex (AC) processes pitch and consonance through a common neural network mechanism operating at early cortical levels. First, we developed a new model of neural ensembles incorporating realistic neuronal and synaptic parameters to assess pitch processing mechanisms at early stages of AC. Next, we designed a magnetoencephalography (MEG) experiment to measure the neuromagnetic activity evoked by dyads with varying degrees of consonance or dissonance. MEG results show that dissonant dyads evoke a pitch onset response (POR) with a latency up to 36 ms longer than consonant dyads. Additionally, we used the model to predict the processing time of concurrent pitches; here, consonant pitch combinations were decoded faster than dissonant combinations, in line with the experimental observations. Specifically, we found a striking match between the predicted and the observed latency of the POR as elicited by the dyads. These novel results suggest that consonance processing starts early in human auditory cortex and may share the network mechanisms that are responsible for (single) pitch processing.
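    The abstract's cortical ensemble model is not described in enough detail here to reproduce, so the sketch below only illustrates the periodicity intuition commonly invoked for such latency predictions: a dyad with a simple (consonant) frequency ratio has a short common repetition period, which a periodicity-based decoder can resolve sooner than the long period of a dissonant dyad. The function name, the 220 Hz reference tone, and the "one common period" assumption are all hypothetical.

```python
from fractions import Fraction

def dyad_common_period_ms(f_low_hz, ratio):
    """Common repetition period (ms) of a two-tone dyad.

    f_low_hz : frequency of the lower tone in Hz.
    ratio    : frequency ratio upper/lower as a Fraction, e.g. Fraction(3, 2).
    The combined waveform repeats at the greatest common divisor of the two
    frequencies, so simpler (more consonant) ratios give shorter periods.
    """
    r = Fraction(ratio).limit_denominator(64)
    common_f_hz = f_low_hz / r.denominator      # GCD of f_low and f_low * p/q
    return 1000.0 / common_f_hz

# Illustrative comparison: a perfect fifth vs. a minor second above 220 Hz.
print(dyad_common_period_ms(220.0, Fraction(3, 2)))    # ~9.1 ms
print(dyad_common_period_ms(220.0, Fraction(16, 15)))  # ~68.2 ms
```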

    Neural coding of pitch cues in the auditory midbrain of unanesthetized rabbits

    Full text link
    Pitch is an important attribute of auditory perception that conveys key features in music and speech and helps listeners extract useful information from complex auditory environments. Although the psychophysics of pitch perception has been studied extensively for over a century, the underlying neural mechanisms are still poorly understood. This thesis examines pitch cues in the inferior colliculus (IC), the core processing center of the mammalian auditory midbrain that relays and transforms convergent inputs from peripheral brainstem nuclei to the auditory cortex. Previous studies have shown that the IC can encode low-frequency fluctuations in the stimulus envelope that are related to pitch, but most experiments were conducted in anesthetized animals using stimuli that evoked only weak pitch sensations, and they investigated only a limited frequency range. Here, we used single-neuron recordings from the IC of normal-hearing, unanesthetized rabbits in response to a comprehensive set of complex auditory stimuli to explore the role of the IC in the neural processing of pitch. We characterized three neural codes for pitch cues: a temporal code for the stimulus envelope repetition rate (ERR) below 900 Hz, a rate code for ERR between 60 and 1600 Hz, and a rate-place code for frequency components individually resolved by the cochlea, mainly available above 800 Hz. Whereas the temporal code and the rate-place code are inherited from the auditory periphery, the rate code for ERR has not previously been characterized in processing stages prior to the IC. To help interpret our experimental findings, we used computational models to show that the IC rate code for ERR likely arises via the temporal interaction of multiple synaptic inputs, so that the IC performs a temporal-to-rate code transformation from peripheral to cortical representations of pitch cues. We also show that the IC rate-place code is robust across a 40 dB range of sound levels and is likely strengthened by inhibitory synaptic inputs. Together, these three codes could provide neural substrates for the pitch of stimuli with various temporal and spectral compositions over the entire frequency range.
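    Temporal and rate codes of the kind characterized here are commonly quantified with standard spike-train metrics. The sketch below is not the thesis's analysis pipeline; it shows two widely used measures under simple assumptions: vector strength for the temporal code (synchronization to the ERR) and average firing rate for the rate code. A rate-place analysis would additionally compare rates across neurons with different characteristic frequencies, which is omitted here.

```python
import numpy as np

def vector_strength(spike_times_s, err_hz):
    """Phase locking of spikes to the envelope repetition rate (temporal code).

    Returns a value in [0, 1]; values near 1 indicate tight synchronization
    of the spike train to the ERR period.
    """
    spike_times_s = np.asarray(spike_times_s, dtype=float)
    if spike_times_s.size == 0:
        return 0.0
    phases = 2.0 * np.pi * err_hz * spike_times_s
    return float(np.abs(np.mean(np.exp(1j * phases))))

def mean_firing_rate(spike_times_s, duration_s):
    """Average firing rate in spikes/s over the stimulus (rate code)."""
    return len(spike_times_s) / duration_s
```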

    Biologically-Informed Computational Models of Harmonic Sound Detection and Identification

    Get PDF
    Harmonic sounds, or harmonic components of sounds, are often fused into a single percept by the auditory system. Although the exact neural mechanisms for harmonic sensitivity remain unclear, it presumably arises in the auditory cortex, because subcortical neurons typically prefer only a single frequency. Pitch-sensitive units and harmonic template units found in awake marmoset auditory cortex are sensitive to temporal and spectral periodicity, respectively. This thesis is a study of possible computational mechanisms underlying cortical harmonic selectivity. To examine whether harmonic selectivity is related to statistical regularities of natural sounds, simulated auditory nerve responses to natural sounds were used in principal component analysis, compared with independent component analysis, which yielded harmonic-sensitive model units with a population distribution similar to that of real cortical neurons in terms of harmonic selectivity metrics. This result suggests that the variability of cortical harmonic selectivity may provide an efficient population representation of natural sounds. Several network models of spectral selectivity mechanisms are investigated. As a side study, adding synaptic depletion to an integrate-and-fire model could explain the observed modulation-sensitive units, which are related to pitch-sensitive units but cannot account for precise temporal regularity. When a feed-forward network is trained to detect harmonics, the result is always a sieve, which is excited by integer multiples of the fundamental frequency and inhibited by half-integer multiples (see the sketch below). The sieve persists over a wide variety of conditions, including changing the evaluation criteria, incorporating Dale’s principle, and adding a hidden layer. A recurrent network trained by Hebbian learning produces harmonic selectivity by a novel dynamical mechanism that can be explained by a Lyapunov function favoring inputs that match the learned frequency correlations. These model neurons show sieve-like weights, like the harmonic template units, when probed with random harmonic stimuli, even though no sieve pattern appears anywhere in the network’s weights. Online stimulus design has the potential to facilitate future experiments on nonlinear sensory neurons. We accelerated the sound-from-texture algorithm to enable online adaptive experimental design that maximizes the activities of sparsely responding cortical units. We calculated the optimal stimuli for harmonic-selective units and investigated a model-based information-theoretic method for stimulus optimization.
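    The sieve pattern described in this abstract, excitation at integer multiples of the fundamental and inhibition at half-integer multiples, can be illustrated directly as a weight template applied to a magnitude spectrum. The sketch below is not the thesis's trained network; the frequency grid, tolerance, and number of harmonics are illustrative assumptions.

```python
import numpy as np

def harmonic_sieve_score(spectrum, freqs_hz, f0_hz, n_harmonics=10, tol_hz=10.0):
    """Score a magnitude spectrum with a harmonic 'sieve' for a candidate F0.

    Excitatory weights sit at integer multiples of f0_hz and inhibitory
    weights at half-integer multiples, mirroring the sieve pattern that the
    trained feed-forward networks converge to.
    """
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    weights = np.zeros_like(freqs_hz)
    for k in range(1, n_harmonics + 1):
        weights[np.abs(freqs_hz - k * f0_hz) <= tol_hz] += 1.0          # excitation
        weights[np.abs(freqs_hz - (k - 0.5) * f0_hz) <= tol_hz] -= 1.0  # inhibition
    return float(np.dot(weights, np.asarray(spectrum, dtype=float)))
```

    For a spectrum with components at 200, 400, and 600 Hz, the score is highest for a candidate F0 of 200 Hz and drops for 400 Hz, whose half-integer multiples (200 and 600 Hz) fall on inhibitory weights; this is the sieve behavior the abstract describes, shown here only as a toy calculation.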