86 research outputs found

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 1


    Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication

    A new theory of mammalian hearing is presented, which accounts for the auditory image formed in the midbrain (inferior colliculus) of objects in the listener's acoustical environment. It is shown that the ear is a temporal imaging system comprising three transformations of the envelope functions: cochlear group-delay dispersion, cochlear time lensing, and neural group-delay dispersion. These elements are analogous to the optical transformations in vision: diffraction between the object and the eye, spatial lensing by the lens, and a second diffraction between the lens and the retina. Unlike the eye, the human auditory system is shown to be naturally defocused, so that coherent stimuli are unaffected by the defocus, whereas completely incoherent stimuli are impacted by it and may be blurred by design. It is argued that the auditory system can use this differential focusing to enhance or degrade the images of real-world acoustical objects that are partially coherent. The theory is founded on coherence and temporal imaging theories adopted from optics. In addition to the imaging transformations, the corresponding inverse-domain modulation transfer functions are derived and interpreted with consideration of the nonuniform neural sampling operation of the auditory nerve. These ideas are used to rigorously introduce the concepts of sharpness and blur in auditory imaging, auditory aberrations, and auditory depth of field. In parallel, ideas from communication theory are used to show that the organ of Corti functions as a multichannel phase-locked loop (PLL) that constitutes the point of entry for auditory phase locking and hence conserves signal coherence. It provides an anchor for dual coherent and noncoherent auditory detection in the auditory brain that culminates in auditory accommodation. Implications for hearing impairments are discussed as well. Comment: 603 pages, 131 figures, 13 tables, 1570 references
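The multichannel PLL idea lends itself to a compact simulation. The sketch below is a minimal illustration rather than the treatise's actual model: a single second-order complex-baseband PLL locks onto a 440 Hz tone, and the loop's internal frequency settles on the input frequency. The sampling rate, loop gains, and initial detuning are all hypothetical.

```python
import numpy as np

# One "channel": a second-order complex-baseband PLL locking onto a 440 Hz tone
fs, f_in = 8000.0, 440.0
n = np.arange(int(0.2 * fs))
z = np.exp(1j * 2*np.pi * (f_in/fs) * n)    # analytic input tone

phase = 0.0
w = 2*np.pi * 430.0/fs                      # NCO starts 10 Hz below the input
kp, ki = 0.1, 0.02                          # proportional / integral loop gains
for s in z:
    err = np.angle(s * np.exp(-1j*phase))   # phase detector: residual angle
    w += ki * err                           # integral branch pulls the frequency
    phase += w + kp * err                   # NCO phase update

f_locked = w * fs / (2*np.pi)               # settles near the input frequency
```

Because the loop integrates the phase error into its frequency, it tracks a constant frequency offset with zero steady-state phase error, which is the sense in which such a loop "conserves" the coherence of its input.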

    Physiological mechanisms of hippocampal memory processing: experiments and applied adaptive filtering

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008. Includes bibliographical references (p. 144-156).
    The hippocampus is necessary for the formation and storage of episodic memory; however, the computations within and between hippocampal subregions (CA1, CA3, and dentate gyrus) that mediate these memory processing functions are not completely understood. We investigate this by recording in the hippocampal subregions as rats execute an augmented linear track task. From these recordings, we construct ensemble rate representations using a point process adaptive filter to characterize single-unit activity from each subregion. We compared the dynamics of these rate representations by computing the average maximum rate and average rate modulation during different experimental epochs and on different segments of the track. We found that the representations in CA3 were modulated most, compared to CA1 and DG, during the first 5 minutes of experience. With more experience, the average rate modulation decreased gradually across all areas and converged to values that were not statistically different. These results suggest a specialized role for CA3 during initial context acquisition, and suggest that rate modulation becomes coherent across hippocampal subregions after familiarization. Information transfer between the hippocampus and neocortex is important for the consolidation of spatial and episodic memory. This process of information transfer is referred to as memory consolidation and may be mediated by a phenomenon called "replay." We know that the process of replay is associated with a rise in multi-unit activity and the presence of ripples (100-250 Hz oscillations lasting from 75 ms to 100 ms) in CA1. Because ripples result from the same circuits as replay activity, the features of the ripple may allow us to deduce the mechanisms for replay induction and the nature of information transmitted during replay events.
    Because ripples are relatively short events, analytical methods with limited temporal-spectral resolution are unable to fully characterize their structure. In this thesis, we develop a framework for characterizing, classifying, and detecting ripples based on instantaneous frequency and instantaneous frequency modulation. The framework uses an autoregressive model for spectral-temporal analysis in combination with a Kalman filter for sample-to-sample estimates of frequency parameters. We show that the filter is flexible in the degree of smoothing as well as robust in the estimation of frequency. We demonstrate that under the proposed framework, ripples can be classified based on high or low frequency and positive or negative frequency modulation; that amplitude and frequency information can be combined for selective detection of ripple events; and that the framework can be used to determine the number of ripples participating in "long ripple" events.
    by David P. Nguyen. Ph.D.
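The autoregressive half of the framework can be illustrated in miniature. The sketch below is a static simplification of the sample-by-sample Kalman approach described above: it fits an AR(2) model to one synthetic 150 Hz "ripple" epoch via the Yule-Walker equations and reads the dominant frequency off the complex pole angle. The sampling rate, noise level, and epoch length are hypothetical.

```python
import numpy as np

fs = 1500.0
rng = np.random.default_rng(0)
t = np.arange(0, 0.1, 1/fs)                      # a 100 ms "ripple" epoch
x = np.sin(2*np.pi*150*t) + 0.05*rng.normal(size=t.size)

# Yule-Walker fit of an AR(2) model x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
r = np.array([x[:x.size - k] @ x[k:] / x.size for k in range(3)])
a = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], r[1:])

# The angle of the complex pole pair gives the dominant frequency
poles = np.roots([1.0, -a[0], -a[1]])
f_hat = abs(np.angle(poles)).max() * fs / (2*np.pi)
```

A Kalman filter, as in the thesis, would instead update the frequency parameters at every sample, yielding an instantaneous-frequency trajectory rather than this single per-epoch estimate.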

    The Analysis of Neural Heterogeneity Through Mathematical and Statistical Methods

    Diversity of intrinsic neural attributes and network connections is known to exist in many areas of the brain and is thought to significantly affect neural coding. Recent theoretical and experimental work has argued that in uncoupled networks, coding is most accurate at intermediate levels of heterogeneity. I explore this phenomenon through two distinct approaches: a theoretical mathematical modeling approach and a data-driven statistical modeling approach. Through the mathematical approach, I examine firing rate heterogeneity in a feedforward network of stochastic neural oscillators using a high-dimensional model. The firing rate heterogeneity stems from two sources: intrinsic (different individual cells) and network (different effects from presynaptic inputs). From a phase-reduced model, I derive asymptotic approximations of the firing rate statistics assuming weak noise and coupling, and qualitatively validate them with high-dimensional network simulations. My analytic calculations reveal how the interaction between intrinsic and network heterogeneity results in different firing rate distributions. Turning to the statistical approach, I examine data from in vivo recordings of neurons in the electrosensory system of weakly electric fish subject to the same realization of noisy stimuli. Using a generalized linear model (GLM) to encode stimuli into firing rate intensity, I assess the accuracy of Bayesian decoding of the stimulus from the spike trains of various networks. For a variety of fixed network sizes and metrics, I generally find that the optimal levels of heterogeneity are at intermediate values. Although a quadratic fit to decoding performance as a function of heterogeneity is statistically significant, the result is highly variable, with low R2 values. Taken together, intermediate levels of neural heterogeneity are indeed a prominent attribute of efficient coding, but the performance is highly variable.
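The encode-then-decode pipeline can be sketched with a toy heterogeneous population. In the sketch below, a hypothetical set of neurons with log-linear (GLM-style) tuning and varied gains emits Poisson counts in response to one stimulus, and the stimulus is recovered by maximum likelihood over a grid; none of the numbers come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heterogeneous population: log-linear (GLM-style) tuning with varied gains
n_neurons, s_true = 8, 0.7
gains = rng.uniform(0.5, 2.0, n_neurons)     # intrinsic heterogeneity
base = 5.0
rate = base * np.exp(gains * s_true)         # expected counts per window
counts = rng.poisson(rate)

# Decoding: maximize the summed Poisson log-likelihood over a stimulus grid
grid = np.linspace(-2.0, 2.0, 401)
lam = base * np.exp(np.outer(grid, gains))   # (stimulus value, neuron)
loglik = (counts * np.log(lam) - lam).sum(axis=1)
s_hat = grid[np.argmax(loglik)]
```

With a flat prior this maximum-likelihood estimate coincides with the Bayesian maximum a posteriori decode; varying the spread of `gains` is the knob that would probe the heterogeneity effects studied above.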

    Bayesian Modeling and Estimation Techniques for the Analysis of Neuroimaging Data

    Brain function is hallmarked by its adaptivity and robustness, arising from underlying neural activity that admits well-structured representations in the temporal, spatial, or spectral domains. While neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) can record rapid neural dynamics at high temporal resolution, they face several signal processing challenges that hinder their full utilization in capturing these characteristics of neural activity. The objective of this dissertation is to devise statistical modeling and estimation methodologies that account for the dynamic and structured representations of neural activity and to demonstrate their utility in application to experimentally recorded data. The first part of this dissertation concerns spectral analysis of neural data. In order to capture the non-stationarities involved in neural oscillations, we integrate multitaper spectral analysis and state-space modeling in a Bayesian estimation setting. We also present a multitaper spectral analysis method tailored for spike trains that captures the non-linearities involved in neuronal spiking. We apply our proposed algorithms to both EEG and spike recordings, which reveal significant gains in spectral resolution and noise reduction. In the second part, we investigate cortical encoding of speech as manifested in MEG responses. These responses are often modeled via a linear filter referred to as the temporal response function (TRF). While TRFs estimated from sensor-level MEG data have been widely studied, their cortical origins are not fully understood. We define the new notion of Neuro-Current Response Functions (NCRFs) for simultaneously determining the TRFs and their cortical distribution. We develop an efficient algorithm for NCRF estimation and apply it to MEG data, which provides new insights into the cortical dynamics underlying speech processing.
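The multitaper idea can be illustrated without the state-space machinery. The sketch below averages periodograms over sine tapers (a simple stand-in for the DPSS tapers typically used) to estimate the spectrum of a noisy 10 Hz oscillation; the sampling rate, record length, and taper count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, N = 250.0, 500
t = np.arange(N) / fs
x = np.sin(2*np.pi*10*t) + rng.normal(size=N)   # 10 Hz rhythm in white noise

# K sine tapers: w_k[n] = sqrt(2/(N+1)) * sin(pi*(k+1)*(n+1)/(N+1))
K = 5
n = np.arange(N)
tapers = [np.sqrt(2.0/(N+1)) * np.sin(np.pi*(k+1)*(n+1)/(N+1)) for k in range(K)]

# Multitaper estimate: average the K tapered periodograms
spec = np.mean([np.abs(np.fft.rfft(w * x))**2 for w in tapers], axis=0)
freqs = np.fft.rfftfreq(N, d=1/fs)
peak = freqs[np.argmax(spec)]                   # dominant spectral peak
```

Averaging over several near-orthogonal tapers trades a small amount of spectral resolution for a large reduction in the variance of the estimate, which is the property the Bayesian state-space extension described above builds on.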
Finally, in the third part, we consider the inference of Granger causal (GC) influences in high-dimensional time series models with sparse coupling. Using a canonical sparse bivariate autoregressive model, we define a new statistic for inferring GC influences, which we refer to as the LASSO-based Granger causal (LGC) statistic. We establish non-asymptotic guarantees for robust identification of GC influences via the LGC statistic. Applications to simulated and real data demonstrate the utility of the LGC statistic in robust GC identification.
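A regression-based variant of the GC statistic is easy to sketch. The example below uses ordinary least squares rather than the LASSO estimator the dissertation develops: it fits restricted and full autoregressions and compares their residual variances on a simulated bivariate system in which x drives y but not vice versa; all coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5*x[t-1] + rng.normal()
    y[t] = 0.5*y[t-1] + 0.4*x[t-1] + rng.normal()   # x -> y coupling

def gc_stat(target, driver):
    """Log ratio of restricted to full residual sum of squares."""
    Y = target[1:]
    X_r = target[:-1, None]                             # past of target only
    X_f = np.column_stack([target[:-1], driver[:-1]])   # plus past of driver
    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        return np.sum((Y - X @ beta)**2)
    return np.log(rss(X_r) / rss(X_f))

gc_xy = gc_stat(y, x)   # x -> y: clearly positive
gc_yx = gc_stat(x, y)   # y -> x: near zero
```

In high dimensions, the number of candidate regressors in the full model explodes, which is where replacing least squares with a LASSO fit, as in the LGC statistic, becomes essential.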

    Point process modeling and estimation: advances in the analysis of dynamic neural spiking data

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within that system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical system are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize rhythmic spiking activity using history-dependent structure, 2) to model population spiking activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients, with the goal of optimizing placement of deep brain stimulation electrodes.
We developed a decoding algorithm that can make decisions in real time (for example, whether or not to stimulate the neurons) based on the various sources of information present in population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate the behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, a step toward closed-loop therapies for psychiatric disorders using real-time neural stimulation. These methods are suitable for real-time implementation in content-based feedback experiments.
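The population-decoding step can be sketched with a toy model. In the sketch below, a hypothetical population of place-field-like neurons emits Poisson spike counts in one time bin, and position is decoded by maximizing the Poisson log-likelihood over a grid. This is a standard single-bin Bayesian decoder with a flat prior, not the thesis's full state-space algorithm, and every parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, dt = 30, 0.25
centers = np.linspace(0.0, 1.0, n_cells)

def tuning(pos):
    # Gaussian "place fields" on a background rate, in spikes/s
    return 2.0 + 40.0 * np.exp(-0.5 * ((pos - centers) / 0.08)**2)

true_pos = 0.63
counts = rng.poisson(tuning(true_pos) * dt)   # observed spike counts in one bin

# Bayesian decoding with a flat prior: maximize the Poisson log-likelihood
grid = np.linspace(0.0, 1.0, 201)
lam = np.stack([tuning(p) * dt for p in grid])        # (position, cell)
loglik = (counts * np.log(lam) - lam).sum(axis=1)
pos_hat = grid[np.argmax(loglik)]
```

Because each bin's decode needs only one pass over a precomputed rate table, this style of computation is compatible with the real-time, content-based feedback setting described above.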

    Computational Models of Representation and Plasticity in the Central Auditory System

    The performance of automated speech processing tasks such as speech recognition and speech activity detection degrades rapidly in challenging acoustic conditions. It is therefore necessary to engineer systems that extract meaningful information from sound while exhibiting invariance to background noise, different speakers, and other disruptive channel conditions. In this thesis, we take a biomimetic approach to these problems and explore computational strategies used by the central auditory system that underlie neural information extraction from sound. In the first part of this thesis, we explore coding strategies employed by the central auditory system that yield neural responses with desirable noise robustness. We specifically demonstrate that a coding strategy based on sustained neural firings yields richly structured spectro-temporal receptive fields (STRFs) that reflect the structure and diversity of natural sounds. The emergent receptive fields are comparable to known physiological neuronal properties and can be employed as a signal processing strategy to improve noise invariance in a speech recognition task. Next, we extend the model of sound encoding based on spectro-temporal receptive fields to incorporate the cognitive effects of selective attention. We propose a framework for modeling attention-driven plasticity that induces changes to receptive fields driven by task demands. We define a discriminative cost function whose optimization and solution reflect a biologically plausible strategy for STRF adaptation that helps listeners better attend to target sounds. Importantly, the adaptation patterns predicted by the framework correspond closely with known neurophysiological data. We next generalize the framework to act on the spectro-temporal dynamics of task-relevant stimuli and make predictions for tasks that have yet to be experimentally measured.
We argue that this generalization represents a form of object-based attention, which helps shed light on the current debate about auditory attentional mechanisms. Finally, we show how attention-modulated STRFs form a high-fidelity representation of the attended target, and we apply our results to obtain improvements in a speech activity detection task. Overall, the results of this thesis improve our general understanding of central auditory processing, and our computational frameworks can be used to guide further studies in animal models. Furthermore, our models inspire signal processing strategies that are useful for automated speech and sound processing tasks.
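The STRF encoding model itself can be sketched directly. Below, a hypothetical separable STRF (a Gaussian spectral profile times a damped oscillatory temporal kernel) is correlated with toy "spectrograms"; a stimulus in the filter's preferred frequency band evokes a far larger response than an off-band one. All dimensions and parameters are invented for illustration.

```python
import numpy as np

# Separable STRF: Gaussian spectral profile (peak at bin 16) times a damped
# oscillatory temporal kernel, applied to toy spectrograms
n_f, n_t, n_lag = 32, 200, 20
f = np.arange(n_f)
lag = np.arange(n_lag)
strf = np.outer(np.exp(-0.5*((f - 16)/3.0)**2),
                np.sin(2*np.pi*lag/n_lag) * np.exp(-lag/8.0))

def respond(spec):
    # Correlate each frequency channel with its temporal kernel, sum channels
    out = np.zeros(n_t - n_lag + 1)
    for i in range(n_f):
        out += np.convolve(spec[i], strf[i][::-1], mode='valid')
    return out

carrier = np.sin(2*np.pi*np.arange(n_t)/n_lag)
matched = np.zeros((n_f, n_t)); matched[16] = carrier   # in the preferred band
off = np.zeros((n_f, n_t)); off[4] = carrier            # far off-band
r_match = respond(matched).max()
r_off = respond(off).max()
```

Attention-driven plasticity, in the framework above, would amount to reshaping `strf` itself, for instance sharpening or shifting the spectral profile toward a task-relevant band.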