Functional identification of an antennal lobe DM4 projection neuron of the fruit fly
A rich set of genetic tools and extensive anatomical data make the olfactory system of the fruit fly a neural circuit of choice for studying function in sensory systems. Although a substantial amount of work has been published on the neural coding of olfactory sensory neurons (OSNs) of the fruit fly, little is known about how projection neurons (PNs) encode time-varying odor stimuli. Here we address this question with in vivo experiments coupled with a phenomenological characterization of the spiking activity of PNs. Recently, a new class of identification algorithms called Channel Identification Machines (CIMs) was proposed for identifying dendritic processing in simple neural circuits using conditional phase response curves (cPRCs). By combining cPRCs with the reduced project-integrate-and-fire (PIF) neuron model, the CIM algorithms identify a complete phenomenological description of spike generation of a biological neuron for weak to moderately strong stimuli. Moreover, the identification method employed requires neither white noise stimuli nor the infinitesimal pulse injection protocols widely used in the past. Here we identify PNs both in silico and in vivo. Starting with simulations, we investigate the feasibility of the CIM method on PNs modeled as pseudo-unipolar neurons in silico, as shown in Figures 1.(B) and 1.(C). We then systematically convert the CIM method into a step-by-step experimental protocol and carry it out in vivo by injecting currents into PNs using the patch clamp technique.
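The full CIM/cPRC identification machinery cannot be reproduced from the abstract alone, but the reduced project-integrate-and-fire (PIF) model it builds on can be sketched. The following minimal Python sketch is illustrative only; the kernel, time step and threshold values are assumptions, not values from the study.

```python
import numpy as np

def pif_encode(stimulus, kernel, dt, delta):
    """Sketch of a project-integrate-and-fire (PIF) neuron: the stimulus is
    projected through a linear kernel (standing in for dendritic processing),
    and the projected current is integrated until it crosses the threshold
    delta, at which point a spike time is recorded and the integrator resets,
    carrying over any remainder."""
    current = np.convolve(stimulus, kernel)[:len(stimulus)]  # projection stage
    spikes, acc = [], 0.0
    for k, c in enumerate(current):
        acc += c * dt                    # integrate the projected current
        if acc >= delta:
            spikes.append((k + 1) * dt)  # record the spike time
            acc -= delta                 # reset, keeping the remainder
    return spikes

# a constant positive stimulus through a trivial kernel yields regular spiking
spikes = pif_encode(np.ones(1000), np.array([1.0]), dt=1e-3, delta=0.1)
```

With a constant input the model reduces to an ideal integrate-and-fire neuron firing at a fixed rate; the CIM algorithms of the abstract instead probe such a model with structured test stimuli in order to recover the projection kernel itself.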
Identification of Dendritic Processing in Spiking Neural Circuits
A large body of experimental evidence points to sophisticated signal processing taking place at the level of dendritic trees and dendritic branches of neurons. This evidence suggests that, in addition to inferring the connectivity between neurons, identifying analog dendritic processing in individual cells is fundamentally important to understanding the underlying principles of neural computation. In this thesis, we develop a novel theoretical framework for the identification of dendritic processing directly from spike times produced by spiking neurons. The problem setting of spiking neurons is necessary since such neurons make up the majority of electrically excitable cells in most nervous systems, and it is often hard or even impossible to directly monitor the activity within dendrites. Thus, action potentials produced by neurons often constitute the only causal and observable correlate of dendritic processing. In order to remain true to the underlying biophysics of electrically excitable cells, we employ well-established mechanistic models of action potential generation to describe the nonlinear mapping of the aggregate current produced by the dendritic tree into an asynchronous sequence of spikes. Specific models of spike generation considered include conductance-based models such as Hodgkin-Huxley, Morris-Lecar and FitzHugh-Nagumo, as well as simpler models of the integrate-and-fire and threshold-and-fire type. The aggregate time-varying current driving the spike generator is taken to be produced by a dendritic stimulus processor, which is a nonlinear dynamical system capable of describing arbitrary linear and nonlinear transformations performed on one or more input stimuli. In the case of multiple stimuli, it can also describe the cross-coupling, or interaction, between various stimulus features.
The behavior of the dendritic stimulus processor is fully captured by one or more kernels, which provide a characterization of the signal processing that is consistent with the broader cable theory description of dendritic trees. We prove that the neural identification problem, stated in terms of identifying the kernels of the dendritic stimulus processor, is mathematically dual to the neural population encoding problem. Specifically, we show that the collection of spikes produced by a single neuron in multiple experimental trials can be treated as a single multidimensional spike train of a population of neurons encoding the parameters of the dendritic stimulus processor. Using the theory of sampling in reproducing kernel Hilbert spaces, we then derive precise results demonstrating that, during any experiment, the entire neural circuit is projected onto the space of input stimuli and that the parameters of this projection are faithfully encoded in the spike train. Spike times are shown to correspond to generalized samples, or measurements, of this projection in a system of coordinates that is not fixed but is both neuron- and stimulus-dependent. We examine the theoretical conditions under which it may be possible to reconstruct the dendritic stimulus processor from these samples and derive corresponding experimental conditions for the minimum number of spikes and stimuli that need to be used. We also provide explicit algorithms for reconstructing the kernel projection and demonstrate that, under natural conditions, this projection converges to the true kernel. The developed methodology is quite general and can be applied to a number of neural circuits. In particular, the methods discussed span all sensory modalities, including vision, audition and olfaction, in which external stimuli are typically continuous functions of time and space.
The results can also be applied to circuits in higher brain centers that receive multidimensional spike trains as input stimuli instead of continuous signals. In addition, the modularity of the approach allows one to extend it to mixed-signal circuits processing both continuous and spiking stimuli, to circuits with extensive lateral connections and feedback, as well as to multisensory circuits concurrently processing multiple stimuli of different dimensions, such as audio and video. Another important extension of the approach can be used to estimate the phase response curves of a neuron. All of the theoretical results are accompanied by detailed examples demonstrating the performance of the proposed identification algorithms. We employ both synthetic and naturalistic stimuli, such as natural video and audio, to highlight the power of the approach. Finally, we consider the implications of our work for problems pertaining to neural encoding and decoding and discuss promising directions for future research.
Neurokernel: An Open Source Platform for Emulating the Fruit Fly Brain
We have developed an open software platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and for their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel provides a programming model that capitalizes upon the structural organization of the fly brain into a fixed number of functional modules to distinguish between these modules’ local information processing capabilities and the connectivity patterns that link them. By defining mandatory communication interfaces that specify how data is transmitted between models of each of these modules regardless of their internal design, Neurokernel explicitly enables multiple researchers to collaboratively model the fruit fly’s entire brain through the integration of their independently developed models of its constituent processing units. We demonstrate the power of Neurokernel’s model integration by combining independently developed models of the retina and lamina neuropils in the fly’s visual system and by demonstrating their neuroinformation processing capability. We also illustrate Neurokernel’s ability to take advantage of direct GPU-to-GPU data transfers with benchmarks that demonstrate scaling of Neurokernel’s communication performance both over the number of interface ports exposed by an emulation’s constituent modules and over the total number of modules comprising an emulation.
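Neurokernel's actual API is not reproduced here. The following hypothetical Python sketch only illustrates the programming-model idea described above: functional-unit models with mandatory port interfaces whose outputs are routed between lockstep execution steps, regardless of each unit's internal design. All class, port and method names are invented for illustration.

```python
class Module:
    """Hypothetical functional unit: internals are opaque to the emulator,
    but every module must declare its ports and implement step()."""
    def __init__(self, name, in_ports, out_ports):
        self.name = name
        self.in_ports, self.out_ports = list(in_ports), list(out_ports)
        self.inbox = {p: 0.0 for p in in_ports}

    def step(self):
        raise NotImplementedError

class Retina(Module):
    """Toy photoreceptor stage emitting a ramp as a stand-in luminance signal."""
    def __init__(self):
        super().__init__('retina', [], ['R1'])
        self.t = 0

    def step(self):
        self.t += 1
        return {'R1': float(self.t)}

class Lamina(Module):
    """Toy lamina stage: sign-inverts its input, loosely mimicking LMC responses."""
    def __init__(self):
        super().__init__('lamina', ['R1'], ['L1'])

    def step(self):
        return {'L1': -self.inbox['R1']}

def emulate(modules, pattern, n_steps):
    """Run all modules in lockstep; after each step, route outputs along the
    connectivity pattern (src module, src port) -> (dst module, dst port)."""
    by_name = {m.name: m for m in modules}
    trace = []
    for _ in range(n_steps):
        outputs = {m.name: m.step() for m in modules}
        for (src, sp), (dst, dp) in pattern.items():
            by_name[dst].inbox[dp] = outputs[src][sp]
        trace.append(outputs)
    return trace

trace = emulate([Retina(), Lamina()], {('retina', 'R1'): ('lamina', 'R1')}, 3)
```

Because the emulator touches only ports, the two toy modules could be replaced by arbitrarily detailed models (running on different GPUs) without changing the integration code, which is the collaboration property the abstract emphasizes.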
Massively Parallel Spiking Neural Circuits: Encoding, Decoding and Functional Identification
This thesis presents a class of massively parallel spiking neural circuit architectures in which neurons are modeled by dendritic stimulus processors cascaded with spike generators. We investigate how visual stimuli can be represented by the spike times generated by the massively parallel neural circuits, how the spike times can be used to reconstruct and process visual stimuli, and the conditions under which visual stimuli can be faithfully represented and reconstructed. Functional identification of the massively parallel neural circuits from spike times and its evaluation are also investigated. Together, this thesis offers a comprehensive analytic framework for massively parallel spiking neural circuit architectures arising in the study of early visual systems.
In encoding, modeling of visual stimuli in reproducing kernel Hilbert spaces is presented, recognizing the importance of studying visual encoding in a rigorous mathematical framework. For massively parallel neural circuits with biophysical spike generators, I/O characterization of the spike generators becomes possible by introducing phase response curve manifolds. I/O characterization of the entire neural circuit can then be interpreted as generalized sampling in the Hilbert space. Multi-component dendritic stimulus processors are introduced to model visual encoding in stereoscopic color vision. It is also shown that encoding of visual stimuli by an ensemble of complex cells has the complexity of Volterra dendritic stimulus processors.
Based on the I/O characterization, reconstruction algorithms are derived to decode, from spike times, visual stimuli encoded by these massively parallel neural circuits. Decoding problems are first formulated as spline interpolation problems. Conditions for faithful reconstruction are presented, allowing the information content carried by the spikes to be probed. Algorithms are developed to qualify the decoding in massively parallel settings. For stereoscopic color visual stimuli, demixing of individual channels from an unlabeled set of spike trains is demonstrated. For encoding with complex cells, decoding problems are formulated as rank minimization problems. It is shown that the decoding algorithm does not suffer from the curse of dimensionality and thereby allows for a visual representation using biologically realistic neural resources.
The study of visual stimuli encoding and decoding enables the functional identification of massively parallel neural circuits. The duality between decoding and functional identification suggests that algorithms for functional identification of the projection of dendritic stimulus processors onto the space of input stimuli can be formulated similarly to the decoding algorithms. Functional identification of dendritic stimulus processors of neurons carrying stereoscopic color information as well as that of energy processing in complex cells is demonstrated. Furthermore, this duality also inspires a novel method to evaluate the quality of functional identification of massively parallel spiking neural circuits. By reconstructing novel stimuli using identified circuit parameters, the evaluation of the entire identified circuit is reduced to intuitive comparisons in stimulus space.
The use of biophysical spike generators advances a methodology for studying intrinsic noise sources in neurons and their effects on stimulus representation and on the precision of functional identification. These effects are investigated using a class of nonlinear neural circuits consisting of both feedforward and feedback Volterra dendritic stimulus processors and biophysical spike generators. It is shown that encoding with neural circuits with intrinsic noise sources can be interpreted as generalized sampling with noisy measurements. The effects of noise on decoding and functional identification are derived theoretically and systematically investigated in extensive simulations.
Finally, the massively parallel neural circuit architectures are shown to enable the implementation of identity-preserving transformations in the spike domain using a switching matrix that regulates the connection between encoding and decoding. Two realizations of the architectures are developed, and extensive examples using continuous visual streams are provided. Implications of this result for the problem of invariant object recognition in the spike domain are discussed.
Sparse identification of contrast gain control in the fruit fly photoreceptor and amacrine cell layer
The fruit fly’s natural visual environment is often characterized by light intensities ranging across several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here, we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
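The abstract does not spell out the DNP equations, so the toy Python sketch below assumes a simple spatial feedback pool standing in for the amacrine-cell path; it is meant only to illustrate how division by a pooled signal implements contrast gain control, not to reproduce the proposed model.

```python
def dnp_output(photoreceptor_inputs, sigma=1.0):
    """Toy divisive normalization: each photoreceptor's feedforward signal is
    divided by sigma plus a spatially pooled feedback term (a stand-in for
    the amacrine-cell contribution). All parameter choices are assumed."""
    pool = sum(photoreceptor_inputs) / len(photoreceptor_inputs)
    return [x / (sigma + pool) for x in photoreceptor_inputs]

dim = dnp_output([1.0, 2.0, 3.0])        # a dim scene
bright = dnp_output([10.0, 20.0, 30.0])  # same contrast pattern, 10x brighter
```

The relative (contrast) pattern across the three inputs is preserved exactly, while the 10-fold intensity increase is strongly compressed at the output, which is the gain-control signature the abstract formalizes for the full spatio-temporal DNP.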
Reconstruction, identification and implementation methods for spiking neural circuits
Integrate-and-fire (IF) neurons are time encoding machines (TEMs) that convert the amplitude of an analog signal into a non-uniform, strictly increasing sequence of spike times.
This thesis addresses three major issues in the field of computational neuroscience as well as neuromorphic engineering.
The first problem is concerned with the formulation of the encoding performed by an IF neuron. The encoding mechanism is described mathematically by the t-transform equation, whose standard formulation is given by the projection of the stimulus onto a set of input-dependent frame functions. As a consequence, the standard methods reconstruct the input of an IF neuron in a space spanned by a set of functions that depend on the stimulus. The process becomes computationally demanding when performing reconstruction from long sequences of spike times.
The issue is addressed in this work by developing a new framework in which the IF encoding process is formulated as a problem of uniform sampling on a set of input-independent time points. Based on this formulation, new algorithms are introduced for reconstructing the input of an IF neuron belonging to bandlimited as well as shift-invariant spaces. The algorithms are significantly faster, whilst providing a similar level of accuracy, compared to the standard reconstruction methods.
Another important issue calls for inferring mathematical models of sensory processing systems directly from input-output observations. This problem was previously addressed by performing identification of sensory circuits consisting of linear filters in series with ideal IF neurons, reformulating the identification problem as one of stimulus reconstruction. The result was extended to circuits in which the ideal IF neuron was replaced by more biophysically realistic models, under the additional assumptions that the spiking neuron parameters are known a priori, or that input-output measurements of the spiking neuron are available.
This thesis develops two new identification methodologies for [Nonlinear Filter]-[Ideal IF] and [Linear Filter]-[Leaky IF] circuits consisting of two steps: the estimation of the spiking neuron parameters and the identification of the filter. The methodologies are based on the reformulation of the circuit as a scaled filter in series with a modified spiking neuron.
The first methodology identifies an unknown [Nonlinear Filter]-[Ideal IF] circuit from input-output data. The scaled nonlinear filter is estimated using the NARMAX identification methodology for the reconstructed filter output.
The [Linear Filter]-[Leaky IF] circuit is identified with the second proposed methodology by first estimating the leaky IF parameters with arbitrary precision using specific stimulus sequences. The filter is subsequently identified using the NARMAX identification methodology.
The third problem addressed in this work concerns the need to develop neuromorphic engineering circuits that perform mathematical computations in the spike domain.
In this respect, this thesis develops a new representation relating the time-encoded input and output of a linear filter, where the TEM is represented by an ideal IF neuron. A new practical algorithm is developed based on this representation. The proposed algorithm is significantly faster than the alternative approach, which involves reconstructing the input, simulating the linear filter, and subsequently encoding the resulting output into a spike train.
An Open Pipeline for Generating Executable Neural Circuits from Fruit Fly Brain Data
Despite considerable progress in mapping the fly’s connectome and elucidating the patterns of information flow in its brain, the complexity of the fly brain’s structure and the still-incomplete state of knowledge regarding its neural circuitry pose significant challenges, beyond the computational resource requirements of current fly brain models, that must be addressed to successfully reverse engineer the information processing capabilities of the fly brain. These include the need to explicitly facilitate collaborative development of brain models by combining the efforts of multiple researchers, and the need to enable programmatic generation of brain models that effectively utilize the burgeoning amount of increasingly detailed publicly available fly connectome data.
This thesis presents an open pipeline for modular construction of executable models of the fruit fly brain from incomplete biological brain data that addresses both of the above requirements. This pipeline consists of two major open-source components respectively called Neurokernel and NeuroArch.
Neurokernel is a framework for collaborative construction of executable connectome-based fly brain models by integration of independently developed models of different functional units in the brain into a single emulation that can be executed upon multiple Graphics Processing Units (GPUs). Neurokernel enforces a programming model that enables functional unit models that comply with its interface requirements to communicate during execution regardless of their internal design. We demonstrate the power of this programming model by using it to integrate independently developed models of the fly retina and lamina into a single vision processing system. We also show how Neurokernel’s communication performance can scale over multiple GPUs, over the number of functional units in a brain emulation, and over the number of communication ports exposed by a functional unit model.
Although the increasing amount of experimentally obtained biological data regarding the fruit fly brain affords brain modelers a potentially valuable resource for model development, actually using this data to construct executable neural circuit models is currently challenging: the disparate nature of different data sources, the range of storage formats they use, and the limited query features of those formats complicate the process of inferring executable circuit designs from biological data. To overcome these limitations, we created a software package called NeuroArch that defines a data model for concurrent representation of both biological data and model structure, and of the relationships between them, within a single graph database. Coupled with a powerful interface for querying both types of data in a uniform, high-level manner, this representation enables construction and dispatching of executable neural circuits to Neurokernel for execution and evaluation.
We demonstrate the utility of the NeuroArch/Neurokernel pipeline by using the packages to generate an executable model of the central complex of the fruit fly brain from both published and hypothetical data regarding overlapping neuron arborizations in different regions of the central complex neuropils. We also show how the pipeline empowers circuit model designers to devise computational analogues of biological experiments, such as parallel concurrent recording from multiple neurons and emulation of genetic mutations that alter the fly’s neural circuitry.
29th Annual Computational Neuroscience Meeting: CNS*2020
Meeting abstracts
This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests.
Virtual | 18-22 July 202
Channel Identification Machines
We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover a projection of the filter(s) onto a space of input signals loss-free, for both scalar and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.
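The idea can be sketched numerically, with all parameter values assumed for illustration: the test stimulus lives in a low-order trigonometric-polynomial space, the channel is a first-order low-pass filter, and the [filter]-[ideal IF] cascade's interspike intervals provide linear measurements from which the projection of the filter output onto the input space is recovered by least squares.

```python
import numpy as np

dt, M = 1e-4, 2
t = np.arange(0.0, 1.0, dt)
w = 2 * np.pi                                       # fundamental frequency (1 s period)
a, b = np.array([0.6, 0.3]), np.array([0.2, 0.4])   # known test-stimulus coefficients
u = sum(a[m-1]*np.cos(m*w*t) + b[m-1]*np.sin(m*w*t) for m in range(1, M+1))

# "unknown" channel: first-order low-pass, H(jW) = 1/(1 + jW*tau); its output
# for a trigonometric-polynomial input is computed in closed form
tau = 0.05
c = a - 1j * b                                      # u(t) = sum_m Re{c_m e^{j m w t}}
v = sum((c[m-1] / (1 + 1j*m*w*tau) * np.exp(1j*m*w*t)).real for m in range(1, M+1))

# asynchronous sampler: ideal IF neuron encoding bias + filter output
bias, delta = 1.5, 0.03
spikes, acc = [], 0.0
for k, vk in enumerate(v):
    acc += (bias + vk) * dt
    if acc >= delta:
        spikes.append((k + 1) * dt)
        acc -= delta
s = np.array(spikes)

# each interspike interval gives one linear measurement of v's coefficients:
# integral of v over [s_k, s_{k+1}] = delta - bias * (s_{k+1} - s_k)
rows, rhs = [], []
for s1, s2 in zip(s[:-1], s[1:]):
    row = [s2 - s1]                                              # DC basis term
    for m in range(1, M + 1):
        row.append((np.sin(m*w*s2) - np.sin(m*w*s1)) / (m*w))    # cos(m w t) term
        row.append((np.cos(m*w*s1) - np.cos(m*w*s2)) / (m*w))    # sin(m w t) term
    rows.append(row)
    rhs.append(delta - bias * (s2 - s1))
coef = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]

# reconstruct the identified projection of the filter output
v_hat = coef[0] + sum(coef[2*m-1]*np.cos(m*w*t) + coef[2*m]*np.sin(m*w*t)
                      for m in range(1, M + 1))
```

Dividing the recovered coefficients by the known stimulus coefficients would yield the channel's frequency response on the test-signal space; as the abstract notes, enlarging the bandwidth and order of that space makes the recovered projection converge to the filter's impulse response.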