Linking Visual Cortical Development to Visual Perception
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
Disentangling Sub-Millisecond Processes within an Auditory Transduction Chain
Every sensation begins with the conversion of a sensory stimulus into the response of a receptor neuron. Typically, this involves a sequence of multiple biophysical processes that cannot all be monitored directly. In this work, we present an approach that is based on analyzing different stimuli that cause the same final output, here defined as the probability that the receptor neuron fires a single action potential. Comparing such iso-response stimuli within the framework of nonlinear cascade models allows us to extract the characteristics of individual signal-processing steps with a temporal resolution much finer than the trial-to-trial variability of the measured output spike times. Applied to insect auditory receptor cells, the technique reveals the sub-millisecond dynamics of the eardrum vibration and of the electrical potential and yields a quantitative four-step cascade model. The model accounts for the tuning properties of this class of neurons and explains their high temporal resolution under natural stimulation. Owing to its simplicity and generality, the presented method is readily applicable to other nonlinear cascades and a large variety of signal-processing systems.
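The iso-response logic can be illustrated with a toy two-step cascade. Everything here is an assumption for illustration, not the paper's fitted model: a linear stage with exponential memory (time constant `tau`) followed by a sigmoidal spike-probability nonlinearity. Fixing the output probability and solving for the stimulus parameter that attains it traces out an iso-response curve.

```python
import numpy as np

# Toy cascade (illustrative, not the fitted model): a unit pulse followed,
# after `delay`, by a second pulse of amplitude `a`. The linear stage sums
# the second pulse with the exponentially decayed first pulse; the nonlinear
# stage maps the summed drive to a firing probability.
def firing_prob(a, delay, tau=2.0):
    drive = np.exp(-delay / tau) + a                 # decayed pulse 1 + pulse 2
    return 1.0 / (1.0 + np.exp(-(drive - 1.5)))      # sigmoidal nonlinearity

def iso_response_amplitude(delay, target=0.5, tol=1e-6):
    """Bisect on the second pulse's amplitude until the output probability
    matches the target -- yielding an iso-response stimulus."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if firing_prob(mid, delay) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Longer delays let pulse 1 decay further, so pulse 2 must grow to
# compensate; the shape of this trade-off reveals the linear stage's memory.
amps = [iso_response_amplitude(d) for d in (0.5, 2.0, 8.0)]
```

Comparing stimuli that all land on the same output in this way sidesteps the need to observe the intermediate stages directly, which is the core of the iso-response method.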
The influence of statistical context on the neural representation of sound
Models of stimulus-response functions have been used for decades in an
attempt to understand the complex relationship between a sensory stimulus
and the neural response that it elicits. A popular model for characterising
auditory function is the spectrotemporal receptive field (STRF), originally
due to Aertsen and Johannesma (1980); Aertsen et al. (1980, 1981).
However, the STRF model predicts auditory cortical responses to complex
sounds very poorly, presumably because the model is linear in the stimulus
spectrogram and thus incapable of capturing spectrotemporal nonlinearities
in auditory responses.
Ahrens et al. (2008a) introduced a multilinear framework, which captures
neuron-specific nonlinear effects of stimulus context on spiking responses
to complex sounds. In such a framework, contextual effects are interpreted
as nonlinear stimulus interactions that modulate the input to a subsequent
STRF-like linear filter. We derive various extensions to this framework, and
demonstrate that the nonlinear effects of stimulus context are largely inseparable,
and fundamentally different for near-simultaneous and delayed
(non-simultaneous) sound energy. In two populations of neurons, recorded
from the mouse auditory cortex and thalamus, we show that simultaneous
sound energy provides a nonlinear positive (amplifying) gain to the
subsequent linear filter, while non-simultaneous sound energy provides a
negative (dampening) gain. We demonstrate that this structure is largely
responsible for providing a significant increase in the predictive capabilities
of the model.
Using this framework, we show that nonlinear context dependence differs
between cortical fields, consistent with previous studies (Linden et al.,
2003). Furthermore, we illustrate how such a model can be used to probe
the nonlinear mechanisms that underlie the ability of the auditory system
to operate in diverse acoustic environments. These results provide a novel
extension to the study of receptive fields in multiple brain areas, and extend
existing understanding of the way in which stimulus context drives
complex auditory responses.
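The linear STRF stage on which the multilinear framework builds can be sketched as follows. The filter, spectrogram, and all sizes below are illustrative placeholders, not fitted to data:

```python
import numpy as np

# Minimal sketch of a linear STRF prediction: the predicted response at
# time t is the inner product of the filter with the preceding window of
# the stimulus spectrogram.
rng = np.random.default_rng(0)
n_freq, n_hist, n_time = 8, 10, 200
spectrogram = rng.random((n_freq, n_time))       # stimulus: frequency x time
strf = rng.standard_normal((n_freq, n_hist))     # filter: frequency x time lag

def strf_predict(spec, w):
    n_f, n_h = w.shape
    pred = np.zeros(spec.shape[1])
    for t in range(n_h, spec.shape[1]):
        pred[t] = np.sum(w * spec[:, t - n_h:t])  # filter . history window
    return pred

prediction = strf_predict(spectrogram, strf)

# In the multilinear (contextual) extension, each spectrogram bin would
# first be scaled by a gain that depends on nearby sound energy before
# entering this linear stage, capturing the amplifying/dampening effects
# described above.
```

The purely linear version shown here is exactly the model whose poor predictive power motivates the contextual extensions.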
Identification of Dendritic Processing in Spiking Neural Circuits
A large body of experimental evidence points to sophisticated signal processing taking place at the level of dendritic trees and dendritic branches of neurons. This evidence suggests that, in addition to inferring the connectivity between neurons, identifying analog dendritic processing in individual cells is fundamentally important to understanding the underlying principles of neural computation. In this thesis, we develop a novel theoretical framework for the identification of dendritic processing directly from spike times produced by spiking neurons. The problem setting of spiking neurons is necessary since such neurons make up the majority of electrically excitable cells in most nervous systems and it is often hard or even impossible to directly monitor the activity within dendrites. Thus, action potentials produced by neurons often constitute the only causal and observable correlate of dendritic processing. In order to remain true to the underlying biophysics of electrically excitable cells, we employ well-established mechanistic models of action potential generation to describe the nonlinear mapping of the aggregate current produced by the tree into an asynchronous sequence of spikes. Specific models of spike generation considered include conductance-based models such as Hodgkin-Huxley, Morris-Lecar, and FitzHugh-Nagumo, as well as simpler models of the integrate-and-fire and threshold-and-fire type. The aggregate time-varying current driving the spike generator is taken to be produced by a dendritic stimulus processor, which is a nonlinear dynamical system capable of describing arbitrary linear and nonlinear transformations performed on one or more input stimuli. In the case of multiple stimuli, it can also describe the cross-coupling, or interaction, between various stimulus features.
The behavior of the dendritic stimulus processor is fully captured by one or more kernels, which provide a characterization of the signal processing that is consistent with the broader cable theory description of dendritic trees. We prove that the neural identification problem, stated in terms of identifying the kernels of the dendritic stimulus processor, is mathematically dual to the neural population encoding problem. Specifically, we show that the collection of spikes produced by a single neuron in multiple experimental trials can be treated as a single multidimensional spike train of a population of neurons encoding the parameters of the dendritic stimulus processor. Using the theory of sampling in reproducing kernel Hilbert spaces, we then derive precise results demonstrating that, during any experiment, the entire neural circuit is projected onto the space of input stimuli and parameters of this projection are faithfully encoded in the spike train. Spike times are shown to correspond to generalized samples, or measurements, of this projection in a system of coordinates that is not fixed but is both neuron- and stimulus-dependent. We examine the theoretical conditions under which it may be possible to reconstruct the dendritic stimulus processor from these samples and derive corresponding experimental conditions for the minimum number of spikes and stimuli that need to be used. We also provide explicit algorithms for reconstructing the kernel projection and demonstrate that, under natural conditions, this projection converges to the true kernel. The developed methodology is quite general and can be applied to a number of neural circuits. In particular, the methods discussed span all sensory modalities, including vision, audition and olfaction, in which external stimuli are typically continuous functions of time and space. 
The results can also be applied to circuits in higher brain centers that receive multi-dimensional spike trains as input stimuli instead of continuous signals. In addition, the modularity of the approach allows one to extend it to mixed-signal circuits processing both continuous and spiking stimuli, to circuits with extensive lateral connections and feedback, as well as to multisensory circuits concurrently processing multiple stimuli of different dimensions, such as audio and video. Another important extension of the approach can be used to estimate the phase response curves of a neuron. All of the theoretical results are accompanied by detailed examples demonstrating the performance of the proposed identification algorithms. We employ both synthetic and naturalistic stimuli, such as natural video and audio, to highlight the power of the approach. Finally, we consider the implications of our work for problems pertaining to neural encoding and decoding and discuss promising directions for future research.
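The core observation, that spike times are generalized samples of the unobserved dendritic output, can be sketched with an ideal integrate-and-fire spike generator (a simpler stand-in for the conductance-based models treated in the thesis; the current waveform and parameters below are illustrative):

```python
import numpy as np

# Ideal integrate-and-fire: integrate the input current and emit a spike
# (with reset) each time the integral crosses the threshold.
def integrate_and_fire(current, dt, threshold):
    spike_idx, v = [], 0.0
    for i, c in enumerate(current):
        v += c * dt                  # integrate the input current
        if v >= threshold:           # spike and reset
            spike_idx.append(i)
            v = 0.0
    return spike_idx

dt, threshold = 1e-4, 1e-3
t = np.arange(0.0, 0.5, dt)
current = 0.01 * (1.0 + 0.5 * np.sin(2 * np.pi * 10 * t))  # hidden dendritic drive
spikes = integrate_and_fire(current, dt, threshold)

# The "t-transform": the integral of the (unobserved) current over each
# interspike interval is pinned to the threshold, up to one integration
# step -- so each interval yields one measurement of the hidden signal.
for a, b in zip(spikes[:-1], spikes[1:]):
    q = current[a + 1:b + 1].sum() * dt
    assert 0.0 <= q - threshold < current.max() * dt
```

Collecting such measurements across trials and stimuli is what allows the dendritic stimulus processor's kernels to be reconstructed.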
Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision
To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
Retina-V1 model of detectability across the visual field
A practical model is proposed for predicting the detectability of targets at arbitrary locations in the visual field, in arbitrary gray-scale backgrounds, and under photopic viewing conditions. The major factors incorporated into the model include: (i) the optical point spread function of the eye, (ii) local luminance gain control (Weber's law), (iii) the sampling array of retinal ganglion cells, (iv) orientation- and spatial-frequency-dependent contrast masking, (v) broadband contrast masking, and (vi) efficient response pooling. The model is tested against previously reported threshold measurements on uniform backgrounds (the ModelFest data set and data from Foley et al. 2007), and against new measurements reported here for several ModelFest targets presented on uniform, 1/f noise, and natural backgrounds, at retinal eccentricities ranging from 0 to 10 deg. Although the model has few free parameters, it is able to account quite well for all the threshold measurements.
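Factor (ii), local luminance gain control, can be sketched in a few lines. The patch values and the choice of pooling region below are illustrative, not the model's actual front end:

```python
import numpy as np

# Weber's-law gain control: responses are driven by local contrast,
# i.e. luminance deviations divided by the local mean luminance, so
# scaling the overall light level leaves the response unchanged.
def weber_contrast(patch):
    mean = patch.mean()              # local luminance estimate
    return (patch - mean) / mean     # dimensionless contrast

patch = np.array([[100.0, 120.0],
                  [80.0, 100.0]])    # hypothetical luminance values

# Doubling the light level doubles both signal and local mean, so the
# contrast representation is invariant:
assert np.allclose(weber_contrast(patch), weber_contrast(2.0 * patch))
```

This luminance invariance is what lets a single set of model parameters span backgrounds of very different mean brightness.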
Single-unit studies of visual motion processing in cat extrastriate areas
Motion vision has high survival value and is a fundamental property of all
visual systems. The ancient Greeks already studied motion vision, but its
physiological basis first came under scrutiny in the late nineteenth
century. Later, with the introduction of single-cell (single-unit) recordings
around 1950, the cellular basis of motion perception could be explored. It
became clear that the mammalian visual brain consists of specialized
regions for processing different kinds of visual information. The existence of
specialized motion pathways from the retina through several subcortical and
cortical areas is nowadays indisputable. The primary visual area is the first
cortical stage in the mammalian brain where direction-selective neurons
have been found. However, this area is a major relay station for virtually all
other visual attributes as well. More specialized motion processing occurs
subsequently in the so-called extrastriate brain areas, about which little is
known as yet.
In this thesis, I study so-called complex cells in two extrastriate areas of the
cat that are involved in motion processing, area 18 and PMLS. Complex cells
are able to couple 'corresponding' elements in subsequent images (to solve
the correspondence problem), which implies a basic role in motion detection.
In PMLS I find that complex cells possess quite elaborate receptive field
structures, which suggests that they also play a role in the analysis of higher
order motion information. I therefore examine the basic spatial and temporal
motion processing properties of complex cells as well as their higher order
temporal interactions and compare results for the two extrastriate cortical
areas. Processing of motion information by the analyzed cells proves to occur
in parallel in PMLS and area 18 (chapter 2). In contrast to complex cells in
PMLS, those in area 18 favor non-smooth motion (chapter 3). The second
order temporal interactions differ markedly for cells in the two areas (chapter
4). In addition, area 18 complex cells prove to be velocity tuned (chapter 1),
with a sharp tuning for step-size and a broad tuning for step-delays. The
similarities and differences of cell responses in area 18 and PMLS are
discussed in detail, together with the general significance of these findings for
motion information processing in the cat.
Artificial Neural Networks for Nonlinear System Identification of Neuronal Microcircuits
This thesis explores the application of artificial neural networks (ANNs) to nonlinear system identification. We use neuronal microcircuits in the retina as a testbed for our technique, which relies upon the marriage of partial anatomical information with large electrophysiological datasets. Unlike typical applications of machine learning, our primary goal is not to predict the output of retinal circuits, but rather to uncover their structure. We begin with a theoretical exploration in a toy problem and provide a proof of unique identifiability under a specific set of conditions. We then perform empirical simulations in a number of different circuit architectures and explore the space of constraints and regularizers to demonstrate that this technique is feasible in a hyperparametric regime that lends itself well to neuroscience datasets. We then apply the technique to mouse retinal datasets and show that we can both recover known biological information and discover new hypotheses for biological exploration. We end with an exploration of active stimulus design algorithms to distinguish between circuit hypotheses.
Concentration Coding in the Accessory Olfactory System
Understanding how sensory systems encode stimuli is a fundamental question of neuroscience. The role of every sensory system is to encode information about the identity and quantity of stimuli in the environment. Primary sensory neurons in the periphery are faced with the task of representing all relevant information for further processing by downstream circuits, ultimately leading to detection, classification and potential response. However, environmental variability potentially alters stimulus properties in non-relevant ways. Here, we address these problems using the mouse accessory olfactory system (AOS) as a model. The AOS is an independent olfactory system possessed by most terrestrial vertebrates, although not humans, and is specialized to detect social cues. It mediates behaviors such as reproduction, aggression, and individual identification. Non-volatile compounds found in urine, including sulfated steroids, are the main source of AOS stimuli. Vomeronasal sensory neurons (VSNs), the primary sensory neurons of the AOS, are located in the base of the nasal cavity, and they detect the identity and quantity of stimuli. However, like other sensory cues, urine is subject to environmental modulation through mechanisms such as evaporation and dilution that affect the concentrations of ligands in non-biologically relevant ways. Ideally, the AOS represents stimuli in ways that are stable across conditions.
In the scope of this thesis, I explore how the AOS represents concentration at the levels of the individual neuron, the circuit and the whole animal. Using extracellular recordings of explanted tissue, we characterized how VSNs encode stimuli. VSNs fired predominantly in trains of action potentials with similar structure during spontaneous and stimulus-driven activity. Using pharmacological and genetic tools, we demonstrated that the signal transduction cascade influences the structure of both spontaneous and stimulus-driven activity. Then, we explored the representation of concentration of sulfated steroids by VSNs and the circuit mechanisms by which the AOS can represent concentration information in a manner invariant to environmental uncertainties. We identified ratio-coding as a means for stable concentration representation. The ratio of the concentrations of non-volatile ligands found in urine will not change following urine evaporation or dilution, while the individual concentrations will. This property allows for both insensitivity to changes in absolute concentration and sensitivity to changes in relative concentration. Using extracellular recording and computational modeling, we have demonstrated that VSN activity can be used to robustly encode concentration using ratios. Finally, we attempted to develop a novel behavioral assay to investigate how mice detect AOS stimuli.
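The dilution-invariance of ratio coding can be shown in a few lines. The ligand concentrations and dilution factor below are hypothetical numbers for illustration only:

```python
import numpy as np

# Ratio coding: dilution (or evaporation) scales all non-volatile ligand
# concentrations by the same factor, so relative concentrations are
# unchanged while absolute levels are not.
ligands = np.array([10.0, 2.5, 40.0])   # hypothetical ligand concentrations
diluted = 0.2 * ligands                 # the same urine after 5x dilution

ratios = ligands / ligands.sum()        # relative concentrations
ratios_diluted = diluted / diluted.sum()

assert np.allclose(ratios, ratios_diluted)   # ratios are dilution-invariant
assert not np.allclose(ligands, diluted)     # absolute levels are not
```

A downstream circuit reading out these ratios is therefore insensitive to absolute concentration while remaining sensitive to changes in stimulus composition.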
Characterization of response properties in the mouse lateral geniculate nucleus
The lateral geniculate nucleus (LGN) has been increasingly recognized to actively regulate information transmission to primary visual cortex (V1). Although efforts have been devoted to studying its morphological and functional features, the full array of response characteristics in mouse LGN, as well as their dependence on the subject's state, has remained relatively unexplored.
To address this question, we recorded from mouse LGN with multisite electrode arrays (MEAs). From a dataset of 185 single units, our results revealed several exceptional response features in mouse LGN. We also demonstrated that subtypes, such as ON-/OFF-centre and transient/sustained cells, exhibited functionally distinctive features, which might indicate parallel projections. To further compare response features across the full extent of mouse LGN, we developed a three-dimensional (3D) LGN volume through a histological approach. This volume explicitly captures morphological features of mouse LGN and provides the precision needed to classify the location of single neurons as anterior, middle, or posterior LGN. Based on this categorization, we showed that response features were not regionally restricted within mouse LGN.
We further examined neural activity with subjects in high or low isoflurane states. The distinct features in LFPs between the two states indicated that adjusting isoflurane concentration could provide a reliable and controllable experimental model for exploring state-dependent neural activity in the mouse visual system. Subsequently, our results demonstrated that properties including response latency, contrast sensitivity, and spatial frequency tuning were modulated by isoflurane concentration.
Our current work suggests that mouse LGN can dynamically regulate information transmission to the cortex using numerous mechanisms, including response mode and the modulation of neuronal responses according to the subjects' state.