Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive expressions relating the change in gain with
respect to both mean and variance to the receptive fields obtained from
reverse correlation on a white-noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
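As a concrete illustration of the linear/nonlinear cascade described above, the following minimal sketch filters a white-noise stimulus with a receptive field and passes the result through a sigmoidal gain curve. The filter shape, gain-curve form, and all parameter values are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative receptive field: a biphasic temporal filter (assumed shape).
t = np.arange(40)
rf = np.exp(-t / 5.0) * np.sin(2 * np.pi * t / 20.0)
rf /= np.linalg.norm(rf)

def gain_curve(g, threshold=1.0, slope=4.0):
    """Sigmoidal gain curve mapping filtered stimulus to firing probability."""
    return 1.0 / (1.0 + np.exp(-slope * (g - threshold)))

# White-noise stimulus with a given variance (the quantity the cell adapts to).
sigma = 2.0
stimulus = rng.normal(0.0, sigma, size=10_000)

# Linear stage: project the stimulus history onto the receptive field.
g = np.convolve(stimulus, rf, mode="valid")

# Nonlinear stage: per-bin spike probability, then Bernoulli spikes.
p_spike = gain_curve(g)
spikes = rng.random(p_spike.size) < p_spike
```

Changing `sigma` shifts the distribution of the filtered signal relative to the gain curve, which is the sense in which the empirical gain curve sampled from such a model depends on stimulus variance.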
A simple model for low variability in neural spike trains
Neural noise sets a limit to information transmission in sensory systems. In
several areas, the spiking response (to a repeated stimulus) has shown a higher
degree of regularity than predicted by a Poisson process. However, a simple
model to explain this low variability is still lacking. Here we introduce a new
model, with a correction to Poisson statistics, which can accurately predict
the regularity of neural spike trains in response to a repeated stimulus. The
model has only two parameters, but can reproduce the observed variability in
retinal recordings in various conditions. We show analytically why this
approximation can work. In a model of the spike-emitting process where a
refractory period is assumed, we show that our simple correction approximates
the spike train statistics well over a broad range of firing rates. Our model
can easily be plugged into stimulus-processing models, such as the
linear-nonlinear model or its generalizations, to replace the commonly assumed
Poisson spike train hypothesis. It estimates the amount of information
transmitted much more accurately than Poisson models in retinal recordings.
Thanks to its simplicity, this model has the potential to explain low
variability in other areas.
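As an illustration of the mechanism invoked above (the paper's specific two-parameter correction is not reproduced here), the following simulation shows how an absolute refractory period pushes spike-count variability, measured by the Fano factor, below the Poisson limit. Rates, durations, and the refractory period are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_counts(rate_hz, refractory_s, trials=1000, duration_s=1.0, dt=1e-3):
    """Spike counts per trial from a Bernoulli spiking process with an
    absolute refractory period (refractory_s = 0 recovers near-Poisson)."""
    n_bins = int(duration_s / dt)
    ref_bins = int(refractory_s / dt)
    counts = np.zeros(trials)
    for tr in range(trials):
        last_spike = -ref_bins - 1
        for b in range(n_bins):
            if b - last_spike > ref_bins and rng.random() < rate_hz * dt:
                counts[tr] += 1
                last_spike = b
    return counts

poisson_like = spike_counts(50.0, refractory_s=0.0)
refractory = spike_counts(50.0, refractory_s=0.010)

# Fano factor (count variance / count mean): ~1 for a Poisson process,
# well below 1 once a refractory period regularizes the spike train.
fano_poisson = poisson_like.var() / poisson_like.mean()
fano_refractory = refractory.var() / refractory.mean()
```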
The Neural Mechanisms Underlying Visual Target Search
The task of finding specific objects and switching between targets is ubiquitous in everyday life. Searching for a particular object requires our brains to activate and maintain a representation of the target (working memory), identify each encountered object (object recognition), and determine whether the currently viewed object matches the sought target (decision making). The comparison of working memory and visual information is thought to happen via feedback of target information from higher-order brain areas to the ventral visual pathway. However, exactly what these areas represent and how they implement this comparison remain unknown. To investigate these questions, we employed a combined approach involving electrophysiology experiments and computational modeling. In particular, we recorded neural responses in inferotemporal (IT) and perirhinal (PRH) cortex as monkeys performed a visual target search task, and we adopted population-based read-outs to measure the amount and format of information contained in these neural populations. In Chapter 2 we report that the total amount of target match information was matched in IT and PRH, but that this information was contained in a more explicit (i.e. linearly separable) format in PRH. These results suggest that PRH implements an untangling computation to reformat its inputs from IT. Consistent with this hypothesis, a simple linear-nonlinear model was sufficient to capture the transformation between the two areas. In Chapter 3, we report that the untangling computation in PRH takes time to evolve. While this type of dynamic reformatting is normally attributed to complex recurrent circuits, here we demonstrated that this phenomenon could be accounted for by the same instantaneous linear-nonlinear model presented in Chapter 2. This counterintuitive finding was due to the existence of non-stationarities in the IT neural representation.
Finally, in Chapter 4 we give a complete description of a novel set of methods that we developed and applied in Chapters 2 and 3 to quantify the task-specific signals contained in the heterogeneous neural responses in IT and PRH, and to relate these signals to measures of task performance. Together, this body of work revealed a previously unknown untangling computation in PRH during visual search, and demonstrated that a feed-forward linear-nonlinear model is sufficient to describe this computation.
Stimulus-dependent maximum entropy models of neural population codes
Neural populations encode information about their stimulus in a collective
fashion, by joint activity patterns of spiking and silence. A full account of
this mapping from stimulus to neural activity is given by the conditional
probability distribution over neural codewords given the sensory input. To be
able to infer a model for this distribution from large-scale neural recordings,
we introduce a stimulus-dependent maximum entropy (SDME) model---a minimal
extension of the canonical linear-nonlinear model of a single neuron, to a
pairwise-coupled neural population. The model is able to capture the
single-cell response properties as well as the correlations in neural spiking
due to shared stimulus and due to effective neuron-to-neuron connections. Here
we show that in a population of 100 retinal ganglion cells in the salamander
retina responding to temporal white-noise stimuli, dependencies between cells
play an important encoding role. As a result, the SDME model gives a more
accurate account of single cell responses and in particular outperforms
uncoupled models in reproducing the distributions of codewords emitted in
response to a stimulus. We show how the SDME model, in conjunction with static
maximum entropy models of population vocabulary, can be used to estimate
information-theoretic quantities like surprise and information transmission in
a neural population.
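The SDME construction can be sketched for a toy population small enough to enumerate every codeword: the conditional distribution over binary patterns is exponential in a sum of stimulus-dependent single-cell fields and pairwise couplings. The fields h_i(s) and couplings J_ij below are random placeholders, not fitted values.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

n = 5  # small population, so all 2^n codewords can be enumerated

# h_i(s): per-cell drive from the stimulus at this time bin (placeholder),
# J_ij: effective pairwise couplings, upper triangle only (placeholder).
h = rng.normal(size=n)
J = np.triu(rng.normal(scale=0.3, size=(n, n)), k=1)

def energy(sigma):
    """Negative log-probability (up to a constant) of binary codeword sigma."""
    sigma = np.asarray(sigma)
    return -(h @ sigma + sigma @ J @ sigma)

# Brute-force normalization over all spiking/silence patterns.
codewords = np.array(list(product([0, 1], repeat=n)))
logw = np.array([-energy(c) for c in codewords])
p = np.exp(logw - logw.max())
p /= p.sum()          # P(sigma | s) over all 2^n patterns
```

For real populations of ~100 cells the partition function cannot be enumerated this way, which is why fitting such models relies on approximate inference rather than the brute force shown here.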
Retinal adaptation to spatial correlations
The classical center-surround retinal ganglion cell receptive field is thought to remove the strong spatial correlations in natural scenes, enabling efficient use of limited bandwidth. While early studies with drifting gratings reported robust surrounds (Enroth-Cugell and Robson, 1966), recent measurements with white noise reveal weak surrounds (Chichilnisky and Kalmar, 2002). This might be evidence for dynamical weakening of the retinal surround in response to decreased spatial correlations, which would be predicted by efficient coding theory. Such adaptation is reported in LGN (Lesica et al., 2007), but whether the retina also adapts to correlations is unknown. 

We tested for adaptation by recording simultaneously from ~40 ganglion cells on a multi-electrode array while presenting white and exponentially correlated checkerboards and strips. From ~200 cells responding to 90 minutes each of white and correlated stimuli, we were able to extract precise spatiotemporal receptive fields (STRFs). We found that a difference-of-Gaussians was not a good fit, and that the surround was generally displaced from the center. Thus, to assess surround strength, we identified the center and surround regions and computed the total weight on the pixels in each region. The relative surround strength was then defined as the ratio of surround weight to center weight. Surprisingly, we found that the majority of recorded cells have a stronger surround under white noise than under correlated noise (p<.05), contrary to the naive expectation from theory. This conclusion was robust to different methods of extracting STRFs and persisted with both checkerboard and strip stimuli.
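The surround-strength measure defined above can be sketched as follows. The sign-based region-finding convention used here (center pixels share the sign of the strongest pixel, surround pixels carry the opposite sign) is an assumption; the abstract does not give the exact procedure.

```python
import numpy as np

def relative_surround_strength(strf_spatial):
    """Ratio of total surround weight to total center weight.

    Assumed convention: center = pixels with the sign of the peak pixel,
    surround = pixels with the opposite sign.
    """
    peak_sign = np.sign(strf_spatial.flat[np.abs(strf_spatial).argmax()])
    center_weight = strf_spatial[np.sign(strf_spatial) == peak_sign].sum()
    surround_weight = strf_spatial[np.sign(strf_spatial) == -peak_sign].sum()
    return abs(surround_weight) / abs(center_weight)

# Toy spatial map: narrow positive center, broad weak negative surround.
x = np.linspace(-3, 3, 41)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
toy_strf = np.exp(-r2 / 0.5) - 0.2 * np.exp(-r2 / 2.0)
ratio = relative_surround_strength(toy_strf)
```

A ratio near 0 means essentially no surround; a ratio near 1 means the surround fully balances the center, which is the regime relevant for decorrelation.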

To test, without assuming a model, whether the retina decorrelates stimuli, we also measured the pairwise correlations between spike trains of simultaneously recorded neurons under three conditions: white checkerboard, exponentially correlated noise, and scale-free noise. The typical amount of pairwise correlation increased with the extent of input correlation, in line with our STRF measurements.
Extending the Occupancy Grid Concept for Low-Cost Sensor Based SLAM
The simultaneous localization and mapping problem is approached by using an ultrasound sensor and wheel encoders. To account for the low precision inherent in ultrasound sensors, the occupancy grid notion is extended. The extension takes into account the angle at which the sensor is pointing, to compensate for the fact that an object is not necessarily detectable from all positions, owing to how ultrasonic range sensors work. In addition, a mixed linear/nonlinear model is derived for future use in Rao-Blackwellized particle smoothing.
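The angle-dependent extension described above can be sketched as an occupancy grid that accumulates log-odds evidence separately per sensor-heading bin, so that a cell can look occupied from one direction and unknown from another. Bin count and update weights are illustrative choices, not the thesis's.

```python
import numpy as np

class AngleAwareOccupancyGrid:
    """Occupancy grid storing log-odds evidence per viewing-angle bin.

    Sketch of the idea: an object may only be detectable from some
    directions, so evidence is kept separate for each heading bin.
    """

    def __init__(self, width, height, n_angle_bins=8):
        self.logodds = np.zeros((height, width, n_angle_bins))
        self.n_bins = n_angle_bins

    def _bin(self, heading_rad):
        frac = (heading_rad % (2 * np.pi)) / (2 * np.pi)
        return int(frac * self.n_bins) % self.n_bins

    def update(self, x, y, heading_rad, occupied, weight=0.4):
        """Accumulate evidence for cell (x, y) as seen from this heading."""
        b = self._bin(heading_rad)
        self.logodds[y, x, b] += weight if occupied else -weight

    def occupancy(self, x, y, heading_rad):
        """Probability the cell is occupied, as seen from this heading."""
        l = self.logodds[y, x, self._bin(heading_rad)]
        return 1.0 / (1.0 + np.exp(-l))
```

A query from a heading the cell has never been observed from returns 0.5, i.e. "unknown", rather than inheriting evidence gathered from other directions.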
Adaptive Filtering Enhances Information Transmission in Visual Cortex
Sensory neuroscience seeks to understand how the brain encodes natural
environments. However, neural coding has largely been studied using simplified
stimuli. In order to assess whether the brain's coding strategy depends on the
stimulus ensemble, we apply a new information-theoretic method that allows
unbiased calculation of neural filters (receptive fields) from responses to
natural scenes or other complex signals with strong multipoint correlations. In
the cat primary visual cortex we compare responses to natural inputs with those
to noise inputs matched for luminance and contrast. We find that neural filters
adaptively change with the input ensemble so as to increase the information
carried by the neural response about the filtered stimulus. Adaptation affects
the spatial frequency composition of the filter, enhancing sensitivity to
under-represented frequencies in agreement with optimal encoding arguments.
Adaptation occurs over 40 s to many minutes, longer than most previously
reported forms of adaptation.
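For contrast with the method described above, a standard related construction is the covariance-corrected spike-triggered average, which removes estimation bias arising from second-order stimulus correlations only. This is not the paper's information-theoretic method, which also handles the multipoint correlations of natural scenes; the filter, stimulus statistics, and neuron model below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Correlated Gaussian stimulus (second-order correlations only).
n, dim = 20_000, 12
corr = np.fromfunction(lambda i, j: 0.7 ** abs(i - j), (dim, dim))
stim = rng.normal(size=(n, dim)) @ np.linalg.cholesky(corr).T

# Illustrative biphasic, localized filter driving a rectified LN neuron.
true_filter = np.zeros(dim)
true_filter[3], true_filter[4] = 1.0, -0.5
rate = np.maximum(stim @ true_filter, 0.0)
spikes = rng.poisson(rate)

# The raw spike-triggered average is biased toward the stimulus
# correlations; dividing out the stimulus covariance removes that bias.
sta = spikes @ stim / spikes.sum()
whitened = np.linalg.solve(np.cov(stim.T), sta)

def cosine(a, b):
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

For Gaussian stimuli this correction recovers the true filter; for natural scenes it does not, which motivates the information-theoretic approach used in the paper.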
Forecasting financial markets using linear, nonlinear & model combination methods
In this thesis we investigate the question of asset price predictability. The two major themes that we focus on are, first, whether machine learning and statistical modelling techniques, which impose less restrictive assumptions on asset price dynamics than classical linear methods do, can be used to forecast and trade financial markets to a degree greater than that which traditional asset pricing models would lead us to expect; and second, to what extent model combination/ensemble strategies can add value in this pursuit. The approaches used include support vector regression (SVR), k-nearest neighbours (KNN), trading rules, linear regression (LR) and the random subspace ensemble method.
We investigate these two themes using inherently data-driven models across datasets of sufficient size to render statistically meaningful results in three self-contained contexts. The first piece of empirical work compares the relative forecasting performance of SVR, KNN and LR models when applied to predicting daily returns of 58 UK stocks in the FTSE 100 over 4000 days. Bootstrap simulations are used to shed further statistical light on model performance.
Secondly, we investigate the extent to which model combinations can improve forecasting performance, using the random subspace ensemble method to construct ensembles of linear regression models that predict the returns of a portfolio of FTSE 100 stocks. The primary ensemble consists of 62500 component models, each estimated by randomly sampling a subset of the feature set, with the final result combined via a majority vote.
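The random subspace procedure described above can be sketched as follows: each component model is an ordinary least squares regression on a random subset of the features, and the ensemble forecast is a majority vote on the sign of the predicted return. The model count, subspace size, and toy data here are illustrative, far smaller than the thesis's 62500-model ensemble over FTSE 100 features.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_subspace_vote(X_train, y_train, X_test, n_models=200, subspace=5):
    """Random subspace ensemble of linear regressions with a sign vote."""
    n_features = X_train.shape[1]
    votes = np.zeros(X_test.shape[0])
    for _ in range(n_models):
        cols = rng.choice(n_features, size=subspace, replace=False)
        beta, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
        votes += np.sign(X_test[:, cols] @ beta)
    return np.sign(votes)

# Toy data: returns driven by a few of many candidate features.
X = rng.normal(size=(500, 20))
true_beta = np.zeros(20)
true_beta[:3] = [0.5, -0.3, 0.2]
y = X @ true_beta + rng.normal(scale=0.5, size=500)

pred = random_subspace_vote(X[:400], y[:400], X[400:])
accuracy = np.mean(pred == np.sign(y[400:]))
```

Models whose random subset misses the informative features vote roughly at random, so their votes cancel and the informative minority carries the ensemble decision.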
Lastly, we conduct an in-depth study of the channel break-out trading rule over a portfolio of 37 futures markets. We borrow a page from modern portfolio theory: it is the performance of individual markets in the context of a portfolio that is ultimately of interest, rather than their performance in isolation. This approach is rarely used in the literature, but it sheds more light on the question of trading rule efficacy. Bootstrap resampling is employed to derive robust performance statistics. Our results show the Sharpe ratio of the portfolio to be three times greater than that of the individual markets, as a result of diversification, in addition to being greater than that of the S&P 500 benchmark.
We did not set out to refute the weak form of Fama's (1970) classic taxonomy of information sets or, colloquially, "to beat the market"; nonetheless, some of our results suggest economically significant returns.