Invariance of Angular Threshold Computation in a Wide-Field Looming-Sensitive Neuron
The lobula giant motion detector (LGMD) is a wide-field bilateral visual interneuron in North American locusts that acts as an angular threshold detector during the approach of a solid square along a trajectory perpendicular to the long axis of the animal (Gabbiani et al., 1999a). We investigated the dependence of this angular threshold computation on several stimulus parameters that alter the spatial and temporal activation patterns of inputs onto the dendritic tree of the LGMD, across three locust species. The same angular threshold computation was implemented by the LGMD in all three species. The angular threshold computation was invariant to changes in target shape (from solid squares to solid discs) and to changes in target texture (checkerboard and concentric patterns). Finally, the angular threshold computation did not depend on object approach angle, over at least 135° in the horizontal plane. A two-dimensional model of the responses of the LGMD based on linear summation of motion-related excitatory and size-dependent inhibitory inputs successfully reproduced the experimental results for squares and discs approaching perpendicular to the long axis of the animal. Linear summation, however, was unable to account for invariance to object texture or approach angle. These results indicate that the LGMD is a reliable neuron with which to study the biophysical mechanisms underlying the generation of complex but invariant visual responses by dendritic integration. They also suggest that invariance arises in part from non-linear integration of excitatory inputs within the dendritic tree of the LGMD.
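The angular-threshold property described above can be illustrated with the η-function form of this model family (firing rate proportional to angular velocity multiplied by an exponential of angular size). The sketch below is a minimal illustration, not the paper's implementation: the parameter values (α, the approach conditions l/|v|) are assumed for demonstration. For an object of half-size l approaching at constant speed v, the model's output peaks when the angular size crosses the fixed threshold 2·arctan(1/α), regardless of l/|v| — the invariance tested experimentally.

```python
import numpy as np

def angular_size(tau, l_over_v):
    """Full angular size theta (rad) of an object of half-size l approaching
    at speed v, at time-to-collision tau > 0; depends only on l/|v| (s)."""
    return 2.0 * np.arctan(l_over_v / tau)

def eta(tau, l_over_v, alpha=5.0):
    """Model output: angular velocity times exp(-alpha * angular size)."""
    theta_dot = 2.0 * l_over_v / (tau**2 + l_over_v**2)  # d(theta)/dt
    return theta_dot * np.exp(-alpha * angular_size(tau, l_over_v))

alpha = 5.0
tau = np.linspace(1e-4, 2.0, 200000)        # time to collision (s)
for l_over_v in (0.01, 0.04, 0.12):         # three illustrative approaches
    tau_peak = tau[np.argmax(eta(tau, l_over_v, alpha))]
    theta_at_peak = angular_size(tau_peak, l_over_v)
    # each prints ~22.6, i.e. degrees(2 * arctan(1 / alpha)), for all l/|v|
    print(round(np.degrees(theta_at_peak), 1))
```

Analytically, the peak occurs at tau = alpha * (l/|v|), so the angular size at the peak is constant across approach conditions while the peak *time* scales with l/|v|.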
Human visual object categorization can be described by models with low memory capacity
Studies of high-level models of visual object categorization have left unresolved issues of neurobiological relevance, including how features are extracted from the image and the role played by memory capacity in categorization performance. We compared the ability of a comprehensive set of models to match the categorization performance of human observers while explicitly accounting for the models' numbers of free parameters. The most successful models did not require a large memory capacity, suggesting that a sparse, abstracted representation of category properties may underlie categorization performance. This type of representation, different from classical prototype abstraction, could also be extracted directly from two-dimensional images via a biologically plausible early-vision model, rather than relying on experimenter-imposed features.
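Comparing models while "explicitly accounting for the models' numbers of free parameters" can be illustrated with a standard penalized-likelihood criterion. The sketch below uses the Akaike Information Criterion as one common choice; the abstract does not specify the paper's actual metric, and the model names and fit values here are entirely hypothetical, chosen only to show how a parameter penalty can reverse a raw goodness-of-fit ranking.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: (model name, max log-likelihood on human choices, k)
fits = [
    ("exemplar (high memory)", -410.0, 40),  # best raw fit, many parameters
    ("prototype",              -445.0, 6),
    ("sparse abstraction",     -420.0, 8),   # slightly worse fit, far fewer k
]
best = min(fits, key=lambda m: aic(m[1], m[2]))
print(best[0])  # prints "sparse abstraction": the penalty favors low capacity
```

The point of the illustration: once the parameter count enters the comparison, a low-memory-capacity model can outrank a high-capacity model that fits the raw choice data marginally better.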
Stimulus Encoding and Feature Extraction by Multiple Sensory Neurons
Neighboring cells in topographical sensory maps may transmit similar information to the next higher level of processing. How information transmission by groups of nearby neurons compares with the performance of single cells is an important question for understanding the functioning of the nervous system. To tackle this problem, we quantified stimulus-encoding and feature extraction performance by pairs of simultaneously recorded electrosensory pyramidal cells in the hindbrain of weakly electric fish. These cells constitute the output neurons of the first central nervous stage of electrosensory processing. Using random amplitude modulations (RAMs) of a mimic of the fish’s own electric field within behaviorally relevant frequency bands, we found that pyramidal cells with overlapping receptive fields exhibit strong stimulus-induced correlations. To quantify the encoding of the RAM time course, we estimated the stimuli from simultaneously recorded spike trains and found significant improvements over single spike trains. The quality of stimulus reconstruction, however, was still inferior to the one measured for single primary sensory afferents. In an analysis of feature extraction, we found that spikes of pyramidal cell pairs coinciding within a time window of a few milliseconds performed significantly better at detecting upstrokes and downstrokes of the stimulus compared with isolated spikes and even spike bursts of single cells. Coincident spikes can thus be considered “distributed bursts.” Our results suggest that stimulus encoding by primary sensory afferents is transformed into feature extraction at the next processing stage. There, stimulus-induced coincident activity can improve the extraction of behaviorally relevant features from the stimulus.
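The first step of the pair analysis above, selecting spikes of one cell that coincide with spikes of the other within a window of a few milliseconds, can be sketched as follows. This is a minimal illustration, not the paper's analysis code: the function name, the 5 ms window, and the spike times are all assumed for demonstration.

```python
import numpy as np

def coincident_spikes(train_a, train_b, window=0.005):
    """Return spikes of train_a (times in s) that fall within +/- window of
    any spike in train_b -- a simple 'distributed burst' detector.
    Assumes train_b is non-empty."""
    train_b = np.asarray(train_b)
    return [t for t in train_a
            if np.min(np.abs(train_b - t)) <= window]

a = [0.010, 0.052, 0.120, 0.300]   # cell A spike times (s), illustrative
b = [0.012, 0.119, 0.200]          # cell B spike times (s), illustrative
print(coincident_spikes(a, b))     # prints [0.01, 0.12]
```

In the study's framework, the events this selects would then be compared against isolated spikes and single-cell bursts for their ability to signal stimulus upstrokes and downstrokes.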