The Attentional Routing Circuit: Receptive Field Modulation Through Nonlinear Dendritic Interactions
We present a model of attentional routing called the Attentional Routing Circuit (ARC) that extends an existing model of spiking neurons with dendritic nonlinearities. Specifically, we employ the Poirazi et al. (2003) pyramidal neuron in a population coding framework. ARC demonstrates that these dendritic nonlinearities can be exploited to perform selective routing, reducing the number of cells needed by a factor of roughly five compared with a linear dendrite model.

Routing of attended information occurs through the modulation of feedforward visual signals by a cortical control signal that specifies the location and size of the attended target. The model is fully specified at the level of spiking single cells. Our approach differs from past work on shifter circuits in having more efficient control and a more biologically detailed substrate. It differs from existing models that use gain fields by providing precise hypotheses about how the control signals are generated and distributed in a hierarchical model of spiking neurons. Further, the model accounts for numerous experimental findings regarding the timing, strength, and extent of attentional modulation in ventral stream areas, and for the perceived contrast enhancement of attended stimuli.

To further demonstrate the plausibility of ARC, it is applied to the attention experiments of Womelsdorf et al. (2008) and tested in detail. In these simulations the model has only two free parameters that influence its ability to match the experimental data, and we show that, without fitting, it accounts for the experimentally observed changes in receptive field (RF) gain and position with attention in macaques. In sum, the model provides an explanation of RF modulation as well as testable predictions about nonlinear cortical dendrites and attentional changes of receptive field properties.
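The core routing idea can be seen in a toy form. The sketch below is my own simplification, not the ARC circuit itself: a control signal specifying an attended location multiplicatively modulates feedforward responses, which shifts the effective receptive field peak toward the attended target, qualitatively like the RF shifts reported by Womelsdorf et al. All parameter values are made up for illustration.

```python
# A toy illustration (not the ARC model): multiplicative attentional gain
# applied to a feedforward receptive field shifts the effective RF peak
# toward the attended location. All parameters here are invented.
import numpy as np

positions = np.linspace(-10, 10, 201)            # visual field (deg)
rf = np.exp(-(positions - 2.0) ** 2 / 8.0)       # feedforward RF centered at +2 deg

def effective_rf_peak(attended_center, width=2.0):
    gain = np.exp(-(positions - attended_center) ** 2 / (2 * width ** 2))
    modulated = rf * (1.0 + gain)                # attention multiplies the response
    return positions[np.argmax(modulated)]

print(positions[np.argmax(rf)])                  # baseline peak, near +2 deg
print(effective_rf_peak(-2.0))                   # peak shifts toward -2 deg
```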
Python Scripting in the Nengo Simulator
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
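As a concrete illustration of the scripting workflow, here is a minimal sketch using the current Nengo Python package. Its API differs in detail from the Java-era script interface described above, so treat this as indicative rather than exact: an NEF network is defined declaratively, and the simulator solves for the synaptic weights that compute a chosen function (here, squaring).

```python
# Minimal NEF-style model in the current Nengo Python package (pip install nengo).
# The Java-era scripting interface described above differs in detail.
import numpy as np
import nengo

with nengo.Network(label="squaring") as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # time-varying input
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # population of spiking neurons
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    # The NEF solves for connection weights that compute x**2 between populations.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)                 # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] holds the decoded estimate of sin(2*pi*t)**2
```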
A neural representation of continuous space using fractional binding
We present a novel method for constructing neurally implemented spatial representations that we show to be useful for building models of spatial cognition. This method represents continuous (i.e., real-valued) spaces using neurons, and identifies a set of operations for manipulating these representations. Specifically, we use “fractional binding” to construct “spatial semantic pointers” (SSPs) that we use to generate and manipulate representations of spatial maps encoding the positions of objects. We show how these representations can be transformed to answer queries about the locations and identities of objects, move the relative or global positions of items, and answer queries about regions of space, among other things. We demonstrate that the neural implementation of SSPs in spiking networks has accuracy and capacity similar to the mathematical ideal.
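The fractional binding operation itself is compact: a fixed base vector with unit-magnitude Fourier coefficients is raised to a real-valued power in the Fourier domain, and binding is circular convolution. The sketch below follows that standard construction; the dimensionality, seed, and example coordinates are arbitrary choices of mine.

```python
# Fractional binding sketch: an SSP encodes a continuous location (x, y) as
# X**x bound with Y**y, where ** is a Fourier-domain fractional power and
# binding is circular convolution. Parameters here are illustrative.
import numpy as np

def unitary_vector(d, rng):
    """Random real vector whose DFT coefficients all have unit magnitude."""
    fft = np.exp(1j * rng.uniform(-np.pi, np.pi, size=d))
    fft[0] = 1.0
    if d % 2 == 0:
        fft[d // 2] = 1.0
    for k in range(1, (d + 1) // 2):       # enforce conjugate symmetry
        fft[d - k] = np.conj(fft[k])
    return np.fft.ifft(fft).real

def power(v, exponent):
    """Fractional binding: raise v to a real-valued power via the DFT."""
    return np.fft.ifft(np.fft.fft(v) ** exponent).real

def bind(a, b):
    """Binding = circular convolution = elementwise product of DFTs."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

d, rng = 512, np.random.default_rng(seed=0)
X, Y = unitary_vector(d, rng), unitary_vector(d, rng)
ssp = bind(power(X, 1.3), power(Y, -0.7))  # SSP for location (1.3, -0.7)

# Similarity against probe locations peaks at the encoded x-coordinate:
for x in (0.0, 1.3, 3.0):
    print(x, round(float(np.dot(ssp, bind(power(X, x), power(Y, -0.7)))), 3))
```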
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to survival, yet they are implemented by non-ideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms: redundancy and synchronization. In particular, we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error that might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. A range of situations in which the mechanisms we model arise in brain science are discussed, and we draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight.
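A small simulation makes the trade-off concrete. The sketch below is my own illustrative setup in the spirit of the abstract, not the paper's model: N redundant copies of a noisy gradient flow on a quadratic loss are diffusively coupled through the Laplacian of a complete graph, and stronger coupling visibly synchronizes the copies despite independent noise.

```python
# Illustrative only: redundant noisy gradient systems coupled through a graph
# Laplacian. The loss, graph, and parameters are my own toy choices.
import numpy as np

def complete_graph_laplacian(n):
    A = np.ones((n, n)) - np.eye(n)        # complete-graph adjacency
    return np.diag(A.sum(axis=1)) - A      # L = D - A

def simulate(coupling, n=10, noise=0.5, steps=5000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    L = complete_graph_laplacian(n)
    x = rng.normal(size=n)                 # each copy's estimate of the optimum (0)
    for _ in range(steps):                 # Euler-Maruyama integration
        grad = x                           # gradient of f(x) = x**2 / 2
        drift = -grad - coupling * (L @ x)
        x = x + dt * drift + np.sqrt(dt) * noise * rng.normal(size=n)
    return x

# Spread across the redundant copies shrinks as coupling strength grows:
print(simulate(coupling=0.0).std(), simulate(coupling=20.0).std())
```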
Towards a Cognitively Realistic Representation of Word Associations
The ability to associate words is an important cognitive skill. In this study we investigate different methods for representing word associations in the brain, using the Remote Associates Test (RAT) as a task. We explore representations derived from free association norms and statistical n-gram data. Although n-gram representations yield better performance on the test, a closer match with human performance is obtained with representations derived from free associations. We propose that word association strengths derived from free associations play an important role in the process of RAT solving. Furthermore, we show that this model can be implemented in spiking neurons, and estimate the number of biologically realistic neurons that would suffice for an accurate representation.
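The association-based solving step reduces to a score-and-argmax over candidate words. The sketch below uses invented association strengths for one classic RAT item, not the free-association norms used in the study.

```python
# Toy RAT solver: sum each candidate's association strength with the three
# cue words and pick the argmax. Strengths below are invented for illustration.
assoc = {
    "cottage": {"cheese": 0.50, "house": 0.30},
    "swiss":   {"cheese": 0.45, "alps": 0.40},
    "cake":    {"cheese": 0.25, "birthday": 0.55},
}

def solve_rat(cues, assoc):
    scores = {}
    for cue in cues:
        for word, strength in assoc.get(cue, {}).items():
            scores[word] = scores.get(word, 0.0) + strength
    return max(scores, key=scores.get)

print(solve_rat(["cottage", "swiss", "cake"], assoc))  # -> "cheese"
```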
How metaphysical commitments shape the study of psychological mechanisms
The study of psychological mechanisms is an interdisciplinary endeavour, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this article, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting, when studying mechanisms, deliberate metaphysical positions that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, to determine which variables to manipulate under experimental conditions, and to decide which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since such commitments can easily lead them astray if invoked carelessly.
Fine-Tuning and the Stability of Recurrent Neural Networks
A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility, so it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule achieves stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune many kinds of stable neural systems.
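The fine-tuning problem the abstract refers to can be seen in a one-line linear model. This is my own toy illustration, not the paper's learning rule: a recurrent integrator holds its state only when its feedback weight is almost exactly one, so small mistuning makes the memory decay or diverge.

```python
# Toy illustration of the fine-tuning problem (not the paper's tuning rule):
# a linear recurrent integrator dx/dt = (w - 1) * x / tau holds its state
# only when the feedback weight w is tuned almost exactly to 1.
def run(w, x0=1.0, tau=0.1, T=5.0, dt=1e-3):
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (w - 1.0) * x / tau
    return x

for w in (0.98, 1.00, 1.02):
    print(f"w = {w:.2f}: x(5 s) = {run(w):.3f}")
# w = 0.98 decays toward 0, w = 1.02 grows without bound; only w = 1.00
# holds the memory, which is why a plausible tuning rule is needed.
```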