A Search For Principles of Basal Ganglia Function
The basal ganglia are a group of subcortical nuclei that contain about 100
million neurons in humans. Different modes of basal ganglia dysfunction lead to
Parkinson's disease and Huntington's disease, which have debilitating motor and
cognitive symptoms. However, despite intensive study, both the internal computational
mechanisms of the basal ganglia, and their contribution to normal brain
function, have been elusive. The goal of this thesis is to identify basic principles that
underlie basal ganglia function, with a focus on signal representation, computation,
dynamics, and plasticity.
This process begins with a review of two current hypotheses of normal basal
ganglia function, one being that they automatically select actions on the basis of
past reinforcement, and the other that they compress cortical signals that tend to
occur in conjunction with reinforcement. It is argued that a wide range of experimental
data are consistent with these mechanisms operating in series, and that in
this configuration, compression makes selection practical in natural environments.
Although experimental work is outside the present scope, an experimental means
of testing this proposal in the future is suggested.
The remainder of the thesis builds on Eliasmith & Anderson's Neural Engineering
Framework (NEF), which provides an integrated theoretical account of computation,
representation, and dynamics in large neural circuits. The NEF provides
considerable insight into basal ganglia function, but its explanatory power is potentially
limited by two assumptions that the basal ganglia violate. First, like most
large-network models, the NEF assumes that neurons integrate multiple synaptic
inputs in a linear manner. However, synaptic integration in the basal ganglia is
nonlinear in several respects. Three modes of nonlinearity are examined, including
nonlinear interactions between dendritic branches, nonlinear integration within terminal
branches, and nonlinear conductance-current relationships. The first mode
is shown to affect neuron tuning. The other two modes are shown to enable alternative
computational mechanisms that facilitate learning, and make computation
more flexible, respectively.
Secondly, while the NEF assumes that the feedforward dynamics of individual
neurons are dominated by the dynamics of post-synaptic current, many basal
ganglia neurons also exhibit prominent spike-generation dynamics, including adaptation,
bursting, and hysteresis. Of these, it is shown that the NEF theory of
network dynamics applies fairly directly to certain cases of firing-rate adaptation.
However, more complex dynamics, including nonlinear dynamics that are diverse
across a population, can be described using the NEF equations for representation.
In particular, a neuron's response can be characterized in terms of a more complex
function that extends over both present and past inputs. It is therefore straightforward
to apply NEF methods to interpret the effects of complex cell dynamics at
the network level.
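The NEF representation scheme that the thesis builds on can be made concrete with a minimal sketch (illustrative only, not taken from the thesis): a population of rectified-linear rate neurons stands in for the NEF's LIF tuning curves, encoding a scalar x and recovering it with regularized least-squares linear decoders.

```python
# Minimal NEF-style representation sketch (assumptions: rectified-linear
# rate neurons instead of LIF; scalar represented value x in [-1, 1]).
import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # neurons in the population
gains = rng.uniform(0.5, 2.0, N)         # per-neuron gain alpha_i
biases = rng.uniform(-1.0, 1.0, N)       # per-neuron bias b_i
encoders = rng.choice([-1.0, 1.0], N)    # preferred direction e_i (scalar case)

def rates(x):
    """Firing rates a_i(x) = G[alpha_i * e_i * x + b_i], G = rectification."""
    return np.maximum(0.0, gains * encoders * np.asarray(x)[:, None] + biases)

# Solve for linear decoders d by regularized least squares over sample points.
xs = np.linspace(-1, 1, 101)
A = rates(xs)                            # (samples, N) activity matrix
reg = 0.01 * N
d = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ xs)

x_hat = A @ d                            # decoded estimate of x
print(np.max(np.abs(x_hat - xs)))        # small reconstruction error
```

The same two steps (nonlinear encoding, optimal linear decoding) generalize to vector-valued signals and to decoding functions of x, which is what gives the framework its account of computation.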
The role of spike timing in basal ganglia function is also examined. Although
the basal ganglia have been interpreted in the past to perform computations on
the basis of mean firing rates (over windows of tens or hundreds of milliseconds),
it has recently become clear that patterns of spikes on finer timescales are also
functionally relevant. Past work has shown that precise spike times in sensory
systems contain stimulus-related information, but there has been little study of how post-synaptic neurons might use this information. It is shown that essentially any neuron can exploit such fine-timescale information to perform flexible computations, and that these
computations do not require spike timing that is very precise. As a consequence,
irregular and highly-variable firing patterns can drive behaviour with which they
have no detectable correlation.
Most of the projection neurons in the basal ganglia are inhibitory, and the effect
of one nucleus on another is classically interpreted as subtractive or divisive. Theoretically, very flexible computations can be performed within a projection if each
presynaptic neuron can both excite and inhibit its targets, but this is hardly ever
the case physiologically. However, it is shown here that equivalent computational flexibility is supported by inhibitory projections in the basal ganglia, as a simple consequence of inhibitory collaterals in the target nuclei.
Finally, the relationship between population coding and synaptic plasticity is
discussed. It is shown that Hebbian plasticity, in conjunction with lateral connections, determines both the dimension of the population code and the tuning of
neuron responses within the coded space. These results permit a straightforward
interpretation of the effects of synaptic plasticity on information processing at the
network level.
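The link between Hebbian plasticity and tuning can be illustrated with a toy example (not the thesis's model): Oja's stabilized Hebbian rule drives a single neuron's weight vector toward the principal axis of its input distribution, i.e. plasticity selects the neuron's tuning within the space the inputs span.

```python
# Oja's rule on assumed correlated 2-D inputs: the weight vector converges
# to the unit-norm principal eigenvector of the input covariance.
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[1.0, 0.9], [0.9, 1.0]])       # input covariance; PC1 = (1,1)/sqrt(2)
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)                       # initial synaptic weights
eta = 0.01
for x in X:
    y = w @ x                                # postsynaptic activity
    w += eta * y * (x - y * w)               # Hebbian term + normalization term

print(w / np.linalg.norm(w))                 # ~ +/-(0.707, 0.707)
```

In a population with lateral connections, different neurons are pushed toward different directions within the leading subspace, which is the sense in which plasticity fixes both the dimension of the code and the tuning within it.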
Together with the NEF, these new results provide a rich set of theoretical principles
through which the dominant physiological factors that affect basal ganglia
function can be more clearly understood.
Goal-Directed Decision Making with Spiking Neurons
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT: Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way.
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. This research was supported by the Swiss National Science Foundation (J.F., Grant PBBEP3 146112) and the Wellcome Trust (J.F. and M.L.). This is the author accepted manuscript; it is currently under an indefinite embargo pending publication by the Society for Neuroscience.
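The "cached value" habit learning that this abstract contrasts with goal-directed planning is classically modeled by temporal-difference updates driven by a reward prediction error. A minimal sketch on a hypothetical three-state chain (illustrative only, not the paper's model):

```python
# TD(0) value learning on an assumed toy chain s0 -> s1 -> s2 (reward 1
# on entering s2). delta is the dopamine-like reward prediction error.
gamma, alpha = 0.9, 0.1
V = [0.0, 0.0, 0.0]                    # cached state values
rewards = [0.0, 0.0, 1.0]              # reward received on entering each state

for _ in range(500):                   # repeated episodes
    for s in range(2):                 # transitions s -> s+1
        s_next = s + 1
        delta = rewards[s_next] + gamma * V[s_next] - V[s]   # TD error
        V[s] += alpha * delta
    V[2] = 0.0                         # terminal state carries no future value

print([round(v, 2) for v in V])        # -> [0.9, 1.0, 0.0]
```

The contrast with the paper's topic is that this update only revises stored values after experienced rewards, whereas goal-directed planning must estimate values online by iterating over anticipated transitions.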
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions, and learning is a central aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already in wide use in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in the field.
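As a concrete taste of the field the book surveys, here is a minimal tabular Q-learning example (an assumed toy task, not drawn from the book): an agent learns to walk right along a five-cell corridor to a goal reward.

```python
# Tabular Q-learning on a hypothetical 5-cell corridor; the goal is the
# rightmost cell, reached by repeatedly taking action +1 (move right).
import random

random.seed(0)
n_states, actions = 5, [1, -1]         # move right / move left
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(300):                   # training episodes
    s = 0
    while s != n_states - 1:           # goal at the rightmost cell
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda b: Q[(s, b)])      # eps-greedy
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Standard Q-learning update with a one-step bootstrap.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)]
print(policy)                          # greedy policy: move right everywhere
```

This is the "cached value" style of learning the book's opening chapters formalize: no model of the environment is kept, only a table of action values updated from experienced rewards.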
A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence
This review aims to contribute to the quest for artificial general
intelligence by examining neuroscience and cognitive psychology methods for
potential inspiration. Despite the impressive advancements achieved by deep
learning models in various domains, they still have shortcomings in abstract
reasoning and causal understanding. Such capabilities should be ultimately
integrated into artificial intelligence systems in order to surpass data-driven
limitations and support decision making in a way more similar to human
intelligence. This work is a vertical review that attempts a wide-ranging
exploration of brain function, spanning from lower-level biological neurons,
spiking neural networks, and neuronal ensembles to higher-level concepts such
as brain anatomy, vector symbolic architectures, cognitive and categorization
models, and cognitive architectures. The hope is that these concepts may offer
insights for solutions in artificial general intelligence.
Comment: 143 pages, 49 figures, 244 references
Modeling the impact of internal state on sensory processing
Perception is the result of more than just the unbiased processing of sensory stimuli. At each moment in time, sensory inputs enter a circuit already impacted by signals of arousal, attention, and memory. This thesis aims to understand the impact of such internal states on the processing of sensory stimuli. To do so, computational models meant to replicate known biological circuitry and activity were built and analyzed. Part one aims to replicate the neural activity changes observed in auditory cortex when an animal is passively versus actively listening. In part two, the impact of selective visual attention on performance is probed in two models: a large-scale abstract model of the visual system and a smaller, more biologically-realistic one. Finally in part three, a simplified model of Hebbian learning is used to explore how task context comes to impact prefrontal cortical activity. While the models used in this thesis range in scale and represent diverse brain areas, they are all designed to capture the physical processes by which internal brain states come to impact sensory processing
Continuous Restricted Boltzmann Machines
Restricted Boltzmann machines are a generative neural network. They summarize their input data to build a probabilistic model that can then be used to reconstruct missing data or to classify new data. Unlike discrete Boltzmann machines, where the data are mapped to the space of integers or bitstrings, continuous Boltzmann machines directly use floating point numbers and therefore represent the data with higher fidelity. The primary limitation in using Boltzmann machines for big-data problems is the efficiency of the training algorithm. This paper describes an efficient deterministic algorithm for training continuous machines
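For orientation, a simplified mean-field contrastive-divergence loop for an RBM with continuous (linear) visible units and logistic hidden units can be sketched as follows. This is a generic illustration under assumed toy data; it is not the efficient deterministic algorithm the paper proposes.

```python
# Mean-field CD-1 sketch for an RBM with linear visible units and
# logistic hidden units, trained on assumed 2-D continuous toy data.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal([1.0, -1.0], 0.1, size=(500, 2))   # toy continuous data

n_v, n_h = 2, 8
W = 0.01 * rng.normal(size=(n_v, n_h))               # visible-hidden weights
b_v, b_h = np.zeros(n_v), np.zeros(n_h)              # biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(200):
    v0 = data
    h0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
    v1 = h0 @ W.T + b_v                              # continuous reconstruction
    h1 = sigmoid(v1 @ W + b_h)
    # CD-1 gradient: positive phase minus negative (reconstruction) phase.
    W += lr * ((v0.T @ h0) - (v1.T @ h1)) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)

recon = sigmoid(data @ W + b_h) @ W.T + b_v
print(np.mean((recon - data) ** 2))                  # small reconstruction error
```

Because the visible units pass floating-point values straight through rather than being binarized, the reconstruction retains the fidelity advantage the abstract attributes to continuous machines.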
Orbital Stability Analysis for Perturbed Nonlinear Systems and Natural Entrainment via Adaptive Andronov-Hopf Oscillator
Unsupervised clustering of IoT signals through feature extraction and self organizing maps
The scope of this thesis is to build a clustering model that inspects the structural properties of a dataset composed of IoT signals and classifies these signals with unsupervised clustering algorithms. To this end, a feature-based representation of the signals is used. Different feature selection algorithms are then applied to obtain reduced feature spaces, so as to decrease the computational cost and the memory demand. The IoT signals are clustered using Self-Organizing Maps (SOM) and the results are then evaluated.
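The SOM clustering step can be sketched with a toy example (an illustrative stand-in for the thesis's extracted feature vectors, not its actual pipeline): a one-dimensional map of six units is trained on two-cluster 2-D data, and each point is then labelled by its best-matching unit.

```python
# Self-Organizing Map sketch on assumed 2-D "feature" data with two clusters.
import numpy as np

rng = np.random.default_rng(3)
data = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),
                  rng.normal([1, 1], 0.1, (100, 2))])

n_units = 6                                      # 1-D map of 6 units
w = rng.uniform(0, 1, size=(n_units, 2))         # unit weight vectors
for t in range(1000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((w - x) ** 2, axis=1))        # best-matching unit
    lr = 0.5 * (1 - t / 1000)                            # decaying learning rate
    sigma = 2.0 * (1 - t / 1000) + 0.1                   # shrinking neighbourhood
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
    w += lr * h[:, None] * (x - w)               # pull neighbourhood toward x

# Cluster assignment: each point is labelled by its best-matching unit.
labels = np.argmin(((data[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
print(sorted(set(labels.tolist())))
```

The neighbourhood function is what distinguishes a SOM from plain k-means: nearby map units are updated together, so the learned units preserve the topology of the feature space.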