Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey is focused mainly on the networks of Hopfield,
Willshaw and Potts, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. In
conclusion we discuss the relations to similarity search, advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
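The abstract's core ingredients, a local (Hebbian) learning rule and iterative dynamics driven by locally available information, can be illustrated with a minimal Hopfield-style sketch. This is a generic textbook construction, not any specific model from the survey; the pattern count and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store bipolar (+1/-1) patterns with the Hebbian outer-product rule,
# then retrieve by iterating the sign dynamics from a corrupted cue.
n, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Local learning rule: each weight depends only on its two neurons' activities.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=10):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)     # each neuron updates from its local input sum
        s[s == 0] = 1
    return s

# Corrupt 10 of 100 bits of the first pattern and recover it.
cue = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
cue[flip] *= -1
recalled = retrieve(cue)
overlap = float((recalled == patterns[0]).mean())  # near-perfect overlap expected
```

With 5 random patterns in 100 neurons the network is well below the classical capacity limit, so retrieval from a 10%-corrupted cue converges to the stored pattern; the surveyed models differ in how they push the stored-item count past the vector dimension.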
Regularized brain reading with shrinkage and smoothing
Functional neuroimaging measures how the brain responds to complex stimuli.
However, sample sizes are modest, noise is substantial, and stimuli are high
dimensional. Hence, direct estimates are inherently imprecise and call for
regularization. We compare a suite of approaches which regularize via
shrinkage: ridge regression, the elastic net (a generalization of ridge
regression and the lasso), and a hierarchical Bayesian model based on small
area estimation (SAE). We contrast regularization with spatial smoothing and
combinations of smoothing and shrinkage. All methods are tested on functional
magnetic resonance imaging (fMRI) data from multiple subjects participating in
two different experiments related to reading, for both predicting neural
response to stimuli and decoding stimuli from responses. Interestingly, when
the regularization parameters are chosen by cross-validation independently for
every voxel, low/high regularization is chosen in voxels where the
classification accuracy is high/low, indicating that the regularization
intensity is a good tool for identification of relevant voxels for the
cognitive task. Surprisingly, all the regularization methods work about equally
well, suggesting that beating basic smoothing and shrinkage will take not only
clever methods, but also careful modeling.
Comment: Published at http://dx.doi.org/10.1214/15-AOAS837 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Cocaine Addiction as a Homeostatic Reinforcement Learning Disorder
Drug addiction implicates both reward learning and homeostatic regulation mechanisms of the brain. This has stimulated two partially successful theoretical perspectives on addiction. Many important aspects of addiction, however, remain to be explained within a single, unified framework that integrates the two mechanisms. Building upon a recently developed homeostatic reinforcement learning theory, the authors focus on a key transition stage of addiction that is well modeled in animals, escalation of drug use, and propose a computational theory of cocaine addiction where cocaine reinforces behavior due to its rapid homeostatic corrective effect, whereas its chronic use induces slow and long-lasting changes in homeostatic setpoint. Simulations show that our new theory accounts for key behavioral and neurobiological features of addiction, most notably, escalation of cocaine use, drug-primed craving and relapse, individual differences underlying dose-response curves, and dopamine D2-receptor downregulation in addicts. The theory also generates unique predictions about cocaine self-administration behavior in rats that are confirmed by new experimental results. Viewing addiction as a homeostatic reinforcement learning disorder coherently explains many behavioral and neurobiological aspects of the transition to cocaine addiction, and suggests a new perspective toward understanding addiction.
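The abstract's two key ingredients, a rapid homeostatic corrective effect of each dose and a slow drift of the setpoint under chronic use, can be caricatured in a few lines. This toy simulation is not the authors' model; the decay, titration, and drift constants are arbitrary assumptions chosen only to exhibit escalation.

```python
# Toy homeostatic sketch: an internal state h decays between doses, the
# agent titrates each dose to close the gap to the setpoint h_star, and
# chronic use slowly shifts the setpoint upward.
h, h_star = 0.0, 1.0
doses = []
for t in range(200):
    h *= 0.9                    # internal state decays between doses
    dose = h_star - h           # rapid corrective effect: close the deviation
    h += dose
    h_star += 0.02 * dose       # slow, long-lasting setpoint change
    doses.append(dose)
```

Because the setpoint keeps rising, the deviation the agent must correct, and hence the dose, grows over time: a minimal picture of escalation of use under a drifting homeostatic target.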
From sequences to cognitive structures: neurocomputational mechanisms
Ph.D. Thesis.
Understanding how the brain forms representations of structured information distributed in time is
a challenging neuroscientific endeavour, necessitating computationally and neurobiologically
informed study. Human neuroimaging evidence demonstrates engagement of a fronto-temporal
network, including ventrolateral prefrontal cortex (vlPFC), during language comprehension.
Corresponding regions are engaged when processing dependencies between word-like items in
Artificial Grammar (AG) paradigms. However, the neurocomputations supporting dependency
processing and sequential structure-building are poorly understood. This work aimed to clarify these
processes in humans, integrating behavioural, electrophysiological and computational evidence.
I devised a novel auditory AG task to assess simultaneous learning of dependencies between adjacent
and non-adjacent items, incorporating learning aids including prosody, feedback, delineated
sequence boundaries, staged pre-exposure, and variable intervening items. Behavioural data obtained
in 50 healthy adults revealed strongly bimodal performance despite these cues. Notably, however,
reaction times revealed sensitivity to the grammar even in low performers. Behavioural and
intracranial electrode data were subsequently obtained in 12 neurosurgical patients performing this
task. Despite chance behavioural performance, time- and time-frequency domain
electrophysiological analysis revealed selective responsiveness to sequence grammaticality in regions
including vlPFC. I developed a novel neurocomputational model (VS-BIND: “Vector-symbolic
Sequencing of Binding INstantiating Dependencies”), triangulating evidence to clarify putative
mechanisms in the fronto-temporal language network. I then undertook multivariate analyses on the
AG task neural data, revealing responses compatible with the presence of ordinal codes in vlPFC,
consistent with VS-BIND. I also developed a novel method of causal analysis on multivariate
patterns, representational Granger causality, capable of detecting flow of distinct representations
within the brain. This alluded to top-down transmission of syntactic predictions during the AG task,
from vlPFC to auditory cortex, largely in the opposite direction to stimulus encodings, consistent
with predictive coding accounts. It finally suggested roles for the temporoparietal junction and
frontal operculum during grammaticality processing, congruent with prior literature.
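The "vector-symbolic" part of the model's name refers to a family of architectures in which items are bound to roles (here, ordinal positions) by elementwise operations on high-dimensional vectors. The sketch below is a generic vector-symbolic binding demonstration, not the thesis's VS-BIND mechanism; the dimension, item names, and cleanup-by-dot-product step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1024

def rand_vec():
    """Random bipolar (+1/-1) vector, near-orthogonal to others in high d."""
    return rng.choice([-1, 1], size=d)

items = {name: rand_vec() for name in ["A", "B", "C"]}
roles = [rand_vec() for _ in range(3)]          # ordinal codes: positions 1-3

# Bind each item to its position (elementwise product) and superpose the
# bound pairs into a single sequence vector.
seq = sum(roles[i] * items[name] for i, name in enumerate(["A", "B", "C"]))

# Unbind position 2 (self-inverse binding) and clean up by nearest stored item.
probe = roles[1] * seq
best = max(items, key=lambda name: int(items[name] @ probe))
```

Unbinding recovers the second item ("B") up to crosstalk noise from the other bound pairs, which the dot-product cleanup removes; ordinal codes of this kind are what the multivariate analyses above were probing for in vlPFC.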
This work provides novel insights into the neurocomputational basis of cognitive structure-building,
generating hypotheses for future study, and potentially contributing to AI and translational efforts.
Wellcome Trust, European Research Council