457 research outputs found
Hebbian covariance learning and self-tuning optimal control
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1997. Includes bibliographical references (leaves 65-70). by Daniel L. Young. S.M.
Activation Learning by Local Competitions
Despite its great success, backpropagation has certain limitations that
necessitate the investigation of new learning methods. In this study, we
present a biologically plausible local learning rule that improves upon Hebb's
well-known proposal and discovers unsupervised features by local competitions
among neurons. This simple learning rule enables the creation of a forward
learning paradigm called activation learning, in which the output activation
(sum of the squared output) of the neural network estimates the likelihood of
the input patterns, or "learn more, activate more" in simpler terms. For
classification on a few small classical datasets, activation learning performs
comparably to backpropagation using a fully connected network, and outperforms
backpropagation when there are fewer training samples or unpredictable
disturbances. Additionally, the same trained network can be used for a variety
of tasks, including image generation and completion. Activation learning also
achieves state-of-the-art performance on several real-world datasets for
anomaly detection. This new learning paradigm, which has the potential to unify
supervised, unsupervised, and semi-supervised learning and is reasonably more
resistant to adversarial attacks, deserves in-depth investigation.Comment: Updated Equation (13) for the modification rule with feedback; Adding
discussions regarding activation learning for anormaly detectio
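The competition-gated Hebbian idea described in the abstract can be illustrated with a generic sketch. This is not the paper's actual rule (its Equation (13) is not given here); the gating form, Oja-style normalization, and learning rate below are all illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of a Hebbian update with local competition among
# output neurons: each neuron's learning is gated by its share of the
# total squared activation ("learn more, activate more").
rng = np.random.default_rng(0)
n_in, n_out = 16, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))

def step(W, x, lr=0.05):
    y = W @ x                                # linear responses
    gate = y**2 / (np.sum(y**2) + 1e-9)      # competition: share of activation
    # Oja-style Hebbian term keeps the weight norms bounded
    dW = lr * gate[:, None] * (np.outer(y, x) - y[:, None]**2 * W)
    return W + dW

for _ in range(500):
    x = rng.normal(size=n_in)
    W = step(W, x)

# "output activation" of a probe input: sum of squared outputs
activation = np.sum((W @ rng.normal(size=n_in))**2)
```

The gating term makes strongly responding neurons learn fastest, so different output units specialize on different input directions without any supervision signal.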
Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cell that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
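The core SFA step that the hierarchy is built from can be sketched in its minimal linear form: whiten the signal, then take the direction whose temporal derivative has the least variance. The toy two-source signal and the direct eigen-solution below are assumptions for illustration; the paper's model is hierarchical and operates on visual input:

```python
import numpy as np

# Minimal linear SFA sketch: recover the slowest-varying component
# of a mixed signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
slow = np.sin(t)                      # slowly varying source
fast = np.sin(40 * t)                 # quickly varying source
X = np.stack([slow + 0.1 * fast, fast + 0.1 * slow], axis=1)

X = X - X.mean(axis=0)
# Whiten the signal so all directions have unit variance
cov = X.T @ X / len(X)
d, E = np.linalg.eigh(cov)
Z = X @ E @ np.diag(1.0 / np.sqrt(d))
# Among whitened directions, pick the one whose time derivative
# has the smallest variance -- the "slowest" feature
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
vals, vecs = np.linalg.eigh(dcov)
slow_feature = Z @ vecs[:, 0]         # eigenvector with least derivative variance
```

On this toy mixture the extracted feature closely tracks the slow source; stacking such nodes with nonlinear expansions gives the hierarchical model described above.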
A Re-Examination of Hebbian-Covariance Rules and Spike Timing-Dependent Plasticity in Cat Visual Cortex in vivo
Spike timing-dependent plasticity (STDP) is considered a ubiquitous rule for associative plasticity in cortical networks in vitro. However, limited supporting evidence for its functional role has been provided in vivo. In particular, there are very few studies demonstrating the co-occurrence of synaptic efficacy changes and alteration of sensory responses in adult cortex during Hebbian or STDP protocols. We addressed this issue by reviewing and comparing the functional effects of two types of cellular conditioning in cat visual cortex. The first, referred to as the “covariance” protocol, obeys a generalized Hebbian framework by imposing, for different stimuli, supervised positive and negative changes in covariance between postsynaptic and presynaptic activity rates. The second protocol, based on intracellular recordings, replicated in vivo variants of the theta-burst paradigm (TBS), proven successful in inducing long-term potentiation in vitro. Since it was shown to impose a precise correlation delay between the electrically activated thalamic input and the TBS-induced postsynaptic spike, this protocol can be seen as a probe of causal (“pre-before-post”) STDP. By choosing as the afferent site for supervised electrical stimulation a thalamic region where the visual field representation was in retinotopic overlap with the intracellularly recorded cortical receptive field, this protocol allowed us to look for possible correlates between STDP and functional reorganization of the conditioned cortical receptive field. The rate-based “covariance” protocol induced significant and large-amplitude changes in receptive field properties, in both kitten and adult V1 cortex. The TBS STDP-like protocol produced in the adult significant changes in the synaptic gain of the electrically activated thalamic pathway, but the statistical significance of the functional correlates was detectable mostly at the population level.
Comparison of our observations with the literature leads us to re-examine the experimental status of spike timing-dependent potentiation in adult cortex. We propose the existence of a correlation-based threshold in vivo that limits the expression of STDP-induced changes outside the critical period and accounts for the stability of synaptic weights during sensory cortical processing in the absence of attention or reward-gated supervision.
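The "covariance" protocol above rests on the textbook Hebbian covariance rule, in which the weight change tracks the covariance of pre- and postsynaptic rates about their means, Δw ∝ (x − ⟨x⟩)(y − ⟨y⟩). A sketch of that generic rule (the rates, learning rate, and running-mean constants here are illustrative assumptions, not the experimental protocol):

```python
import numpy as np

# Generic Hebbian covariance rule: weight change proportional to the
# deviation of pre- and postsynaptic rates from their running means.
rng = np.random.default_rng(2)
pre = rng.poisson(5.0, size=1000).astype(float)          # presynaptic rates
post = 0.8 * pre + rng.normal(scale=1.0, size=1000)      # correlated postsynaptic rates

w, lr = 0.0, 0.01
pre_mean = post_mean = None
for x, y in zip(pre, post):
    # exponential running means stand in for <x> and <y>
    pre_mean = x if pre_mean is None else 0.99 * pre_mean + 0.01 * x
    post_mean = y if post_mean is None else 0.99 * post_mean + 0.01 * y
    w += lr * (x - pre_mean) * (y - post_mean)           # covariance-based update
```

Positively covarying activity potentiates the synapse, as in this run; imposing negative covariance between the same terms would drive depression, which is what the supervised protocol exploits.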
Afferents integration and neural adaptive control of breathing
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from PDF version of thesis. Includes bibliographical references. The respiratory regulatory system is one of the most extensively studied homeostatic systems in the body. Despite its deceptively mundane physiological function, the mechanism underlying the robust control of the motor act of breathing in the face of constantly changing internal and external challenges throughout one's life is still poorly understood. Traditionally, control of breathing has been studied with a highly reductionist approach, with specific stimulus-response relationships being taken to reflect distinct feedback/feedforward control laws. It is assumed that the overall respiratory response can be described as the linear sum of all unitary stimulus-response relationships under a Sherringtonian framework. Such a divide-and-conquer approach has proven useful in predicting the independent effects of specific chemical and mechanical inputs. However, it has limited predictive power for the respiratory response in realistic disease states when multiple factors come into play. Instead, vast amounts of evidence have revealed the existence of complex interactions of various afferent-efferent signals in defining the overall respiratory response. This thesis aims to explore the nonlinear interaction of afferents in respiratory control. In a series of computational simulations, it was shown that the respiratory response in humans during muscular exercise under a variety of pulmonary gas exchange defects is consistent with an optimal interaction of mechanical and chemical afferents. This provides a new understanding of the impact of pulmonary gas exchange on the adaptive control of the exercise respiratory response.
Furthermore, from a series of in-vivo neurophysiology experiments in rats, it was discovered that certain respiratory neurons in the dorsolateral pons of the rat brainstem integrate central and peripheral chemoreceptor afferent signals in a hypoadditive manner. Such nonlinear interaction provides evidence of classical (Pavlovian) conditioning of chemoreceptor inputs that modulate the respiratory rhythm and motor output. These findings demonstrate a powerful gain-modulation function for control of breathing by the lower brain. The computational and experimental studies in this thesis reveal a form of associative learning important for adaptive control of respiratory regulation, at both the behavioral and neuronal levels. Our results shed new light on the mechanism of respiratory control from an integrative modeling perspective and will inform future experimental and theoretical work. by Chung Tin. Ph.D.
AI of Brain and Cognitive Sciences: From the Perspective of First Principles
In recent years, we have witnessed the great success of AI in various applications,
including image classification, game playing, protein structure analysis,
language translation, and content generation. Despite these powerful
applications, there are still many everyday tasks that are simple for humans
but pose great challenges to AI. These include image and language
understanding, few-shot learning, abstract concepts, and low-energy
computing. Thus, learning from the brain remains a promising way to
shed light on the development of next-generation AI. The brain is arguably the
only known intelligent machine in the universe, which is the product of
evolution for animals surviving in the natural environment. At the behavior
level, psychology and cognitive sciences have demonstrated that human and
animal brains can execute very intelligent high-level cognitive functions. At
the structure level, cognitive and computational neurosciences have unveiled
that the brain has extremely complicated but elegant network forms to support
its functions. Over years, people are gathering knowledge about the structure
and functions of the brain, and this process is accelerating recently along
with the initiation of giant brain projects worldwide. Here, we argue that the
general principles of brain functions are the most valuable things to inspire
the development of AI. These general principles are the rules by which the
brain extracts, represents, manipulates, and retrieves information, and here
we call them the first principles of the brain. This paper collects six such
first principles: attractor networks, criticality, random networks, sparse
coding, relational memory, and perceptual learning. For each topic, we review
its biological background, fundamental properties, potential applications
to AI, and future development. Comment: 59 pages, 5 figures, review article
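Of the six principles listed, the attractor network is the easiest to make concrete. A minimal Hopfield-style sketch (the pattern, network size, and update schedule are illustrative assumptions, not drawn from the review):

```python
import numpy as np

# Minimal attractor-network sketch: a Hopfield network stores a pattern
# as a fixed point and recovers it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                 # Hebbian outer-product storage

state = pattern.copy()
state[:2] *= -1                          # corrupt two bits of the cue
for _ in range(5):                       # synchronous updates to convergence
    state = np.sign(W @ state)

# state settles back onto the stored pattern: pattern completion
```

This fixed-point dynamics is the sense in which attractor networks support robust memory retrieval from partial or noisy input, one of the brain-inspired computations the review surveys.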