A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications
This survey samples from the ever-growing family of adaptive resonance theory
(ART) neural network models used to perform the three primary machine learning
modalities, namely, unsupervised, supervised and reinforcement learning. It
comprises a representative list from classic to modern ART models, thereby
painting a general picture of the architectures developed by researchers over
the past 30 years. The learning dynamics of these ART models are briefly
described, and their distinctive characteristics such as code representation,
long-term memory and corresponding geometric interpretation are discussed.
Useful engineering properties of ART (speed, configurability, explainability,
parallelization and hardware implementation) are examined along with current
challenges. Finally, a compilation of online software libraries is provided. It
is expected that this overview will be helpful to new and seasoned ART
researchers.
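The ART learning dynamics surveyed above can be made concrete with a toy example. The sketch below is a minimal fuzzy ART clusterer (complement coding, choice function, vigilance test, fast learning). It is an illustrative simplification, not a faithful reproduction of any particular model in the survey; the parameter names `rho` (vigilance), `alpha` (choice), and `beta` (learning rate) follow common usage.

```python
import numpy as np

def complement_code(x):
    # Standard fuzzy ART preprocessing: concatenate x with 1 - x
    return np.concatenate([x, 1.0 - x])

def fuzzy_art(data, rho=0.7, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering sketch; returns a category label per input."""
    W = []       # long-term memory: one weight vector per category
    labels = []
    for x in map(complement_code, data):
        # Choice function: fuzzy AND (element-wise min) against each prototype
        T = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in W]
        for j in np.argsort(T)[::-1]:          # search categories by activation
            match = np.minimum(x, W[j]).sum() / x.sum()
            if match >= rho:                   # vigilance test passed: resonance
                W[j] = beta * np.minimum(x, W[j]) + (1 - beta) * W[j]
                labels.append(j)
                break
        else:                                  # all categories reset: recruit a new one
            W.append(x.copy())
            labels.append(len(W) - 1)
    return labels
```

With `beta=1.0` this is "fast learning": the winning prototype shrinks to the fuzzy AND of itself and the input, which is the hyperbox geometry referred to in the survey.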
Neuroengineering of Clustering Algorithms
Cluster analysis can be broadly divided into multivariate data visualization, clustering algorithms, and cluster validation. This dissertation contributes neural network-based techniques to perform all three unsupervised learning tasks. In particular, the first paper provides a comprehensive review of adaptive resonance theory (ART) models for engineering applications and provides context for the four subsequent papers. These papers are devoted to enhancements of ART-based clustering algorithms from (a) a practical perspective, by exploiting the visual assessment of cluster tendency (VAT) sorting algorithm as a preprocessor for ART offline training, thus mitigating ordering effects; and (b) an engineering perspective, by designing a family of multi-criteria ART models: dual vigilance fuzzy ART and distributed dual vigilance fuzzy ART (both capable of detecting complex cluster structures), merge ART (which aggregates partitions and lessens ordering effects in online learning), and cluster validity index vigilance in fuzzy ART (which features robust vigilance parameter selection and alleviates ordering effects in offline learning). The sixth paper enhances data visualization with self-organizing maps (SOMs) by depicting information-theoretic similarity measures between neighboring neurons on the reduced-dimension, topology-preserving SOM grid. This visualization's parameters are estimated using samples selected via a single-linkage procedure, thereby generating heatmaps that portray more homogeneous within-cluster similarities and crisper between-cluster boundaries. The seventh paper presents incremental cluster validity indices (iCVIs) realized by (a) incorporating existing formulations of online computations for clusters' descriptors, or (b) modifying an existing ART-based model and incrementally updating local density counts between prototypes.
Moreover, this last paper provides the first comprehensive comparison of iCVIs in the computational intelligence literature. --Abstract, page iv
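The online computations of clusters' descriptors that the iCVIs build on can be illustrated with a small sketch. Below is a Welford-style incremental update of a cluster's mean and within-cluster scatter; it is a generic example of such running descriptors, not the exact formulation used in the paper.

```python
import numpy as np

class OnlineClusterStats:
    """Incrementally maintained cluster descriptors (mean and compactness),
    the kind of one-pass computation incremental validity indices build on."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.ssq = 0.0  # within-cluster sum of squared distances to the mean

    def update(self, x):
        # Welford-style update: exact running mean and scatter, one sample at a time
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.ssq += np.dot(delta, x - self.mean)
```

After any number of updates, `mean` and `ssq` match what a batch pass over the same points would produce, which is what makes such descriptors usable in online learning.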
Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition
Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored in an attractive fixed point at a discrete location in state space. A nonlinear line of attraction is the encapsulation of attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns.
It is usually imperative to guarantee the convergence of the dynamics of a recurrent network for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The conception of a nonlinear line attractor network whose dynamics operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior, allowing the nonlinear line attractor model to manifest complex dynamics in an unsupervised manner.
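The convergence/divergence idea can be caricatured in two dimensions. The toy sketch below fits a polynomial "line of attraction" to a pattern family and uses the residual of a new state to decide between convergence (familiar) and divergence (unfamiliar, which would trigger creating a new line). The polynomial fit and the threshold `tol` are stand-ins of this sketch, not the dissertation's memory-matrix formulation.

```python
import numpy as np

def learn_line(patterns, degree=2):
    # Fit a polynomial "nonlinear line" through a family of 2-D patterns
    # (a toy stand-in for the learned memory matrices).
    return np.polyfit(patterns[:, 0], patterns[:, 1], degree)

def recall(coeffs, point, tol=0.2):
    """Recall dynamics: states near the line converge onto it (stability);
    states far from it are flagged as unfamiliar (instability/divergence),
    which would trigger creating a new line for the novel pattern."""
    x, y = point
    residual = abs(y - np.polyval(coeffs, x))
    if residual < tol:                           # inside the basin of attraction
        return (x, float(np.polyval(coeffs, x))), True
    return (x, y), False                         # diverge: unknown pattern
```

The familiar/unfamiliar flag is exactly the self-organizing trigger described above: a `False` result is the cue to recruit a new line rather than distort an existing one.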
The third contribution of this dissertation is the introduction of the concept of a manifold of color perception.
The fourth contribution of this dissertation is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space utilizing the learned memory matrices of the nonlinear line attractor network.
Development of a system for computing affective states is also presented in this dissertation. This system can extract the user's mental state in real time using a low-cost computer, and it has been successfully interfaced with an advanced learning environment for human-computer interaction.
Brain network mechanisms in learning behavior
The study of learning has been a central focus of psychology and neuroscience since their inception. Cognitive neuroscience’s traditional approach to understanding learning has been to decompose it into discrete cognitive processes with separable and localized underlying neural systems. While this focus on modular cognitive functions for individual brain areas has led to considerable progress, there is increasing evidence that much of learning behavior relies on overlapping cognitive and neural systems, which may be harder to disentangle than previously envisioned. This is not surprising, as the processes underlying learning must involve widespread integration of information from sensory, affective, and motor sources. The standard tools of cognitive neuroscience limit our ability to describe processes that rely on widespread coordination of brain activity. To understand learning, it will be necessary to characterize dynamic co-activation at the circuit level.
In this dissertation, I present three studies that seek to describe the roles of distributed brain networks in learning. I begin by giving an overview of our current understanding of multiple forms of learning, describing the neural and computational mechanisms thought to underlie incremental feedback-based learning and flexible episodic memory. I will focus in particular on the difficulties in separating these processes at the cognitive level and in localizing them to individual regions at the neural level. I will then describe recent findings that have begun to characterize the brain’s large-scale network structure, emphasizing the potential roles that distributed networks could play in understanding learning and cognition more generally. I will end the introduction by reviewing current attempts to characterize the dynamics of large-scale brain networks, which will be essential for providing a mechanistic link to learning behavior.
Chapter 2 is a study demonstrating that intrinsic connectivity between the hippocampus and the ventromedial prefrontal cortex, as well as between these regions and distributed brain networks, is related to individual differences in the transfer of learning on a sensory preconditioning task. The hippocampus and ventromedial prefrontal cortex have both been shown to be involved in this type of learning, and this study represents an early attempt to link connectivity between individual regions and broader networks to learning processes.
Chapter 3 is a study that takes advantage of recent developments in mathematical modeling of temporal networks to demonstrate a relationship between large-scale network dynamics and reinforcement learning within individuals. This study shows that the flexibility of network connectivity in the striatum is related to learning performance over time, as well as to individual differences in parameters estimated from computational models of reinforcement learning. Notably, connectivity between the striatum and visual as well as orbitofrontal regions increased over the course of the task, which is consistent with an integrative role for the region in learning value-based associations. Network flexibility in a distinct set of regions is associated with episodic memory for object images presented during the learning task.
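The computational models of reinforcement learning referred to here typically reduce to a delta-rule value update whose free parameters (most commonly the learning rate) are fit to each individual's behavior. A generic sketch, not the study's exact model:

```python
def rescorla_wagner(rewards, alpha=0.3):
    """Delta-rule value updating: each outcome shifts the value estimate by
    a prediction error scaled by the learning rate alpha. Fitting alpha to a
    subject's choices is the kind of parameter estimation described above."""
    v, trajectory = 0.0, []
    for r in rewards:
        v += alpha * (r - v)   # prediction error: r - v
        trajectory.append(v)
    return trajectory
```

A larger `alpha` makes the value estimate track recent outcomes more closely, which is why fitted learning rates serve as a per-subject summary of learning behavior.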
Chapter 4 examines the role of dopamine, a neurotransmitter strongly linked to value updating in reinforcement learning, in the dynamic network changes occurring during learning. Patients with Parkinson’s disease, who experience a loss of dopaminergic neurons in the substantia nigra, performed a reversal-learning task while undergoing functional magnetic resonance imaging. Patients were scanned on and off of a dopamine precursor medication (levodopa) in a within-subject design in order to examine the impact of dopamine on brain network dynamics during learning. The reversal provided an experimental manipulation of dynamic connectivity, and patients on medication showed greater modulation of striatal-cortical connectivity. Similar results were found in a number of regions receiving midbrain projections, including the prefrontal cortex and medial temporal lobe. This study indicates that dopamine inputs from the midbrain modulate large-scale network dynamics during learning, providing a direct link between reinforcement learning theories of value updating and network neuroscience accounts of dynamic connectivity.
Together, these results indicate that large-scale networks play a critical role in multiple forms of learning behavior. Each highlights the potential importance of understanding dynamic routing and integration of information across large-scale circuits for our conception of learning and other cognitive processes. Understanding the when, where, and how of this information flow in the brain may provide an alternative or complement to traditional theories of distinct learning systems. These studies also illustrate challenges in integrating this perspective with established theories in cognitive neuroscience. Chapter 5 will situate the studies in a broader discussion of how brain activity relates to cognition in general, while pointing out current roadblocks and potential ways forward for a cognitive network neuroscience of learning.
A Novel Synergistic Model Fusing Electroencephalography and Functional Magnetic Resonance Imaging for Modeling Brain Activities
Study of the human brain is an important and very active area of research. Unraveling the way the human brain works would allow us to better understand, predict, and prevent brain-related diseases that affect a significant part of the population. Studying the brain's response to certain input stimuli can help us determine the brain areas involved and understand the mechanisms that characterize behavioral and psychological traits.
In this research work, two methods used for monitoring brain activity, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have been studied for their fusion, in an attempt to bring together the advantages of each one. In particular, this work has focused on the analysis of a specific type of EEG and fMRI recordings that are related to certain events and capture the brain response under specific experimental conditions.
Using spatial features of the EEG, we can describe the temporal evolution of the electrical field recorded at the scalp. This work introduces the use of Hidden Markov Models (HMMs) for modeling EEG dynamics. This novel approach is applied to the discrimination of normal subjects and progressive Mild Cognitive Impairment patients with significant results.
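One common way HMM likelihoods support such discrimination is to fit one HMM per group and assign a new recording to the group whose model scores it higher. The scaled forward algorithm below computes that score for a discrete observation sequence; it is a textbook illustration under that assumption (the dissertation's EEG features are continuous topographies, not discrete symbols).

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial distribution pi, transition matrix A, emission matrix B),
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]      # joint of state and first observation
    s = alpha.sum()
    ll = np.log(s)
    alpha = alpha / s              # rescale; accumulate log of the scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll
```

Classification then amounts to `argmax` over per-group models of `forward_loglik(sequence, ...)`.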
EEG alone cannot provide the spatial localization needed to uncover and understand the neural mechanisms and processes of the human brain. Functional magnetic resonance imaging (fMRI) provides the means of localizing functional activity, without, however, providing the timing details of these activations. Although at first glance it is apparent that the strengths of these two modalities, EEG and fMRI, complement each other, fusing the information provided by each one is a challenging task. A novel methodology for fusing EEG spatiotemporal features and fMRI features, based on Canonical Partial Least Squares (CPLS), is presented in this work. An HMM modeling approach is used to derive a novel feature-based representation of the EEG signal that characterizes the topographic information of the EEG: the HMM is used to project the EEG data into the Fisher score space, and the Fisher score describes the dynamics of the EEG topography sequence. The correspondence between this new feature and the fMRI is studied using CPLS. This methodology is applied to extracting features for the classification of a visual task. The results indicate that the proposed methodology is able to capture task-related activations that can be used for the classification of mental tasks. Extensions of the proposed models are examined along with future research directions and applications.