An Ensemble Semi-Supervised Adaptive Resonance Theory Model with Explanation Capability for Pattern Classification
Most semi-supervised learning (SSL) models entail complex structures and
iterative training processes, and their predictions are difficult for users
to interpret. To address these issues, this paper proposes a new
interpretable SSL model using the supervised and unsupervised Adaptive
Resonance Theory (ART) family of networks, which is denoted as SSL-ART.
Firstly, SSL-ART adopts an unsupervised fuzzy ART network to create a number of
prototype nodes using unlabeled samples. Then, it leverages a supervised fuzzy
ARTMAP structure to map the established prototype nodes to the target classes
using labeled samples. Specifically, a one-to-many (OtM) mapping scheme is
devised to associate a prototype node with more than one class label. The main
advantages of SSL-ART include the capability of: (i) performing online
learning, (ii) reducing the number of redundant prototype nodes through the OtM
mapping scheme and minimizing the effects of noisy samples, and (iii) providing
an explanation facility for users to interpret the predicted outcomes. In
addition, a weighted voting strategy is introduced to form an ensemble SSL-ART
model, which is denoted as WESSL-ART. Every ensemble member, i.e., SSL-ART,
assigns a different weight to each class based on its
performance pertaining to the corresponding class. The aim is to mitigate the
effects of training data sequences on all SSL-ART members and improve the
overall performance of WESSL-ART. The experimental results on eighteen
benchmark data sets, three artificially generated data sets, and a real-world
case study indicate the benefits of the proposed SSL-ART and WESSL-ART models
for tackling pattern classification problems.Comment: 13 pages, 8 figure
A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications
This survey samples from the ever-growing family of adaptive resonance theory
(ART) neural network models used to perform the three primary machine learning
modalities, namely, unsupervised, supervised and reinforcement learning. It
comprises a representative list from classic to modern ART models, thereby
painting a general picture of the architectures developed by researchers over
the past 30 years. The learning dynamics of these ART models are briefly
described, and their distinctive characteristics such as code representation,
long-term memory and corresponding geometric interpretation are discussed.
Useful engineering properties of ART (speed, configurability, explainability,
parallelization and hardware implementation) are examined along with current
challenges. Finally, a compilation of online software libraries is provided. It
is expected that this overview will be helpful to new and seasoned ART
researchers.
Semi-supervised topo-Bayesian ARTMAP for noisy data
This paper presents a novel semi-supervised ART network that inherits noise insensitivity, topology learning, and incremental learning from the Bayesian ARTMAP. It is combined with a label-prediction strategy, based on a clustering technique, that determines the neighboring neurons. The Bayesian ARTMAP update procedure is modified to allow the network to alter its learning rate. The result is a classifier that works online and lifts several limitations of the original Bayesian ARTMAP: it processes arbitrarily scaled values even when their range is not fully known in advance, and it can be employed in online learning applications in which no prior knowledge about the structure and distribution of the data is available. Experiments show good performance, even with noisy data.
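The abstract describes the label-prediction strategy only at a high level (a clustering technique that determines neighboring neurons). A generic sketch of one common realization, majority voting over the k nearest labeled prototype neurons, is shown below; the function name, signature, and use of Euclidean distance are all hypothetical, not the paper's method:

```python
import numpy as np

def predict_label(x, prototypes, labels, k=3):
    """Hypothetical sketch: label sample x by majority vote among
    its k nearest prototype neurons (Euclidean distance assumed)."""
    d = np.linalg.norm(prototypes - x, axis=1)  # distance to each neuron
    nearest = np.argsort(d)[:k]                 # indices of k nearest
    votes = labels[nearest]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]              # most common label wins
```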