Self-Organizing Maps with Variable Input Length for Motif Discovery and Word Segmentation
Time Series Motif Discovery (TSMD) is defined as searching for patterns that
are previously unknown and appear with a given frequency in time series.
Another problem strongly related to TSMD is Word Segmentation. This problem
has received much attention from the community that studies early language
acquisition in babies and toddlers. The development of biologically plausible
models for word segmentation could greatly advance this field. Therefore, in
this article, we propose the Variable Input Length Map (VILMAP) for Motif
Discovery and Word Segmentation. The model is based on the Self-Organizing Maps
and can identify Motifs with different lengths in time series. In our
experiments, we show that VILMAP presents good results in finding Motifs in a
standard Motif discovery dataset and can avoid catastrophic forgetting when
trained with datasets with increasing values of input size. We also show that
VILMAP achieves results similar or superior to those of other methods in the
literature developed for the task of word segmentation.
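To make the underlying mechanism concrete, here is a minimal sketch of a plain 1-D self-organizing map trained on fixed-length sliding windows of a time series, where frequently winning units point to candidate motifs. This is a generic SOM illustration, not the VILMAP model itself: all function names are ours, and VILMAP's handling of variable input lengths is omitted.

```python
import math
import random

def sliding_windows(series, length):
    """Extract all contiguous subsequences of the given length."""
    return [series[i:i + length] for i in range(len(series) - length + 1)]

def train_som(windows, n_units, epochs=20, lr0=0.5, seed=0):
    """Train a 1-D self-organizing map on fixed-length windows.

    Generic SOM sketch only: VILMAP additionally supports variable
    input lengths, which this toy version does not.
    """
    rng = random.Random(seed)
    dim = len(windows[0])
    units = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        sigma = max(1.0, n_units / 2 * (1 - epoch / epochs))  # neighborhood width
        for w in windows:
            # Best matching unit = closest prototype in Euclidean distance.
            bmu = min(range(n_units),
                      key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], w)))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                units[u] = [p + lr * h * (x - p) for p, x in zip(units[u], w)]
    return units

def bmu_counts(windows, units):
    """Count how often each unit wins; frequent winners suggest motifs."""
    counts = [0] * len(units)
    for w in windows:
        bmu = min(range(len(units)),
                  key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], w)))
        counts[bmu] += 1
    return counts
```

On a periodic series, prototypes converge toward the repeated shape and the win counts concentrate on a few units, which is the basic signal a motif-discovery SOM exploits.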
A Semi-Supervised Self-Organizing Map with Adaptive Local Thresholds
In recent years, there has been growing interest in semi-supervised learning,
since, in many learning tasks, there is a plentiful supply of unlabeled data
but an insufficient supply of labeled data. Hence, semi-supervised learning models can
benefit from both types of data to improve the obtained performance. Also, it
is important to develop methods that are easy to parameterize in a way that is
robust to the different characteristics of the data at hand. This article
presents a new method based on Self-Organizing Map (SOM) for clustering and
classification, called Adaptive Local Thresholds Semi-Supervised
Self-Organizing Map (ALTSS-SOM). It can dynamically switch between two forms of
learning at training time, according to the availability of labels, as in
previous models, and can automatically adjust itself to the local variance
observed in each data cluster. The results show that ALTSS-SOM surpasses the
performance of other semi-supervised methods in terms of classification, and
of pure clustering methods when no labels are available, while also being
less sensitive than previous methods to parameter values.
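The idea of a per-node threshold that adapts to local variance can be sketched as follows. This is an illustrative toy, not ALTSS-SOM's actual update rules: the class name and the Welford-style running-variance estimate are our own assumptions about how such a threshold could be maintained.

```python
import math

class LocalThresholdNode:
    """Toy prototype node with a distance threshold tied to local variance.

    Illustration only: ALTSS-SOM's real update rules differ; this just
    shows the idea of a threshold that adapts per node.
    """
    def __init__(self, center):
        self.center = list(center)
        self.n = 1
        self.mean_dist = 0.0   # running mean of distances to the center
        self.var_dist = 0.0    # running variance (Welford-style update)

    def distance(self, x):
        return math.sqrt(sum((c - v) ** 2 for c, v in zip(self.center, x)))

    def threshold(self, k=2.0):
        # Accept points within the mean distance plus k standard deviations.
        return self.mean_dist + k * math.sqrt(self.var_dist)

    def update(self, x, lr=0.1):
        d = self.distance(x)
        self.n += 1
        delta = d - self.mean_dist
        self.mean_dist += delta / self.n
        self.var_dist += (delta * (d - self.mean_dist) - self.var_dist) / self.n
        # Move the prototype toward the sample, as in standard SOM learning.
        self.center = [c + lr * (v - c) for c, v in zip(self.center, x)]
```

A node in a tight cluster ends up with a small threshold, while a node covering spread-out data tolerates larger distances, which is the sense in which the threshold is "local".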
A Semi-Supervised Self-Organizing Map for Clustering and Classification
There has been increasing interest in semi-supervised learning in recent
years because of the great number of datasets with a large amount of
unlabeled data but only a few labeled samples. Semi-supervised learning
algorithms can work with both types of data, combining them to obtain better
performance for both clustering and classification. Also, these datasets
commonly have a high number of dimensions. This article presents a new
semi-supervised method based on self-organizing maps (SOMs) for clustering and
classification, called Semi-Supervised Self-Organizing Map (SS-SOM). The method
can dynamically switch between supervised and unsupervised learning during the
training according to the availability of the class labels for each pattern.
Our results show that the SS-SOM outperforms other semi-supervised methods in
conditions in which there is a small number of labeled samples, while also
achieving good results when all samples are labeled.
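The supervised/unsupervised switch can be sketched with a label-aware best-matching-unit search: when a sample carries a label, the search prefers prototypes with a matching (or unset) label; otherwise it falls back to the usual nearest-prototype rule. This is our own minimal reading of the mechanism, not the authors' exact update rules, and all function names are hypothetical.

```python
def find_bmu(prototypes, labels, x, y=None):
    """Pick the best matching unit for sample x.

    When a label y is given, restrict the search to units whose label
    matches y or is still unset -- a toy version of the supervised mode.
    """
    def dist(i):
        return sum((a - b) ** 2 for a, b in zip(prototypes[i], x))
    if y is None:
        return min(range(len(prototypes)), key=dist)
    candidates = [i for i in range(len(prototypes))
                  if labels[i] is None or labels[i] == y]
    if not candidates:  # no compatible unit: fall back to unsupervised search
        candidates = list(range(len(prototypes)))
    return min(candidates, key=dist)

def train_step(prototypes, labels, x, y=None, lr=0.3):
    """One online update: move the BMU toward x; adopt the label if given."""
    bmu = find_bmu(prototypes, labels, x, y)
    prototypes[bmu] = [p + lr * (v - p) for p, v in zip(prototypes[bmu], x)]
    if y is not None and labels[bmu] is None:
        labels[bmu] = y
    return bmu
```

After training, classification is just an unsupervised BMU lookup followed by reading off the winning unit's label.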
MOEA/D with Uniformly Randomly Adaptive Weights
When working with decomposition-based algorithms, an appropriate set of
weights might improve the quality of the final solutions. A set of uniformly
distributed weights usually leads to well-distributed solutions on a Pareto
front. However, there are two main difficulties with this approach. Firstly, it
may fail depending on the problem geometry. Secondly, the population size
becomes inflexible as the number of objectives increases. In this paper, we
propose MOEA/D with Uniformly Randomly Adaptive Weights (MOEA/D-URAW), which
uses uniformly randomly generated weights as its approach to subproblem
generation, allowing a flexible population size even when working with
many-objective problems. During the evolutionary process, MOEA/D-URAW adds and removes
subproblems as a function of the sparsity level of the population. Moreover,
instead of requiring assumptions about the Pareto front shape, our method
adapts its weights to the shape of the problem during the evolutionary process.
Experimental results using the WFG41-48 problem classes, with different Pareto
front shapes, show that the proposed method achieves better or equal results in
77.5% of the problems evaluated, from 2 to 6 objectives, when compared with
state-of-the-art methods in the literature.
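The add/remove step can be illustrated with a toy weight-adaptation routine: sample weight vectors uniformly at random from the simplex, score each by a sparsity level (here, the product of distances to its k nearest neighbors, a measure used in several adaptive-weight MOEA/D variants), drop the most crowded vector, and insert a fresh random one. This is a simplified sketch under our own assumptions, not the full MOEA/D-URAW procedure, which ties sparsity to the evolving population of solutions.

```python
import random

def random_weight(m, rng):
    """Sample a weight vector uniformly from the (m-1)-simplex."""
    cuts = sorted(rng.random() for _ in range(m - 1))
    pts = [0.0] + cuts + [1.0]
    return [pts[i + 1] - pts[i] for i in range(m)]

def sparsity(point, others, k=3):
    """Sparsity level of a point: product of distances to its k nearest
    neighbors (larger value = sparser region)."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(point, o)) ** 0.5
                   for o in others if o is not point)
    prod = 1.0
    for d in dists[:k]:
        prod *= d
    return prod

def adapt_weights(weights, n_add=1, seed=0):
    """Toy add/remove step: drop the most crowded weight vector, then add
    fresh uniformly random ones, keeping the population size flexible."""
    rng = random.Random(seed)
    crowded = min(range(len(weights)),
                  key=lambda i: sparsity(weights[i], weights))
    new = [w for i, w in enumerate(weights) if i != crowded]
    for _ in range(n_add):
        new.append(random_weight(len(weights[0]), rng))
    return new
```

Because new subproblems are drawn from the whole simplex rather than a fixed lattice, the population size is not tied to a simplex-lattice design, which is what makes this scheme attractive for many-objective problems.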