Astrophysical Data Analytics based on Neural Gas Models, using the Classification of Globular Clusters as Playground
In Astrophysics, the identification of candidate Globular Clusters through
deep, wide-field, single-band HST images is a typical data analytics problem,
where methods based on Machine Learning have shown high efficiency and
reliability, demonstrating the capability to improve on traditional
approaches. Here we experimented with some variants of the known Neural Gas
model, exploring both supervised and unsupervised paradigms of Machine
Learning, on the classification of Globular Clusters extracted from the
NGC1399 HST data. The main focus of this work was to use a well-tested
playground to scientifically validate such models for further extended
experiments in astrophysics, using other standard Machine Learning methods
(for instance Random Forest and Multi-Layer Perceptron neural networks) for a
comparison of performances in terms of purity and completeness.
Comment: Proceedings of the XIX International Conference "Data Analytics and
Management in Data Intensive Domains" (DAMDID/RCDL 2017), Moscow, Russia,
October 10-13, 2017, 8 pages, 4 figures
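The purity and completeness measures used to compare the classifiers correspond to per-class precision and recall. A minimal sketch (the labels and class encoding below are illustrative, not taken from the paper's catalogue):

```python
import numpy as np

def purity_completeness(y_true, y_pred, positive=1):
    """Purity (precision) and completeness (recall) for one class.

    Purity: fraction of objects classified as `positive` that truly are.
    Completeness: fraction of true `positive` objects that are recovered.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    purity = tp / (tp + fp) if tp + fp else 0.0
    completeness = tp / (tp + fn) if tp + fn else 0.0
    return purity, completeness

# Toy example: 1 = candidate Globular Cluster, 0 = background source
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
p, c = purity_completeness(y_true, y_pred)  # both 0.8 for this toy sample
```

The same two numbers can be computed per classifier (Neural Gas variant, Random Forest, MLP) to reproduce the kind of comparison the abstract describes.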
A new self-organizing neural gas model based on Bregman divergences
In this paper, a new self-organizing neural gas model that we call the Growing
Hierarchical Bregman Neural Gas (GHBNG) is proposed. Our proposal is based on the Growing Hierarchical Neural Gas (GHNG), into which Bregman divergences are incorporated in order to compute the winning neuron. This model has been applied to anomaly detection in video sequences, together with a Faster R-CNN as an object detector module. Experimental results confirm not only the effectiveness of the GHBNG for the detection of anomalous objects in video sequences but also its self-organization capabilities.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
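Selecting the winning neuron via a Bregman divergence generalizes the usual nearest-prototype rule. A minimal sketch, using two classic Bregman divergences (the function names and prototype values are illustrative, not the GHBNG implementation):

```python
import numpy as np

def squared_euclidean(x, w):
    # Bregman divergence generated by ||.||^2 (the standard SOM/NG case)
    return np.sum((x - w) ** 2)

def generalized_kl(x, w, eps=1e-12):
    # Generalized I-divergence for nonnegative vectors,
    # the Bregman divergence generated by sum(x * log x)
    x = x + eps
    w = w + eps
    return np.sum(x * np.log(x / w) - x + w)

def winning_neuron(x, prototypes, divergence=squared_euclidean):
    """Return the index of the prototype minimizing the chosen divergence."""
    return int(np.argmin([divergence(x, w) for w in prototypes]))

protos = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
x = np.array([0.75, 0.25])
winner = winning_neuron(x, protos)  # prototype closest under the divergence
```

Swapping the `divergence` argument changes which prototype wins without touching the rest of the competitive-learning loop, which is the flexibility the abstract attributes to the Bregman formulation.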
A Growing Self-Organizing Network for Reconstructing Curves and Surfaces
Self-organizing networks such as Neural Gas, Growing Neural Gas and many
others have been adopted in actual applications for both dimensionality
reduction and manifold learning. Typically, in these applications, the
structure of the adapted network yields a good estimate of the topology of the
unknown subspace from where the input data points are sampled. The approach
presented here takes a different perspective, namely by assuming that the input
space is a manifold of known dimension. In return, the new type of growing
self-organizing network presented gains the ability to adapt itself in a way
that may guarantee the effective and stable recovery of the exact topological
structure of the input manifold.
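The rank-based adaptation rule that Neural Gas and its growing variants share can be sketched as follows; the learning-rate and neighborhood values are illustrative, not those of the network described above:

```python
import numpy as np

def neural_gas_step(prototypes, x, eps=0.5, lam=1.0):
    """One Neural Gas adaptation step: every prototype moves toward the
    input x with a strength that decays with its distance rank."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    ranks = np.argsort(np.argsort(dists))   # rank 0 = closest prototype
    h = np.exp(-ranks / lam)                # rank-based neighborhood function
    return prototypes + eps * h[:, None] * (x - prototypes)

rng = np.random.default_rng(0)
protos = rng.random((4, 2))                 # random initial prototypes
x = np.array([0.5, 0.5])                    # one input sample
updated = neural_gas_step(protos, x)        # all prototypes move toward x
```

Growing variants add and prune prototypes and edges on top of this step; the topology recovered from the surviving edges is what the abstract's guarantee concerns.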
Hierarchical growing neural gas
"The original publication is available at www.springerlink.com". Copyright Springer.
This paper describes TreeGNG, a top-down unsupervised learning method that produces hierarchical classification schemes. TreeGNG is an extension to the Growing Neural Gas algorithm that maintains a time history of the learned topological mapping. TreeGNG is able to correct poor decisions made during the early phases of the construction of the tree, and provides the novel ability to influence the general shape and form of the learned hierarchy.
Advances in Self Organising Maps
The Self-Organizing Map (SOM) with its related extensions is the most popular
artificial neural algorithm for use in unsupervised learning, clustering,
classification and data visualization. Over 5,000 publications have been
reported in the open literature, and many commercial projects employ the SOM as
a tool for solving hard real-world problems. Every two years, the "Workshop on
Self-Organizing Maps" (WSOM) covers the new developments in the field. The WSOM
series of conferences was initiated in 1997 by Prof. Teuvo Kohonen, and has
been successfully organized in 1997 and 1999 by the Helsinki University of
Technology, in 2001 by the University of Lincolnshire and Humberside, and in
2003 by the Kyushu Institute of Technology. The Université Paris I
Panthéon Sorbonne (SAMOS-MATISSE research centre) organized WSOM 2005 in
Paris on September 5-8, 2005.
Comment: Special Issue of the Neural Networks Journal after WSOM 05 in Paris
Magnification Control in Self-Organizing Maps and Neural Gas
We consider different ways to control the magnification in self-organizing
maps (SOM) and neural gas (NG). Starting from early approaches of magnification
control in vector quantization, we then concentrate on different approaches for
SOM and NG. We show that three structurally similar approaches can be applied
to both algorithms: localized learning, concave-convex learning, and winner
relaxing learning. Thereby, the approach of concave-convex learning in SOM is
extended to a more general description, whereas the concave-convex learning for
NG is new. In general, the control mechanisms generate only slightly different
behavior comparing both neural algorithms. However, we emphasize that the NG
results are valid for any data dimension, whereas in the SOM case the results
hold only for the one-dimensional case.
Comment: 24 pages, 4 figures
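The common idea behind the three control schemes is to modulate the adaptation strength with a local quantity, so that the prototype density follows a chosen power of the input density. A heavily simplified sketch in that spirit (the exponent, step sizes, and the use of the winner distance as a local-density proxy are illustrative assumptions, not the exact rules analyzed in the paper):

```python
import numpy as np

def magnification_step(prototypes, x, eps=0.2, kappa=1.0, lam=1.0):
    """Rank-based NG-style update whose step size is additionally scaled
    by the winner distance raised to an exponent kappa; varying kappa
    shifts the effective magnification (kappa = 0 recovers the plain
    rank-based update). Illustrative sketch only."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    ranks = np.argsort(np.argsort(dists))
    h = np.exp(-ranks / lam)                  # rank-based neighborhood
    scale = dists[np.argmin(dists)] ** kappa  # crude local-density proxy
    return prototypes + eps * scale * h[:, None] * (x - prototypes)

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
x = np.array([0.25, 0.25])
upd = magnification_step(protos, x)           # prototypes drift toward x
```

In the localized-learning scheme the scaling comes from an estimate of the local input density; concave-convex and winner-relaxing learning achieve a similar effect by reshaping the update rule itself.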