WeVoS scale invariant map

Abstract

This study presents and analyzes a novel method, called Weighted Voting Superposition (WeVoS), for improving the training of topology-preserving algorithms characterized by a scale-invariant mapping. It is applied here to the Scale Invariant Feature Map (SIM) and the Maximum Likelihood Hebbian Learning Scale Invariant Map (Max-SIM), yielding two new versions, WeVoS–SIM and WeVoS–Max-SIM. The method is based on training an ensemble of networks and combining them into a single map that retains the best features of each network in the ensemble. To accomplish this combination, a weighted voting process takes place between the corresponding units of the maps in the ensemble to determine the characteristics of the units of the resulting map. For a complete comparative study, the new models are compared with their original counterparts, the SIM and Max-SIM, as well as with probably the best-known topology-preserving model, the Self-Organizing Map. The models are tested on two ad hoc artificial data sets and one real-world data set, all characterized by an internal radial distribution. Four different quality measures are applied to each model in order to provide a complete assessment of their capabilities. The results confirm that the novel WeVoS-based models presented in this study can outperform the classic models in terms of the organization of the presented information.
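
To illustrate the unit-level fusion idea described above, the following is a minimal sketch, not the authors' implementation: it assumes each trained map is represented as an array of unit weight vectors sharing the same grid ordering, and that per-unit votes come from some generic quality score (the paper's actual voting scheme and update rule may differ).

    import numpy as np

    def wevos_fuse(maps, votes):
        """Fuse an ensemble of topology-preserving maps into a single map.

        Simplified sketch of the weighted-voting idea: for every unit
        position, the units of all maps in the ensemble "vote" on the
        characteristics of the corresponding unit of the fused map, with
        votes weighted by a per-unit quality score.

        maps  : list of np.ndarray, each of shape (n_units, dim)
                unit weight vectors of each trained map (hypothetical
                representation; all maps share the same grid ordering)
        votes : np.ndarray of shape (n_maps, n_units)
                non-negative quality-based vote of each map for each unit

        Returns an array of shape (n_units, dim): the fused map whose units
        are the vote-weighted combination of the corresponding units.
        """
        stacked = np.stack(maps)                           # (n_maps, n_units, dim)
        w = votes / np.maximum(votes.sum(axis=0), 1e-12)   # normalise votes per unit
        return np.einsum("mu,mud->ud", w, stacked)

    # Toy usage: three 4x4 maps of 2-D prototypes with uniform votes.
    rng = np.random.default_rng(0)
    ensemble = [rng.normal(size=(16, 2)) for _ in range(3)]
    uniform_votes = np.ones((3, 16))
    fused = wevos_fuse(ensemble, uniform_votes)
    print(fused.shape)  # (16, 2)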
