Investigation of topographical stability of the concave and convex Self-Organizing Map variant
We investigate, in a systematic numerical study, how the stability of the Kohonen Self-Organizing Map and of the Zheng and Greenleaf concave and convex learning variant depends on the learning parameters, the input distribution, and the input and output dimensions.
Evaluating a Self-Organizing Map for Clustering and Visualizing Optimum Currency Area Criteria
Optimum currency area (OCA) theory attempts to define the geographical region in which it would maximize economic efficiency to have a single currency. In this paper, the focus is on prospective and current members of the Economic and Monetary Union. For this task, a self-organizing neural network, the Self-organizing map (SOM), is combined with hierarchical clustering in a two-level approach to clustering and visualizing OCA criteria. The output of the SOM is a topology-preserving two-dimensional grid. The final models are evaluated on both clustering tendencies and accuracy measures. Thereafter, the two-dimensional grid of the chosen model is used for visual assessment of the OCA criteria, while its clustering results are projected onto a geographic map.
Keywords: Self-organizing maps, Optimum Currency Area, projection, clustering, geospatial visualization
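The two-level approach described above (train SOM prototypes, then cluster the prototypes hierarchically) can be sketched in plain Python. The grid size, learning schedule, and single-linkage merging below are illustrative assumptions for a minimal sketch, not the settings used in the paper.

```python
import math
import random

def train_som(data, grid_size=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """First level: train a minimal 1-D Kohonen SOM on input vectors."""
    rng = random.Random(seed)
    # prototypes initialised to copies of random data points
    w = [list(rng.choice(data)) for _ in range(grid_size)]
    dim = len(data[0])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # decaying neighbourhood width
        for x in data:
            # best-matching unit: prototype closest to x
            bmu = min(range(grid_size),
                      key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
            for i in range(grid_size):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for d in range(dim):
                    w[i][d] += lr * h * (x[d] - w[i][d])
    return w

def agglomerate(protos, k):
    """Second level: single-linkage agglomerative clustering of prototypes."""
    clusters = [[i] for i in range(len(protos))]
    dim = len(protos[0])
    def dist(a, b):  # single linkage: closest pair between two clusters
        return min(sum((protos[i][d] - protos[j][d]) ** 2 for d in range(dim))
                   for i in a for j in b)
    while len(clusters) > k:
        # merge the two closest clusters
        _, a, b = min((dist(clusters[a], clusters[b]), a, b)
                      for a in range(len(clusters))
                      for b in range(a + 1, len(clusters)))
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters
```

With country-level OCA indicators as `data`, the cluster labels of the prototypes would then be mapped back to the countries for the geographic projection step.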
Winner-Relaxing Self-Organizing Maps
A new family of self-organizing maps, the Winner-Relaxing Kohonen Algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows one to steer the magnification exponent over the wide range from 1/2 to 1 in the one-dimensional case, thus providing an optimal mapping in the sense of information theory. The Winner-Relaxing Algorithm requires minimal extra computation per learning step and is easy to implement.
Comment: 14 pages (6 figures included). To appear in Neural Computation
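A winner-relaxing-style learning step can be sketched as an ordinary Kohonen update plus an extra term applied only to the winner. The exact form and sign conventions of the relaxing term are derived in the paper; the version below, with relaxation parameter `lam` (where `lam = 0` recovers the standard SOM update), is an illustrative assumption.

```python
import math

def wrk_update(w, x, bmu, lr=0.1, sigma=1.0, lam=0.25):
    """One winner-relaxing-style learning step (illustrative sketch).

    Every node gets the usual neighbourhood-weighted Kohonen step;
    the winner `bmu` additionally receives a relaxing term built from
    the other nodes' errors, scaled by `lam`.
    """
    n, dim = len(w), len(x)
    # Gaussian neighbourhood function on a 1-D grid
    h = [math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2)) for i in range(n)]
    # relaxing term: neighbourhood-weighted sum of the non-winner errors
    relax = [sum(h[i] * (x[d] - w[i][d]) for i in range(n) if i != bmu)
             for d in range(dim)]
    for i in range(n):
        for d in range(dim):
            step = h[i] * (x[d] - w[i][d])
            if i == bmu:
                step -= lam * relax[d]
            w[i][d] += lr * step
    return w
```

Since the extra term reuses quantities already computed for the standard update, the per-step overhead is small, consistent with the abstract's claim of minimal extra computation.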
Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner
The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law in the one-dimensional case, where it can be obtained analytically. The Winner-Enhancing case allows one to achieve a magnification exponent of one and therefore provides an optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computation per learning step and is easy to implement.
Comment: 6 pages, 5 figures. For an extended version refer to cond-mat/0208414 (Neural Computation 17, 996-1009)
Magnification Control in Self-Organizing Maps and Neural Gas
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. In doing so, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior between the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
Comment: 24 pages, 4 figures
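Of the three control mechanisms above, concave-convex learning is the easiest to sketch: the idea is to reshape the adaptation step nonlinearly so that the map's effective magnification exponent shifts. The sketch below assumes the step magnitude is raised to an exponent `kappa` (`kappa < 1` concave, `kappa > 1` convex, `kappa = 1` the standard SOM step); the precise placement of the exponent in the Zheng-Greenleaf rule and its extension in this paper differ in detail, so this is illustrative only.

```python
import math

def concave_convex_step(w, x, bmu, lr=0.1, sigma=1.0, kappa=1.0):
    """One concave-convex-style learning step on a 1-D SOM grid (sketch).

    The usual step lr * h * (x - w) has the magnitude of its error term
    raised to the exponent `kappa` (sign-preserving), which is the knob
    that shifts the effective magnification; kappa = 1 is the plain SOM.
    """
    for i in range(len(w)):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        for d in range(len(x)):
            err = x[d] - w[i][d]
            # sign-preserving power of the error magnitude
            w[i][d] += lr * h * math.copysign(abs(err) ** kappa, err)
    return w
```

The same reshaping idea carries over to neural gas by replacing the grid-distance neighbourhood `h` with a rank-based neighbourhood over the prototypes.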