Whereas the Kohonen Self-Organizing Map shows an asymptotic level density
following a power law with magnification exponent 2/3, an exponent of 1 would
be desirable in order to provide an optimal mapping in the sense of
information theory. In this paper, we study analytically and numerically the
magnification behaviour of the Elastic Net algorithm as a model for
self-organizing feature maps. In contrast to the Kohonen map, the Elastic Net
shows no power law; nevertheless, for one-dimensional maps the density follows
a universal magnification law, i.e. it depends only on the local stimulus
density, is independent of position, and decouples from the stimulus density
at other positions.

Comment: 8 pages, 10 figures. Link to publisher under
http://link.springer.de/link/service/series/0558/bibs/2415/24150939.ht
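The Kohonen magnification law contrasted here (asymptotic weight density proportional to the stimulus density raised to the power 2/3, in one dimension) can be probed numerically. The following sketch is an illustration of that classical result, not the paper's Elastic Net analysis: it trains a simple 1-D SOM on stimuli with density p(x) = 2x and estimates the exponent from the inter-weight spacings. The learning rate, neighborhood width, and map size are arbitrary choices for the demonstration.

```python
import numpy as np

# Illustrative 1-D Kohonen SOM (NOT the Elastic Net studied in the paper):
# stimuli drawn from p(x) = 2x on [0,1]; the classical magnification
# law predicts an asymptotic weight density ~ p(x)^(2/3).
rng = np.random.default_rng(0)
N = 50
w = np.sort(rng.random(N))            # ordered initial weights on [0,1]
idx = np.arange(N)

def som_step(w, x, eps=0.05, sigma=1.0):
    """One online update with a Gaussian neighborhood around the winner."""
    s = np.argmin(np.abs(w - x))      # best-matching unit
    h = np.exp(-0.5 * ((idx - s) / sigma) ** 2)
    return w + eps * h * (x - w)

for _ in range(100_000):
    x = np.sqrt(rng.random())         # inverse-CDF sampling of p(x) = 2x
    w = som_step(w, x)

# Local weight density ~ 1/spacing; fit log(1/spacing) against log(position)
# away from the map borders to estimate the magnification exponent.
ws = np.sort(w)
spacing = np.diff(ws)
mid = (ws[:-1] + ws[1:]) / 2
alpha, _ = np.polyfit(np.log(mid[5:-5]), np.log(1.0 / spacing[5:-5]), 1)
print(f"estimated magnification exponent: {alpha:.2f}")
```

Note that with a finite map and a Gaussian neighborhood the fitted exponent only approximates 2/3; it depends on the neighborhood width and on border effects, which is why the fit excludes units near the ends of the chain.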