
    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner relaxing learning. In doing so, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case. Comment: 24 pages, 4 figures
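
    As a concrete illustration of the first of these mechanisms, here is a minimal NumPy sketch of localized learning on a one-dimensional SOM chain: the winner's learning rate is modulated by a local data-density estimate raised to a control power. The function name, the exponent m, and the `density_estimate` callable are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def som_step_localized(weights, x, eps0, sigma, m, density_estimate):
    """One localized-learning step on a 1-D SOM chain (sketch).

    Magnification control: the learning rate is scaled by a local
    density estimate at the winner, raised to the power -m.
    `density_estimate` is a hypothetical callable."""
    s = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # best matching unit
    grid = np.arange(len(weights))
    h = np.exp(-((grid - s) ** 2) / (2.0 * sigma ** 2))       # Gaussian neighborhood
    eps = eps0 * density_estimate(weights[s]) ** (-m)         # locally modulated rate
    return weights + eps * h[:, None] * (x - weights)
```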

    Magnification Control in Winner Relaxing Neural Gas

    An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner relaxing approach for the neural gas network. Originally, winner relaxing learning is a slight modification of the self-organizing map learning rule that allows adjustment of the magnification behavior by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner relaxing term only adds computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches. Comment: 14 pages, 2 figures
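
    A minimal sketch of how such a winner-relaxing neural gas step might look: the standard rank-based NG update, with an extra relaxing term of strength mu applied to the winner by analogy with the winner-relaxing Kohonen rule. The precise form and sign of the extra term are assumptions here, not the paper's derivation.

```python
import numpy as np

def wrng_step(weights, x, eps, lam, mu):
    """One winner-relaxing neural gas step (sketch).

    Standard NG moves every unit by eps * exp(-rank/lam) * (x - w);
    the winner additionally receives a relaxing term of strength mu
    (form and sign are assumptions)."""
    d = np.linalg.norm(weights - x, axis=1)
    ranks = np.argsort(np.argsort(d))              # rank 0 = winner
    h = np.exp(-ranks / lam)
    delta = eps * h[:, None] * (x - weights)       # plain NG update
    winner = int(np.argmin(d))
    mask = np.arange(len(weights)) != winner
    relax = (h[mask, None] * (x - weights[mask])).sum(axis=0)
    delta[winner] -= eps * mu * relax              # winner-relaxing term
    return weights + delta
```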

    Winner-Relaxing Self-Organizing Maps

    A new family of self-organizing maps, the Winner-Relaxing Kohonen Algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows steering the magnification exponent over the wide range from 1/2 to 1 in the one-dimensional case, thus providing optimal mapping in the sense of information theory. The Winner Relaxing Algorithm requires minimal extra computations per learning step and is conveniently easy to implement. Comment: 14 pages (6 figures included). To appear in Neural Computation
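
    A sketch of the generalized rule described here: non-winning units update as in the standard SOM, while the winner receives an extra term whose strength lam steers the magnification exponent. The normalization and the correspondence of lam = 1/2 to Kohonen's 1991 variant are assumptions of this sketch.

```python
import numpy as np

def wrk_step(weights, x, eps, sigma, lam):
    """One step of a winner-relaxing Kohonen update on a 1-D map
    (sketch; exact normalization is an assumption)."""
    s = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # winner
    grid = np.arange(len(weights))
    h = np.exp(-((grid - s) ** 2) / (2.0 * sigma ** 2))
    delta = eps * h[:, None] * (x - weights)       # ordinary Kohonen term
    mask = grid != s
    extra = (h[mask, None] * (x - weights[mask])).sum(axis=0)
    delta[s] -= eps * lam * extra                  # winner-relaxing term
    return weights + delta
```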

    Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner

    The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law in the one-dimensional case, which can be obtained analytically. The Winner-Enhancing case allows a magnification exponent of one to be achieved and therefore provides optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing Algorithm requires minimal extra computations per learning step and is conveniently easy to implement. Comment: 6 pages, 5 figures. For an extended version refer to cond-mat/0208414 (Neural Computation 17, 996-1009)
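
    For reference, the one-dimensional magnification law discussed in this and the preceding abstract can be written compactly as follows, where P is the input density and rho the resulting weight density. The exponent 2/3 for the standard SOM is the classical Ritter-Schulten result; the other values are those stated in the abstracts, with the exponent one (reached by winner enhancing) being the information-theoretic optimum.

```latex
% One-dimensional magnification law: weight density as a power of input density
\rho(w) \propto P(w)^{\alpha}, \qquad
\alpha_{\mathrm{SOM}} = \tfrac{2}{3}, \quad
\alpha_{\mathrm{WRK}} = \tfrac{4}{7}, \quad
\alpha \in \left[\tfrac{1}{2},\, 1\right] \;\text{(winner relaxing/enhancing)}
```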

    Investigation of topographical stability of the concave and convex Self-Organizing Map variant

    We investigate, by a systematic numerical study, the parameter dependence of the stability of the Kohonen Self-Organizing Map and of the Zheng and Greenleaf concave and convex learning variant with respect to different input distributions and input and output dimensions.
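
    The concave-convex variant studied here can be sketched as an elementwise signed power applied to the usual difference term, with xi = 1 recovering the standard SOM; the exact functional form used by Zheng and Greenleaf may differ from this sketch.

```python
import numpy as np

def cc_som_step(weights, x, eps, sigma, xi):
    """Concave (xi < 1) / convex (xi > 1) SOM learning step (sketch;
    the exact Zheng-Greenleaf form is an assumption)."""
    s = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    grid = np.arange(len(weights))
    h = np.exp(-((grid - s) ** 2) / (2.0 * sigma ** 2))
    diff = x - weights
    step = np.sign(diff) * np.abs(diff) ** xi      # concave/convex nonlinearity
    return weights + eps * h[:, None] * step
```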

    Some Further Evidence about Magnification and Shape in Neural Gas

    Neural gas (NG) is a robust vector quantization algorithm with a well-known mathematical model. According to this model, the neural gas samples the underlying data distribution following a power law, with a magnification exponent that depends on data dimensionality only. The effects of shape in the input data distribution, however, are not entirely covered by the NG model above, due to the technical difficulties involved. The experimental work described here shows that shape is indeed relevant in determining the overall NG behavior; in particular, some experiments reveal richer and more complex behaviors induced by shape that cannot be explained by the power law alone. Although a more comprehensive analytical model remains to be defined, the evidence collected in these experiments suggests that the NG algorithm has an interesting potential for detecting complex shapes in noisy datasets.
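
    The power law referred to here is the standard neural gas magnification result (Martinetz et al.), whose exponent is set by the (intrinsic) data dimensionality d alone:

```latex
% Neural gas magnification law: exponent depends only on data dimensionality
\rho(w) \propto P(w)^{\alpha_{\mathrm{NG}}}, \qquad
\alpha_{\mathrm{NG}} = \frac{d}{d+2}
```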

    Auto-SOM: recursive parameter estimation for guidance of self-organizing feature maps

    An important technique for exploratory data analysis is to form a mapping from the high-dimensional data space to a low-dimensional representation space such that neighborhoods are preserved. A popular method for achieving this is Kohonen's self-organizing map (SOM) algorithm. However, in its original form, this requires the user to choose the values of several parameters heuristically to achieve good performance. Here we present the Auto-SOM, an algorithm that estimates the learning parameters automatically during the training of SOMs. The application of Auto-SOM provides the facility to avoid neighborhood violations up to a user-defined degree in either mapping direction. Auto-SOM consists of a Kalman filter implementation of the SOM coupled with a recursive parameter estimation method. The Kalman filter trains the neurons' weights with estimated learning coefficients so as to minimize the variance of the estimation error. The recursive parameter estimation method estimates the width of the neighborhood function by minimizing the prediction error variance of the Kalman filter. In addition, the "topographic function" is incorporated to measure neighborhood violations and prevent the map's converging to configurations with neighborhood violations. It is demonstrated that neighborhoods can be preserved in both mapping directions as desired for dimension-reducing applications. The development of neighborhood-preserving maps and their convergence behavior is demonstrated by three examples accounting for the basic applications of self-organizing feature maps.
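
    A heavily simplified sketch of the core idea: treat each neuron's weight as a Kalman state estimate, so the Kalman gain replaces a hand-tuned learning coefficient. The scalar per-neuron error variance, the noise parameters q and r, and the omission of the recursive sigma estimation and the topographic function are all simplifying assumptions relative to the paper.

```python
import numpy as np

def auto_som_step(weights, P, x, sigma, q, r):
    """Kalman-filtered SOM step in the spirit of Auto-SOM (sketch,
    with one scalar error variance per neuron)."""
    s = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    grid = np.arange(len(weights))
    h = np.exp(-((grid - s) ** 2) / (2.0 * sigma ** 2))
    P = P + q                        # predict: add process-noise variance
    K = P / (P + r)                  # gain = variance-minimizing step size
    weights = weights + (h * K)[:, None] * (x - weights)
    P = (1.0 - h * K) * P            # measurement update of error variance
    return weights, P
```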

    Study of the magnification effect on self-organizing maps

    Self-Organizing Maps (SOM) are a type of neural network (Kohonen, 1982b) that has been used mainly in data clustering problems, using unsupervised learning. Among the multiple areas of application, SOM has been used in various problems of direct interest to the Navy (V. J. Lobo, 2009), including route planning and the location of critical infrastructures. SOMs have also been used to sample large databases. In this sort of application they exhibit a behaviour called the magnification effect (Bauer & Der, 1996), which causes lower-density areas of the data's attribute space to be overrepresented, or magnified. This dissertation takes an experimental approach to mitigate the lack of theoretical explanation for this effect, which is available only for one-dimensional and quite simple cases. From experimental evidence obtained for carefully designed problems we infer a relationship between input data densities and output neuron densities that can be applied universally, or at least in a broad set of situations. A large number of experiments were conducted using one-dimensional to one-dimensional mappings, followed by 2D to 2D and 3D to 1D, 2D, and 3D mappings. We derived an empirical relationship whereby the density in the output space equals a constant times the density of the input space raised to a power alpha which, although depending on a number of factors, can be approximated by the n-th root of 2/3, where n is the input space dimension. The correlation that we found in our experiments, for both the well-known 1-dimensional case and for the more general 2- and 3-dimensional cases, is a useful guide for predicting the magnification effect in practical situations. Therefore, in chapter 4 we produce a population cartogram of Angola and we prove that our relation can be used to correct the magnification effect in 2-dimensional cases.
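
    The empirical relationship described in this abstract can be stated compactly as follows; note that for n = 1 the exponent reduces to 2/3, matching the classical one-dimensional SOM result.

```latex
% Empirical magnification relation inferred in the dissertation
\rho_{\mathrm{out}} \approx c \cdot \rho_{\mathrm{in}}^{\,\alpha}, \qquad
\alpha \approx \sqrt[n]{2/3} \quad (n = \text{input space dimension})
```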

    Magnitude Sensitive Competitive Neural Networks

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). These are a set of competitive learning algorithms that include a magnitude term as a modulating factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the desired zones, defined by the magnitude, are represented in high detail. These networks have been compared with other vector quantization algorithms on various examples of interpolation, color reduction, surface modeling, classification, and several simple demonstration examples. In addition, a new image compression algorithm is introduced, MSIC (Magnitude Sensitive Image Compression), which makes use of the previously mentioned algorithms and achieves a compression of the image that varies according to a user-defined magnitude. The results show that the new MSCNN neural networks are more versatile than other competitive learning algorithms and present a clear improvement in vector quantization over them when the data are weighted by a magnitude indicating the "interest" of each sample.
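
    One way such a magnitude-modulated competition could look, by analogy with frequency-sensitive competitive learning: each unit's distance is scaled by its accumulated magnitude, so units sitting in high-magnitude regions are penalized in the competition and further units are drawn there, refining those regions. The exact modulation used in MSCNNs may differ; this is an illustrative assumption.

```python
import numpy as np

def mscnn_step(weights, magnitudes, x, x_mag, eps):
    """Magnitude-sensitive competitive step (sketch; the modulation
    form is an assumption, by analogy with FSCL)."""
    d = np.linalg.norm(weights - x, axis=1) * magnitudes   # modulated distance
    s = int(np.argmin(d))
    weights[s] += eps * (x - weights[s])                   # winner moves toward x
    magnitudes[s] += eps * (x_mag - magnitudes[s])         # track sample magnitude
    return weights, magnitudes
```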

    A sequential algorithm for training the SOM prototypes based on higher-order recursive equations

    A novel training algorithm is proposed for the formation of Self-Organizing Maps (SOM). In the proposed model, the weights are updated incrementally by using a higher-order difference equation, which implements a low-pass digital filter. It is possible to improve selected features of the self-organization process with respect to the basic SOM by suitably designing the filter. Moreover, from this model, new visualization tools can be derived for cluster visualization and for monitoring the quality of the map.
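
    To see why the classic SOM update is a first-order low-pass filter of the input stream, and how a higher-order recursion generalizes it, consider this sketch: a momentum-like term raises the filter order from one to two. The coefficients here are illustrative, not the paper's filter design.

```python
import numpy as np

def ho_som_step(w, w_prev, x, sigma, eps, beta):
    """SOM step driven by a second-order difference equation (sketch).

    The classic rule w <- w + eps*h*(x - w) is a first-order low-pass
    filter; the beta*(w - w_prev) term raises the order by one."""
    s = int(np.argmin(np.linalg.norm(w - x, axis=1)))          # best matching unit
    grid = np.arange(len(w))
    h = np.exp(-((grid - s) ** 2) / (2.0 * sigma ** 2))[:, None]
    w_next = w + eps * h * (x - w) + beta * (w - w_prev)       # higher-order recursion
    return w_next, w                                           # new state + shifted history
```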