    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner relaxing learning. The approach of concave-convex learning in SOM is thereby extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case. Comment: 24 pages, 4 figures
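    The localized-learning idea named above can be illustrated in a few lines: the learning rate of each update is modulated by an estimate of the local input density raised to a control exponent, which shifts the map's magnification. The sketch below is a toy 1-D SOM under this scheme; the parameter names, the crude kernel density estimate, and the rate clipping are illustrative assumptions, not the paper's exact formulation (which also covers the concave-convex and winner relaxing variants).

```python
import numpy as np

# Toy 1-D SOM with "localized learning" magnification control: the learning
# rate for each update is scaled by an estimate of the input density at the
# input, raised to a control exponent m (m = 0 recovers the plain SOM).
# eps0, sigma, m, the kernel estimate, and the clipping are assumptions.

rng = np.random.default_rng(0)

n_units = 20
weights = np.sort(rng.random(n_units))       # 1-D chain of prototypes
grid = np.arange(n_units)                    # grid positions of the units

eps0, sigma, m = 0.1, 2.0, -0.5              # base rate, neighborhood width, control exponent
data = rng.beta(2.0, 5.0, size=10_000)       # non-uniform input density

def density_estimate(x, samples, h=0.05):
    """Crude Gaussian kernel estimate of the input density at x."""
    return np.mean(np.exp(-0.5 * ((samples - x) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

for x in data:
    s = np.argmin(np.abs(weights - x))                    # winner unit
    eps = eps0 * density_estimate(x, data[:500]) ** m     # density-modulated rate
    eps = min(eps, 0.5)                                   # clip to keep the toy stable
    h = np.exp(-0.5 * ((grid - s) / sigma) ** 2)          # neighborhood function
    weights += eps * h * (x - weights)                    # SOM update toward x
```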

    Neural Networks: Implementations and Applications

    Artificial neural networks, also called neural networks, have been used successfully in many fields, including engineering, science, and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

    Forecasting the CATS benchmark with the Double Vector Quantization method

    The Double Vector Quantization method, a long-term forecasting method based on the SOM algorithm, has been used to predict the 100 missing values of the CATS competition data set. An analysis of the proposed time series is provided to estimate the dimension of the auto-regressive part of this nonlinear auto-regressive forecasting method. Based on this analysis, experimental results using the Double Vector Quantization (DVQ) method are presented and discussed. As one of the features of the DVQ method is its ability to predict scalars as well as vectors of values, the number of iterative predictions needed to reach the prediction horizon is also examined. The method's stability over the long term allows reliable values to be obtained for a rather long forecasting horizon. Comment: Accepted for publication in Neurocomputing, Elsevier
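    A toy sketch of the double quantization idea may help: one codebook is learned on lagged regressors x_t, a second on their one-step deformations x_{t+1} - x_t, and forecasting iterates by quantizing the current regressor, applying an associated deformation codevector, and repeating up to the horizon. The paper quantizes with SOMs; k-means stands in here purely for brevity, and the codebook sizes, regressor dimension d, and the deterministic choice of the most frequent deformation class are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(series, d):
    """Stack lagged regressors x_t = (y_t, ..., y_{t+d-1}) of dimension d."""
    return np.lib.stride_tricks.sliding_window_view(series, d)

def dvq_forecast(series, d=4, n_states=30, n_deforms=30, horizon=100):
    X = embed(series, d)                # regressors
    D = X[1:] - X[:-1]                  # one-step deformations

    km_x = KMeans(n_states, n_init=5).fit(X[:-1])   # codebook on regressors
    km_d = KMeans(n_deforms, n_init=5).fit(D)       # codebook on deformations

    # For each state cluster, pick the deformation codevector whose class
    # occurs most often after that state (a deterministic simplification).
    cond = np.zeros((n_states, d))
    for k in range(n_states):
        mask = km_x.labels_ == k
        if mask.any():
            cls = np.bincount(km_d.labels_[mask], minlength=n_deforms).argmax()
            cond[k] = km_d.cluster_centers_[cls]

    x = X[-1].astype(float).copy()
    preds = []
    for _ in range(horizon):
        k = km_x.predict(x[None, :])[0]  # quantize the current regressor
        x = x + cond[k]                  # apply the associated deformation
        preds.append(x[-1])              # last coordinate is the new value
    return np.array(preds)

# Illustrative usage on a toy series, forecasting 100 values ahead.
t = np.linspace(0.0, 60.0, 3000)
series = np.sin(t) + 0.1 * np.random.default_rng(3).standard_normal(t.size)
print(dvq_forecast(series)[:5])
```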

    On the use of self-organizing maps to accelerate vector quantization

    Self-organizing maps (SOM) are widely used for their topology preservation property: neighboring input vectors are quantized (or classified) either at the same location or at neighboring ones on a predefined grid. SOM are also widely used for their more classical vector quantization property. We show in this paper that using SOM instead of the more classical Simple Competitive Learning (SCL) algorithm drastically increases the speed of convergence of the vector quantization process. This fact is demonstrated through extensive simulations on artificial and real examples, with specific SOM (fixed and decreasing neighborhoods) and SCL algorithms. Comment: Following the ESANN 199 conference
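    The contrast between the two algorithms is easy to see in code: SCL moves only the winning prototype toward each input, while the SOM also drags grid neighbors along, with a neighborhood radius that shrinks toward zero so that both end up as plain vector quantizers. The sketch below compares the two on uniform 2-D data; the schedules, grid, and distortion check are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def scl_step(weights, x, eps):
    s = np.argmin(np.linalg.norm(weights - x, axis=1))
    weights[s] += eps * (x - weights[s])           # winner-only update

def som_step(weights, grid, x, eps, sigma):
    s = np.argmin(np.linalg.norm(weights - x, axis=1))
    h = np.exp(-0.5 * ((grid - grid[s]) / max(sigma, 1e-9)) ** 2)
    weights += eps * h[:, None] * (x - weights)    # winner and neighbors move

rng = np.random.default_rng(1)
data = rng.random((20_000, 2))                     # uniform 2-D inputs
w_scl = rng.random((25, 2))
w_som = w_scl.copy()                               # identical initialization
grid = np.arange(25, dtype=float)                  # 1-D grid for simplicity

for t, x in enumerate(data):
    eps = 0.5 * (1 - t / len(data))                # decreasing learning rate
    sigma = 3.0 * (1 - t / len(data))              # shrinking neighborhood -> plain VQ
    scl_step(w_scl, x, eps)
    som_step(w_som, grid, x, eps, sigma)

def distortion(w, X):
    """Mean squared quantization error of codebook w on sample X."""
    d = np.linalg.norm(X[:, None, :] - w[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

print(distortion(w_scl, data[:2000]), distortion(w_som, data[:2000]))
```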

    Magnification Control in Winner Relaxing Neural Gas

    An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner relaxing approach for the neural gas network. Originally, winner relaxing learning is a slight modification of the self-organizing map learning rule that allows for adjustment of the magnification behavior by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner relaxing term only adds computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches. Comment: 14 pages, 2 figures
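    A rough sketch of a winner relaxing neural gas update step, assuming the general shape of the rule only: standard NG moves every prototype according to its distance rank, and the winner relaxing variant adds to the winner an extra term built from the other units' updates, scaled by a control parameter mu that tunes the magnification (mu = 0 recovering plain NG). The precise relaxing term, the admissible range of mu, and the diagonal term studied in the paper are not reproduced here.

```python
import numpy as np

def wrng_step(weights, x, eps=0.05, lam=2.0, mu=0.25):
    """One Winner Relaxing Neural Gas update (form of the relaxing term assumed)."""
    dists = np.linalg.norm(weights - x, axis=1)
    ranks = np.argsort(np.argsort(dists))          # rank 0 = winner
    h = np.exp(-ranks / lam)                       # rank-based neighborhood

    s = ranks.argmin()                             # winner index
    others = np.delete(np.arange(len(weights)), s)
    relax = -eps * mu * np.sum(h[others, None] * (x - weights[others]), axis=0)

    weights += eps * h[:, None] * (x - weights)    # plain NG update for all units
    weights[s] += relax                            # winner relaxing correction

rng = np.random.default_rng(2)
W = rng.random((15, 2))                            # prototypes in the unit square
for x in rng.random((5_000, 2)):
    wrng_step(W, x)
```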