
    Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner

    The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law, which can be obtained analytically in the one-dimensional case. The Winner-Enhancing case achieves a magnification exponent of one and therefore provides optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computation per learning step and is conveniently easy to implement.
    Comment: 6 pages, 5 figures. For an extended version refer to cond-mat/0208414 (Neural Computation 17, 996-1009)
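
    For orientation, the magnification law invoked here is the standard power-law relation between the asymptotic density of the codebook and the input density; a short LaTeX statement of it, with the information-theoretic reading of exponent one (textbook material, not this paper's specific derivation):

        % stationary density of codebook vectors at w, for input density p:
        \rho(w) \;\propto\; p(w)^{\alpha}
        % \alpha = 1: every unit wins with equal probability, so the entropy
        % of the winner index -- and with it the mutual information between
        % input and output -- is maximal; hence "optimal mapping" above.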

    Winner-Relaxing Self-Organizing Maps

    A new family of self-organizing maps, the Winner-Relaxing Kohonen Algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; in the one-dimensional case the generalized version allows the magnification to be steered over the wide range of exponents from 1/2 to 1, and thus provides optimal mapping in the sense of information theory. The Winner-Relaxing Algorithm requires minimal extra computation per learning step and is conveniently easy to implement.
    Comment: 14 pages (6 figs included). To appear in Neural Computation
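
    As a concrete reading of the learning rule, here is a minimal one-dimensional NumPy sketch. It assumes the usual winner-relaxing form, in which the plain Kohonen step is augmented by an extra term, weighted by a control parameter lam, that acts on the winner only; lam = 0 recovers the standard SOM, and the parameter names and values are illustrative, not taken from the paper:

        import numpy as np

        def wrk_step(w, x, eps=0.05, sigma=1.0, lam=0.5):
            """One winner-relaxing learning step on a 1-D chain of units w."""
            s = int(np.argmin(np.abs(w - x)))            # best-matching unit
            r = np.arange(len(w))
            h = np.exp(-0.5 * ((r - s) / sigma) ** 2)    # neighborhood function
            dw = eps * h * (x - w)                       # ordinary Kohonen term
            # relaxing term: the winner is additionally shifted against the
            # summed pull exerted on all other units
            dw[s] -= eps * lam * float(np.sum(np.delete(h * (x - w), s)))
            return w + dw

        rng = np.random.default_rng(0)
        w = np.sort(rng.uniform(0.0, 1.0, 50))           # initial 1-D codebook
        for _ in range(20000):
            w = wrk_step(w, rng.uniform(0.0, 1.0) ** 2)  # non-uniform input density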

    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. In doing so, the concave-convex learning approach for SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
    Comment: 24 pages, 4 figures
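
    Of the three mechanisms named above, concave-convex learning is perhaps the easiest to picture: the pull toward the stimulus is warped by a power of the distance. The sketch below is an illustrative rendering of that idea only, with hypothetical parameter names; the paper's exact rule may differ in detail:

        import numpy as np

        def concave_convex_step(w, x, eps=0.05, sigma=1.0, xi=1.5):
            """Illustrative concave/convex-style SOM step (1-D units): the
            update is proportional to |x - w|**xi rather than linear in the
            distance; xi > 1 (convex) emphasizes distant units, xi < 1
            (concave) nearby ones, which shifts the magnification."""
            s = int(np.argmin(np.abs(w - x)))
            r = np.arange(len(w))
            h = np.exp(-0.5 * ((r - s) / sigma) ** 2)
            return w + eps * h * np.sign(x - w) * np.abs(x - w) ** xi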

    Magnification Control in Winner Relaxing Neural Gas

    An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner-relaxing approach for the neural gas network. Originally, winner-relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner-relaxing term adds only computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as is required in other magnification control approaches.
    Comment: 14 pages, 2 figures
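
    A sketch of how the transfer to neural gas might look, assuming the same shape of relaxing term as in the SOM case: the grid neighborhood is replaced by the rank-based neighborhood of neural gas, and an extra term (weight mu, a name chosen here) acts on the winner. Purely illustrative of the structure, not the paper's exact rule:

        import numpy as np

        def wrng_step(w, x, eps=0.05, lam_ng=2.0, mu=0.5):
            """Illustrative winner-relaxing neural gas step (1-D units).
            mu = 0 recovers plain neural gas."""
            d = np.abs(w - x)
            ranks = np.argsort(np.argsort(d))        # rank 0 = winner
            h = np.exp(-ranks / lam_ng)              # rank-based neighborhood
            dw = eps * h * (x - w)
            s = int(np.argmin(d))
            dw[s] -= eps * mu * float(np.sum(np.delete(h * (x - w), s)))
            return w + dw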

    Investigation of topographical stability of the concave and convex Self-Organizing Map variant

    We investigate, in a systematic numerical study, the parameter dependence of the stability of the Kohonen Self-Organizing Map and of the Zheng and Greenleaf concave and convex learning variant with respect to different input distributions and input and output dimensions.

    A class of competitive learning models which avoids neuron underutilization problem


    Magnitude Sensitive Competitive Neural Networks

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are competitive learning algorithms that include a magnitude term as a modulation factor in the distance used for the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the desired zones, defined by the magnitude, are represented in high detail. These networks are compared with other vector quantization algorithms on various examples of interpolation, color reduction, surface modeling, classification, and several simple demonstration tasks. In addition, a new image compression algorithm, MSIC (Magnitude Sensitive Image Compression), is introduced; it builds on the aforementioned algorithms and compresses an image at a variable rate according to a user-defined magnitude. The results show that the new MSCNNs are more versatile than other competitive learning algorithms and clearly improve on them in vector quantization when the data are weighted by a magnitude indicating the "interest" of each sample.
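
    A hypothetical minimal sketch of the magnitude-modulated competition described above; the combination rule, the direction of the handicap, and all names are illustrative assumptions rather than the thesis's actual MSCNN equations:

        import numpy as np

        def mscn_step(W, m, x, mag_x, eps=0.05, eps_m=0.05):
            """Illustrative magnitude-sensitive competitive step.
            W: centroids (one per row); m: per-unit magnitude estimates;
            mag_x: user-defined magnitude ("interest") of sample x.
            Units sitting in high-magnitude regions are handicapped in the
            competition, so further units drift in and refine those zones."""
            d = np.linalg.norm(W - x, axis=1)
            j = int(np.argmin(m * d))          # magnitude-modulated competition
            W[j] += eps * (x - W[j])           # move winner toward sample
            m[j] += eps_m * (mag_x - m[j])     # track local magnitude
            return W, m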

    Financial time series analysis with competitive neural networks

    The main objective of this Master's thesis is the modelling of non-stationary time series data. While classical statistical models attempt to correct non-stationary data through differencing and de-trending, I attempt to create localized clusters of stationary time series data through the use of the self-organizing map algorithm. While numerous techniques have been developed that model time series using the self-organizing map, I attempt to build a mathematical framework that justifies its use in the forecasting of financial time series. Additionally, I compare existing forecasting methods that use the SOM with those for which a framework has been developed but which have not been applied in a forecasting context. I then compare these methods with the well-known ARIMA method of time series forecasting. The second objective of this thesis is to demonstrate the self-organizing map's ability to cluster data vectors, as it was originally developed as a neural network approach to clustering. Specifically, I demonstrate its clustering abilities on limit order book data and present various methods of visualizing its output.
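
    A minimal sketch of the windowing-plus-SOM idea described above (the window length, grid size, and toy random-walk series are assumptions for illustration, not choices from the thesis):

        import numpy as np

        def train_som(X, n_units=12, epochs=30, eps=0.5, sigma=2.0, seed=0):
            """Tiny 1-D SOM; X holds one windowed series segment per row."""
            rng = np.random.default_rng(seed)
            W = X[rng.integers(0, len(X), n_units)].astype(float)
            grid = np.arange(n_units)
            for t in range(epochs):
                decay = 1.0 - t / epochs                 # shrink rate and radius
                for x in X[rng.permutation(len(X))]:
                    s = np.argmin(np.linalg.norm(W - x, axis=1))
                    h = np.exp(-0.5 * ((grid - s) / (sigma * decay + 1e-3)) ** 2)
                    W += (eps * decay) * h[:, None] * (x - W)
            return W

        # toy non-stationary series -> overlapping windows -> cluster labels
        series = np.cumsum(np.random.default_rng(1).normal(size=2000))
        L = 20                                           # assumed window length
        X = np.array([series[i:i + L] for i in range(len(series) - L)])
        X -= X.mean(axis=1, keepdims=True)               # de-mean each window
        W = train_som(X)
        labels = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])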

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    Papers presented at the Neural Networks and Fuzzy Logic Workshop, sponsored by the National Aeronautics and Space Administration, cosponsored by the University of Houston-Clear Lake, and held 1-3 June 1992 at the Lyndon B. Johnson Space Center in Houston, Texas, are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and applications; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.