
    Self-Organizing Map with False Neighbor Degree between Neurons for Effective Self-Organization

    In the real world, it is not always true that the next-door house is close to my house; in other words, "neighbors" are not always "true neighbors". In this study, we propose a new Self-Organizing Map (SOM) algorithm, SOM with False Neighbor degree between neurons (called FN-SOM). The behavior of FN-SOM is investigated through learning on various input data. We confirm that FN-SOM obtains a map that reflects the distribution of the input data more effectively than the conventional SOM and Growing Grid.
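
    Since the false-neighbor degree itself is specific to the paper, a minimal sketch of the conventional SOM that FN-SOM modifies may help; the grid size, learning-rate schedule, and neighborhood schedule below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal conventional SOM (the baseline that FN-SOM modifies).

    Each grid neuron holds a weight vector; the winner and its grid
    neighbors move toward each input, with shrinking radius and rate.
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of every neuron, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1 - t)                 # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-3    # decaying neighborhood radius
            # Best-matching unit: neuron whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood on the grid around the BMU.
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
```

    FN-SOM would additionally penalize "false neighbors" in the grid-distance term; that modification is not reproduced here.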

    A sequential algorithm for training the SOM prototypes based on higher-order recursive equations

    A novel training algorithm is proposed for the formation of Self-Organizing Maps (SOM). In the proposed model, the weights are updated incrementally by using a higher-order difference equation, which implements a low-pass digital filter. By suitably designing the filter, it is possible to improve selected features of the self-organization process with respect to the basic SOM. Moreover, new visualization tools can be derived from this model for cluster visualization and for monitoring the quality of the map.
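
    The idea of replacing the usual first-order update with a designed filter can be illustrated with a second-order recursion; the coefficients below are ordinary Butterworth-style low-pass values chosen for illustration, not the paper's design:

```python
import numpy as np

def filtered_weight_path(x_seq, b=(0.0675, 0.1349, 0.0675), a=(1.0, -1.1430, 0.4128)):
    """Weight trajectory generated by a second-order low-pass IIR filter:

        w[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*w[n-1] - a2*w[n-2]

    The classic SOM rule w[n] = w[n-1] + alpha*(x[n] - w[n-1]) is the
    first-order special case b = (alpha,), a = (1, -(1 - alpha)); the
    paper generalizes it to higher-order recursions of this kind.
    """
    x_prev = [0.0, 0.0]   # x[n-1], x[n-2]
    w_prev = [0.0, 0.0]   # w[n-1], w[n-2]
    path = []
    for x in x_seq:
        w = (b[0] * x + b[1] * x_prev[0] + b[2] * x_prev[1]
             - a[1] * w_prev[0] - a[2] * w_prev[1])
        x_prev = [x, x_prev[0]]
        w_prev = [w, w_prev[0]]
        path.append(w)
    return np.array(path)
```

    With unit DC gain (sum of b equals sum of a), a constant "winning input" still drives the weight to that input, as in the basic SOM, but the transient can be shaped by the filter design.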

    Combining Multiple Clusterings via Crowd Agreement Estimation and Multi-Granularity Link Analysis

    The clustering ensemble technique aims to combine multiple clusterings into a probably better and more robust clustering, and it has been receiving increasing attention in recent years. Existing clustering ensemble approaches have two main limitations. First, many approaches cannot weight the base clusterings without access to the original data and can be affected significantly by low-quality, or even ill, clusterings. Second, they generally focus on the instance level or the cluster level of the ensemble system and fail to integrate multi-granularity cues into a unified model. To address these two limitations, this paper proposes to solve the clustering ensemble problem via crowd agreement estimation and multi-granularity link analysis. We present the normalized crowd agreement index (NCAI) to evaluate the quality of base clusterings in an unsupervised manner and thus weight the base clusterings in accordance with their clustering validity. To explore the relationship between clusters, the source-aware connected triple (SACT) similarity is introduced with regard to their common neighbors and the source reliability. Based on NCAI and multi-granularity information collected among base clusterings, clusters, and data instances, we further propose two novel consensus functions, termed weighted evidence accumulation clustering (WEAC) and graph partitioning with multi-granularity link analysis (GP-MGLA), respectively. Experiments are conducted on eight real-world datasets; the results demonstrate the effectiveness and robustness of the proposed methods.
    Comment: The MATLAB source code of this work is available at: https://www.researchgate.net/publication/28197031
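
    A minimal sketch of the weighted evidence-accumulation idea follows; the NCAI quality weighting itself is not reproduced, so the per-clustering weights are taken as given inputs:

```python
import numpy as np

def weighted_coassociation(labelings, weights):
    """WEAC-style weighted co-association matrix (sketch): entry (i, j) is
    the weight-normalized fraction of base clusterings that place points
    i and j in the same cluster. In the paper the weights would come from
    NCAI; here they are simply supplied by the caller."""
    n = len(labelings[0])
    ca = np.zeros((n, n))
    for lab, w in zip(labelings, weights):
        lab = np.asarray(lab)
        ca += w * (lab[:, None] == lab[None, :])
    return ca / float(sum(weights))
```

    The resulting matrix can then be fed to any similarity-based consensus step (e.g. hierarchical clustering on 1 - ca), which is where the paper's graph-partitioning variant would differ.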

    A Multimodel Approach for Complex Systems Modeling based on Classification Algorithms

    In this paper, a new multimodel approach for complex systems modeling based on classification algorithms is presented. It first requires the determination of the model base: the number of models is selected via a neural network with rival penalized competitive learning (RPCL), and the operating clusters are identified by using the fuzzy K-means algorithm. The obtained results are then exploited for the parametric identification of the models. The second step consists of validating the proposed model base by using an adequate validity-computation method. Two examples presented in this paper show the efficiency of the proposed approach.
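
    The fuzzy K-means step used to identify the operating clusters can be sketched as standard fuzzy c-means; the fuzzifier m and iteration count below are illustrative assumptions:

```python
import numpy as np

def fuzzy_kmeans(data, k=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means sketch: each point receives a membership degree in
    every cluster, and centers are membership-weighted means. Alternating
    these two updates minimizes the fuzzy within-cluster objective."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), k))
    u /= u.sum(axis=1, keepdims=True)          # random initial memberships
    centers = None
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # u_ij proportional to d_ij^(-2/(m-1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

    In the multimodel setting, each cluster would define one operating regime whose local model is then identified parametrically.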

    Institutions and Policies Shaping Industrial Development: An Introductory Note

    In this work, meant as an introduction to the contributions of the task force on Industrial Policies and Development, Initiative for Policy Dialogue, Columbia University, New York, we discuss the role of institutions and policies in the process of development. We begin by arguing that the "market failure" language can be misleading when assessing the necessity of public policies, in that it evaluates them against a yardstick that hardly any observed market set-up meets. Much nearer to the empirical evidence, we argue that even where a market form of governance of economic interactions prevails, those interactions are embedded in a rich thread of non-market institutions. This applies in general, and particularly so to the production and use of information and technological knowledge. Building on the fundamental institutional embeddedness of such processes of technological learning in both developed and catching-up countries, we try to identify some quite robust policy ingredients that have historically accompanied the co-evolution of technological capabilities, forms of corporate organisation and incentive structures. All experiences of successfully catching up with, and sometimes overtaking, the incumbent economic leaders – starting with the USA vis-à-vis Britain – have involved "institution building" and policy measures affecting technological imitation, the organisation of industries, trade patterns and intellectual property rights. This, we argue, is likely to apply today as well, also in the context of a "globalised" world economy.
    Keywords: institutions, development, industrial policies, technological catching-up, trade specialisations

    Paradoxes of Digital Antitrust


    Panoramic Background Modeling for PTZ Cameras with Competitive Learning Neural Networks

    The construction of a model of the background of a scene remains a challenging task in video surveillance systems, in particular for moving cameras. This work presents a novel approach for constructing a panoramic background model based on competitive learning neural networks and a subsequent piecewise linear interpolation by Delaunay triangulation. The approach can handle arbitrary camera directions and zooms for a Pan-Tilt-Zoom (PTZ) camera-based surveillance system. After testing the proposed approach on several indoor sequences, the results demonstrate that the proposed method is effective and suitable for real-time video surveillance applications.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
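
    The competitive-learning stage can be sketched as follows; the Delaunay interpolation step is omitted, and the node count and learning rate are illustrative assumptions. Each node tracks both its (pan, tilt) position and a running average of the pixel values it wins:

```python
import numpy as np

def learn_background_nodes(samples_xy, samples_val, n_nodes=16,
                           lr=0.1, epochs=20, seed=0):
    """Competitive-learning stage of a panoramic background model (sketch):
    nodes compete for observed (pan, tilt) positions; the winner moves
    toward the sample and updates a running average of the pixel value
    seen there. The resulting sparse mesh is what the paper interpolates
    with a Delaunay triangulation (not reproduced here)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(samples_xy), n_nodes, replace=False)
    nodes = np.array(samples_xy, float)[idx]   # initialize from samples
    vals = np.array(samples_val, float)[idx]
    for _ in range(epochs):
        for xy, v in zip(samples_xy, samples_val):
            w = int(np.argmin(np.linalg.norm(nodes - xy, axis=1)))
            nodes[w] += lr * (np.asarray(xy, float) - nodes[w])  # winner moves toward sample
            vals[w] += lr * (v - vals[w])                        # and tracks its value
    return nodes, vals
```

    Piecewise linear interpolation over the triangulated nodes then yields a background estimate at any (pan, tilt) coordinate the camera can reach.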

    v. 81, issue 17, April 2, 2014


    Magnitude Sensitive Competitive Neural Networks

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are a set of competitive learning algorithms that include a magnitude term as a modulation factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the desired regions, defined by the magnitude, are represented in high detail. These networks are compared with other vector quantization algorithms on various examples of interpolation, color reduction, surface modeling, classification, and several simple demonstration examples. In addition, a new image compression algorithm, MSIC (Magnitude Sensitive Image Compression), is introduced; it builds on the aforementioned algorithms and achieves image compression that varies according to a user-defined magnitude. The results show that the new MSCNNs are more versatile than other competitive learning algorithms and clearly outperform them in vector quantization when the data are weighted by a magnitude indicating the "interest" of each sample.
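
    A minimal sketch of distance modulation by a magnitude term, assuming an FSCL-style rule in which each unit's distance is scaled by the magnitude it has already absorbed (the thesis' exact modulation may differ):

```python
import numpy as np

def magnitude_sensitive_cl(data, magnitude, n_units=8, lr=0.05, epochs=40, seed=0):
    """Magnitude-sensitive competitive learning (sketch): the competition
    distance is multiplied by each unit's accumulated magnitude, so units
    tend to equalize magnitude rather than sample counts and therefore
    crowd into high-magnitude regions of the input space."""
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    mag_acc = np.ones(n_units)                 # accumulated magnitude per unit
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x, m = data[i], magnitude[i]
            d = np.linalg.norm(units - x, axis=1) * mag_acc  # modulated distance
            w = int(np.argmin(d))
            units[w] += lr * (x - units[w])    # winner moves toward the sample
            mag_acc[w] += m                    # and absorbs its magnitude
    return units
```

    With a uniform magnitude this reduces to frequency-sensitive competitive learning; a user-defined magnitude instead concentrates centroids where the "interest" is high.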

    DDoS Attacks Detection Method Using Feature Importance and Support Vector Machine

    In this study, the author aims to show that the combination of feature importance and a support vector machine is relevant to detecting distributed denial-of-service (DDoS) attacks. A DDoS attack is a very dangerous type of attack because it causes enormous losses to the victim server. The study begins by determining network traffic features, followed by collecting datasets. The author uses 1000 randomly selected network traffic records for feature selection and modeling. In the next stage, feature importance is used to select relevant features as inputs for modeling with the support vector machine algorithm. The modeling results were evaluated using a confusion matrix. Based on this evaluation, recall is 93 percent, precision is 95 percent, and accuracy is 92 percent. The author also compares the proposed method with several other methods; the comparison shows that the proposed method performs at a fairly good level in detecting distributed denial-of-service attacks. We realize this result was influenced by many factors, so further studies are needed in the future.
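
    A hedged sketch of such a pipeline with scikit-learn, using tree-based feature importances for the selection step (the study's exact importance method, SVM kernel, and dataset are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ddos_detection_pipeline(X, y, seed=0):
    """Feature-importance + SVM sketch: a random forest ranks traffic
    features, SelectFromModel keeps the important ones, and an RBF SVM
    classifies attack vs normal; scores come from a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    selector = SelectFromModel(
        RandomForestClassifier(n_estimators=100, random_state=seed))
    clf = make_pipeline(selector, StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return {"recall": recall_score(y_te, pred),
            "precision": precision_score(y_te, pred),
            "accuracy": accuracy_score(y_te, pred)}
```

    On real traffic data the reported scores would depend heavily on the chosen features and class balance, as the abstract itself notes.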