2,231 research outputs found

    Self-Organizing Map with False Neighbor Degree between Neurons for Effective Self-Organization

    Get PDF
    In the real world, it is not always true that the next-door house is close to my own; in other words, "neighbors" are not always "true neighbors". In this study, we propose a new Self-Organizing Map (SOM) algorithm, the SOM with False Neighbor degree between neurons (FN-SOM). The behavior of FN-SOM is investigated through learning on various input data. We confirm that FN-SOM obtains a map that reflects the distribution of the input data more effectively than the conventional SOM and the Growing Grid.
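
    As a point of reference for the FN-SOM variant described above, the sketch below implements only the conventional SOM update that FN-SOM modifies; the false-neighbor degree itself is not reproduced, and the grid size, learning rate and neighborhood schedule are illustrative assumptions rather than values from the paper.

        # Minimal conventional SOM update (the baseline that FN-SOM modifies).
        # Grid size, learning rate and neighborhood radius are illustrative only.
        import numpy as np

        def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
            rng = np.random.default_rng(seed)
            h, w = grid
            dim = data.shape[1]
            weights = rng.random((h, w, dim))
            coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
            n_steps = epochs * len(data)
            step = 0
            for _ in range(epochs):
                for x in rng.permutation(data):
                    # best-matching unit
                    d = np.linalg.norm(weights - x, axis=-1)
                    bmu = np.unravel_index(np.argmin(d), (h, w))
                    # exponentially decaying learning rate and neighborhood width
                    frac = step / n_steps
                    lr = lr0 * np.exp(-frac)
                    sigma = sigma0 * np.exp(-frac)
                    # Gaussian neighborhood on the grid; FN-SOM would additionally
                    # penalize "false neighbors" at this point via a false-neighbor degree.
                    grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                    hck = np.exp(-grid_dist2 / (2 * sigma ** 2))
                    weights += lr * hck[..., None] * (x - weights)
                    step += 1
            return weights

        # usage sketch
        # prototypes = train_som(np.random.rand(500, 3))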

    Configuring the radial basis function neural network

    Get PDF
    The most important factor in configuring an optimum radial basis function (RBF) network is the training of the neural units in the hidden layer. Many algorithms have been proposed to train the hidden units, e.g., competitive learning (CL), but CL suffers from producing dead units. The other major factor, which was ignored in the past, is the appropriate selection of the number of neural units in the hidden layer. The frequency sensitive competitive learning (FSCL) algorithm was proposed to alleviate the dead-unit problem, but it does not address the latter problem. The rival penalized competitive learning (RPCL) algorithm is an improved version of the FSCL algorithm that does solve the latter problem, provided that a larger number of initial neural units is assigned; it is, however, very sensitive to the learning rate. This thesis proposes a new algorithm, the scattering-based clustering (SBC) algorithm, in which the FSCL algorithm is first applied to let the neural units converge. Scatter matrices of the clustered data are then used to compute the sphericity for each k, where k is the number of clusters, and the optimum number of neural units for the hidden layer is obtained from it. The properties of the scatter matrices and of the sphericity are discussed analytically. A comparative study of different learning algorithms for training the RBF network shows that the SBC algorithm outperforms the others.
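
    The frequency sensitive competitive learning (FSCL) step used as the first stage of the SBC algorithm can be sketched as follows. This is a minimal rendering of the standard FSCL rule (winner selection biased by win counts), assuming Euclidean distance and an illustrative learning rate; the scatter-matrix and sphericity computations of SBC are not reproduced here.

        # Frequency sensitive competitive learning (FSCL): each unit's distance is
        # scaled by how often it has won, which discourages dead units.
        # Unit count, epochs and learning rate are illustrative assumptions.
        import numpy as np

        def fscl(data, n_units=8, epochs=30, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            centers = data[rng.choice(len(data), n_units, replace=False)].copy()
            wins = np.ones(n_units)          # win counts (start at 1 to avoid zeros)
            for _ in range(epochs):
                for x in rng.permutation(data):
                    d = np.linalg.norm(centers - x, axis=1)
                    j = np.argmin(wins * d)  # frequency-sensitive winner selection
                    centers[j] += lr * (x - centers[j])
                    wins[j] += 1
            return centers

        # usage sketch
        # centers = fscl(np.random.rand(300, 2))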

    A DSRPCL-SVM Approach to Informative Gene Analysis

    Get PDF
    Microarray-data-based tumor diagnosis is a very interesting topic in bioinformatics. One of the key problems is the discovery and analysis of the informative genes of a tumor. Although there are many elaborate approaches to this problem, it is still difficult to select a reasonable set of informative genes for tumor diagnosis from microarray data alone. In this paper, we classify the genes expressed in microarray data into a number of clusters via the distance sensitive rival penalized competitive learning (DSRPCL) algorithm and then detect the informative gene cluster or set with the help of a support vector machine (SVM). Moreover, the critical or most powerful informative genes can be found through further classification and detection on the obtained informative gene clusters. Experiments on the colon, leukemia, and breast cancer datasets demonstrate that the proposed DSRPCL-SVM approach leads to a reasonable selection of informative genes for tumor diagnosis.
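
    A minimal sketch of the two-stage pipeline described above is given below, with scikit-learn's KMeans standing in for DSRPCL (which is not available in standard libraries) and a linear SVM scoring each gene cluster by cross-validated accuracy; the data shapes, cluster count and scoring choice are assumptions, not the authors' settings.

        # Two-stage pipeline sketch: cluster genes, then score each gene cluster
        # with an SVM classifier. KMeans stands in for DSRPCL here.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def informative_gene_clusters(X, y, n_clusters=10, seed=0):
            # X: samples x genes expression matrix, y: tumor / normal labels
            gene_labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X.T)
            scores = {}
            for c in range(n_clusters):
                genes = np.where(gene_labels == c)[0]
                # cross-validated SVM accuracy using only this gene cluster
                scores[c] = cross_val_score(SVC(kernel="linear"), X[:, genes], y, cv=5).mean()
            return gene_labels, scores

        # usage sketch
        # labels, scores = informative_gene_clusters(X, y)
        # best_cluster = max(scores, key=scores.get)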

    A sequential algorithm for training the SOM prototypes based on higher-order recursive equations

    Get PDF
    A novel training algorithm is proposed for the formation of Self-Organizing Maps (SOM). In the proposed model, the weights are updated incrementally by means of a higher-order difference equation that implements a low-pass digital filter. By suitably designing the filter, it is possible to improve selected features of the self-organization process with respect to the basic SOM. Moreover, new visualization tools can be derived from this model for cluster visualization and for monitoring the quality of the map.
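
    The idea of replacing the first-order SOM update with a higher-order low-pass recursion can be illustrated as below. The filter here is an arbitrary second-order IIR with unit DC gain and the neighborhood gating is simplified; this is a sketch of the general mechanism, not the filter design proposed in the paper.

        # Sketch: drive each prototype through a second-order low-pass difference
        # equation instead of the first-order SOM update. Filter coefficients,
        # grid size and schedules are illustrative assumptions only.
        import numpy as np

        def filtered_som(data, grid=(8, 8), epochs=10, sigma0=2.0, a=(1.6, -0.64), seed=0):
            a1, a2 = a
            b = 1.0 - a1 - a2                  # unit DC gain for the recursion
            rng = np.random.default_rng(seed)
            h, w = grid
            dim = data.shape[1]
            wts = rng.random((h, w, dim))      # w[t]
            wts_prev = wts.copy()              # w[t-1]
            coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
            for e in range(epochs):
                sigma = sigma0 * np.exp(-e / epochs)
                for x in rng.permutation(data):
                    d = np.linalg.norm(wts - x, axis=-1)
                    bmu = np.unravel_index(np.argmin(d), (h, w))
                    g2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                    hck = np.exp(-g2 / (2 * sigma ** 2))[..., None]
                    # second-order recursion toward the input, gated by the
                    # neighborhood function (filter state handling is simplified)
                    new = a1 * wts + a2 * wts_prev + b * x
                    updated = hck * new + (1 - hck) * wts
                    wts_prev, wts = wts, updated
            return wts

        # usage sketch
        # prototypes = filtered_som(np.random.rand(400, 3))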

    Combining Multiple Clusterings via Crowd Agreement Estimation and Multi-Granularity Link Analysis

    Full text link
    The clustering ensemble technique aims to combine multiple clusterings into a probably better and more robust clustering and has been receiving increasing attention in recent years. The existing clustering ensemble approaches have two main limitations. First, many approaches lack the ability to weight the base clusterings without access to the original data and can be affected significantly by low-quality, or even ill, clusterings. Second, they generally focus on the instance level or the cluster level in the ensemble system and fail to integrate multi-granularity cues into a unified model. To address these two limitations, this paper proposes to solve the clustering ensemble problem via crowd agreement estimation and multi-granularity link analysis. We present the normalized crowd agreement index (NCAI) to evaluate the quality of base clusterings in an unsupervised manner and thus weight the base clusterings in accordance with their clustering validity. To explore the relationship between clusters, the source aware connected triple (SACT) similarity is introduced with regard to their common neighbors and the source reliability. Based on NCAI and the multi-granularity information collected among base clusterings, clusters, and data instances, we further propose two novel consensus functions, termed weighted evidence accumulation clustering (WEAC) and graph partitioning with multi-granularity link analysis (GP-MGLA), respectively. Experiments are conducted on eight real-world datasets; the results demonstrate the effectiveness and robustness of the proposed methods.
    Comment: The MATLAB source code of this work is available at: https://www.researchgate.net/publication/28197031
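
    One plausible reading of the weighted evidence accumulation step is sketched below: each base clustering votes into a co-association matrix with a weight standing in for its NCAI score, and the weighted matrix is then cut by average-linkage clustering. The weights and the final cut are illustrative assumptions; the authors' own MATLAB implementation is linked above.

        # Weighted co-association (evidence accumulation) consensus sketch.
        # The per-clustering quality weights stand in for NCAI and are assumed given.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        def weighted_evidence_accumulation(base_labels, weights, n_clusters):
            # base_labels: list of 1-D label arrays (one per base clustering)
            # weights: one non-negative quality weight per base clustering
            n = len(base_labels[0])
            co = np.zeros((n, n))
            for labels, w in zip(base_labels, weights):
                same = (labels[:, None] == labels[None, :]).astype(float)
                co += w * same                  # weighted vote: same cluster or not
            co /= np.sum(weights)
            dist = 1.0 - co
            np.fill_diagonal(dist, 0.0)
            Z = linkage(squareform(dist, checks=False), method="average")
            return fcluster(Z, t=n_clusters, criterion="maxclust")

        # usage sketch
        # final = weighted_evidence_accumulation([labels1, labels2, labels3],
        #                                        weights=[0.9, 0.5, 0.7], n_clusters=3)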

    Institutions and Policies Shaping Industrial Development: An Introductory Note

    Get PDF
    In this work, meant as an introduction to the contributions of the task force on Industrial Policies and Development, Initiative for Policy Dialogue, Columbia University, New York, we discuss the role of institutions and policies in the process of development. We begin by arguing how misleading the "market failure" language can be when assessing the necessity of public policies, in that it evaluates them against a yardstick that hardly any observed market set-up meets. Closer to the empirical evidence, we argue that even where a prevailing market form of governance of economic interactions is encountered, those interactions are embedded in a rich thread of non-market institutions. This applies in general, and particularly so with respect to the production and use of information and technological knowledge. Building on the fundamental institutional embeddedness of such processes of technological learning in both developed and catching-up countries, we try to identify some quite robust policy ingredients which have historically accompanied the co-evolution of technological capabilities, forms of corporate organisation and incentive structures. All experiences of successful catching-up, and sometimes of overtaking the incumbent economic leaders – starting with the USA vis-à-vis Britain – have involved “institution building” and policy measures affecting technological imitation, the organisation of industries, trade patterns and intellectual property rights. This, we argue, is likely to apply today too, also in the context of a “globalised” world economy.
    Keywords: institutions, development, industrial policies, technological catching-up, trade specialisations.

    A Multimodel Approach for Complex Systems Modeling based on Classification Algorithms

    Get PDF
    In this paper, a new multimodel approach for complex systems modeling based on classification algorithms is presented. It first requires the determination of the model base: the number of models is selected via a neural network with rival penalized competitive learning (RPCL), and the operating clusters are identified using the fuzzy K-means algorithm. The obtained results are then exploited for the parametric identification of the models. The second step consists of validating the proposed model base using an adequate method of validity computation. Two examples are presented which show the efficiency of the proposed approach.
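
    The clustering stage of such a multimodel scheme can be sketched as below with a plain fuzzy K-means partition followed by one weighted least-squares linear model per operating cluster; the number of clusters is assumed to be supplied externally (e.g. by RPCL, which is not reproduced), and the linear model structure is an illustrative choice.

        # Fuzzy K-means (fuzzy c-means) partition plus one local linear model per
        # cluster, fitted by membership-weighted least squares.
        import numpy as np

        def fuzzy_kmeans(X, c, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        def local_linear_models(X, y, U):
            # one weighted least-squares model y ~ [X, 1] @ theta per cluster
            Phi = np.hstack([X, np.ones((len(X), 1))])
            models = []
            for k in range(U.shape[1]):
                sw = np.sqrt(U[:, k])                    # weight rows by sqrt membership
                theta, *_ = np.linalg.lstsq(Phi * sw[:, None], y * sw, rcond=None)
                models.append(theta)
            return models

        # usage sketch
        # centers, U = fuzzy_kmeans(X, c=3)
        # models = local_linear_models(X, y, U)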

    Paradoxes of Digital Antitrust

    Get PDF

    Panoramic Background Modeling for PTZ Cameras with Competitive Learning Neural Networks

    Get PDF
    Constructing a model of the background of a scene remains a challenging task in video surveillance systems, in particular for moving cameras. This work presents a novel approach for constructing a panoramic background model based on competitive learning neural networks and a subsequent piecewise linear interpolation by Delaunay triangulation. The approach can handle arbitrary camera directions and zooms for a Pan-Tilt-Zoom (PTZ) camera-based surveillance system. After testing the proposed approach on several indoor sequences, the results demonstrate that the proposed method is effective and suitable for real-time video surveillance applications.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
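
    The interpolation stage can be sketched as below: a set of competitive-learning prototypes (panoramic coordinates plus a background colour estimate) is spread over the full panorama with SciPy's Delaunay-based piecewise linear interpolator. The prototype data in the usage example is a random placeholder, and the panorama size is an assumption.

        # Piecewise linear interpolation of background prototypes over a panorama
        # using SciPy's Delaunay-based LinearNDInterpolator.
        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        def panoramic_background(prototype_xy, prototype_rgb, pano_shape):
            h, w = pano_shape
            interp = LinearNDInterpolator(prototype_xy, prototype_rgb, fill_value=0.0)
            gx, gy = np.meshgrid(np.arange(w), np.arange(h))
            pano = interp(np.column_stack([gx.ravel(), gy.ravel()]))
            return pano.reshape(h, w, -1)

        # usage sketch with placeholder prototypes
        # xy = np.random.rand(200, 2) * [640, 480]
        # rgb = np.random.rand(200, 3)
        # bg = panoramic_background(xy, rgb, (480, 640))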

    v. 81, issue 17, April 2, 2014

    Get PDF
    • …