70 research outputs found

    Configuring the radial basis function neural network

    The most important factor in configuring an optimal radial basis function (RBF) network is the training of the neural units in the hidden layer. Many algorithms, e.g., competitive learning (CL), have been proposed to train the hidden units, but CL suffers from producing dead units. Another major factor, largely ignored in the past, is the appropriate selection of the number of neural units in the hidden layer. The frequency sensitive competitive learning (FSCL) algorithm was proposed to alleviate the dead-unit problem, but it does not address the latter one. The rival penalized competitive learning (RPCL) algorithm, an improved version of FSCL, does solve the latter problem provided that a sufficiently large number of initial neural units is assigned; it is, however, very sensitive to the learning rate. This thesis proposes a new algorithm, the scattering-based clustering (SBC) algorithm, in which the FSCL algorithm is first applied to let the neural units converge. Scatter matrices of the clustered data are then used to compute the sphericity for each k, where k is the number of clusters, and the optimum number of hidden-layer neural units is obtained from it. The properties of the scatter matrices and the sphericity measure are analytically discussed. A comparative study of different learning algorithms for training the RBF network shows that the SBC algorithm outperforms the others.
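    As a rough illustration of the scatter-matrix step, the sketch below computes the within-cluster and between-cluster scatter matrices for a given partition and scores it with a trace ratio. The abstract does not give the thesis's exact sphericity formula, so the `sphericity` function here is an illustrative stand-in for that criterion, not the SBC definition itself.

```python
import numpy as np

def scatter_matrices(X, labels, k):
    """Within-cluster (S_W) and between-cluster (S_B) scatter matrices."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for j in range(k):
        Xj = X[labels == j]
        if len(Xj) == 0:
            continue  # empty cluster contributes nothing
        mj = Xj.mean(axis=0)
        S_W += (Xj - mj).T @ (Xj - mj)
        diff = (mj - mean).reshape(-1, 1)
        S_B += len(Xj) * (diff @ diff.T)
    return S_W, S_B

def sphericity(X, labels, k):
    # Illustrative stand-in: ratio of between- to within-cluster scatter.
    # The thesis's actual sphericity measure may differ.
    S_W, S_B = scatter_matrices(X, labels, k)
    return np.trace(S_B) / np.trace(S_W)
```

    In the SBC setting, FSCL would first produce the partition for each candidate k, and a score of this kind would then be compared across k to pick the number of hidden units.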

    Investigations on number selection for finite mixture models and clustering analysis.

    by Yiu Ming Cheung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 92-99). Contents:
    Abstract
    Acknowledgement
    Chapter 1: Introduction
        1.1 Background
            1.1.1 Bayesian YING-YANG Learning Theory and Number Selection Criterion
        1.2 General Motivation
        1.3 Contributions of the Thesis
        1.4 Other Related Contributions
            1.4.1 A Fast Number Detection Approach
            1.4.2 Application of RPCL to Prediction Models for Time Series Forecasting
            1.4.3 Publications
        1.5 Outline of the Thesis
    Chapter 2: Open Problem: How Many Clusters?
    Chapter 3: Bayesian YING-YANG Learning Theory: Review and Experiments
        3.1 Brief Review of Bayesian YING-YANG Learning Theory
        3.2 Number Selection Criterion
        3.3 Experiments
            3.3.1 Experimental Purposes and Data Sets
            3.3.2 Experimental Results
    Chapter 4: Conditions of Number Selection Criterion
        4.1 Alternative Condition of Number Selection Criterion
        4.2 Conditions of Special Hard-cut Criterion
            4.2.1 Criterion Conditions in Two-Gaussian Case
            4.2.2 Criterion Conditions in k*-Gaussian Case
        4.3 Experimental Results
            4.3.1 Purpose and Data Sets
            4.3.2 Experimental Results
        4.4 Discussion
    Chapter 5: Application of Number Selection Criterion to Data Classification
        5.1 Unsupervised Classification
            5.1.1 Experiments
        5.2 Supervised Classification
            5.2.1 RBF Network
            5.2.2 Experiments
    Chapter 6: Conclusion and Future Work
        6.1 Conclusion
        6.2 Future Work
    Bibliography
    Appendix A: A Number Detection Approach for Equal-and-Isotropic Variance Clusters
        A.1 Number Detection Approach
        A.2 Demonstration Experiments
        A.3 Remarks
    Appendix B: RBF Network with RPCL Approach
        B.1 Introduction
        B.2 Normalized RBF Net and Extended Normalized RBF Net
        B.3 Demonstration
        B.4 Remarks
    Appendix C: Adaptive RPCL-CLP Model for Financial Forecasting
        C.1 Introduction
        C.2 Extraction of Input Patterns and Outputs
        C.3 RPCL-CLP Model
            C.3.1 RPCL-CLP Architecture
            C.3.2 Training Stage of RPCL-CLP
            C.3.3 Prediction Stage of RPCL-CLP
        C.4 Adaptive RPCL-CLP Model
            C.4.1 Data Pre-and-Post Processing
            C.4.2 Architecture and Implementation
        C.5 Computer Experiments
            C.5.1 Data Sets and Experimental Purpose
            C.5.2 Experimental Results
        C.6 Conclusion
    Appendix D: Publication List
        D.1 Publication List

    Estimation of biochemical variables using a quantum-behaved particle swarm optimization (QPSO)-trained radial basis function neural network: A case study of the fermentation process of L-glutamic acid

    Due to the difficulty of measuring biochemical variables in fermentation processes, a soft-sensing model based on a radial basis function (RBF) neural network was established to estimate these variables. To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for network training. Experimental results on an L-glutamic acid fermentation process showed that the established estimator could predict variables such as the concentrations of glucose, biomass, and glutamic acid with higher accuracy than an estimator trained by the widely used orthogonal least squares (OLS) method. Owing to its global convergence, QPSO generated a more suitable set of network parameters than OLS. The QPSO-RBF estimator is therefore better suited to the control and fault diagnosis of the fermentation process and, consequently, increases the fermentation yield.
    Key words: soft-sensing model, quantum-behaved particle swarm optimization algorithm, neural network
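    To show what is actually being trained, here is a minimal sketch of the RBF estimator's forward pass with Gaussian basis functions. In the paper the centers, widths, and output weights would be found by QPSO (or by OLS for the baseline); here they are simply supplied as arguments, since the abstract does not give the QPSO update details.

```python
import numpy as np

def rbf_predict(X, centers, widths, weights):
    """Forward pass of an RBF network: a weighted sum of Gaussian
    basis functions. centers (m, d), widths (m,), weights (m,) are
    the parameters a trainer such as QPSO or OLS would optimize."""
    # dists[i, j] = squared Euclidean distance from x_i to center j
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-dists / (2.0 * widths ** 2))  # hidden-layer activations
    return phi @ weights
```

    A population-based trainer like QPSO would evaluate candidate parameter vectors by the prediction error of this forward pass on the fermentation data and move the swarm toward the best candidates.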

    Accurate Wavelet Neural Network for Efficient Controlling of an Active Magnetic Bearing System


    Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection


    Combining Multiple Clusterings via Crowd Agreement Estimation and Multi-Granularity Link Analysis

    The clustering ensemble technique aims to combine multiple clusterings into a probably better and more robust clustering, and has been receiving increasing attention in recent years. Existing clustering ensemble approaches have two main limitations. First, many approaches lack the ability to weight the base clusterings without access to the original data and can be affected significantly by low-quality, or even ill, clusterings. Second, they generally focus on the instance level or the cluster level of the ensemble system and fail to integrate multi-granularity cues into a unified model. To address these two limitations, this paper proposes to solve the clustering ensemble problem via crowd agreement estimation and multi-granularity link analysis. We present the normalized crowd agreement index (NCAI) to evaluate the quality of base clusterings in an unsupervised manner, and thus weight the base clusterings in accordance with their clustering validity. To explore the relationship between clusters, the source-aware connected triple (SACT) similarity is introduced, which takes into account both common neighbors and source reliability. Based on NCAI and multi-granularity information collected among base clusterings, clusters, and data instances, we further propose two novel consensus functions, termed weighted evidence accumulation clustering (WEAC) and graph partitioning with multi-granularity link analysis (GP-MGLA), respectively. Experiments conducted on eight real-world datasets demonstrate the effectiveness and robustness of the proposed methods.
    Comment: The MATLAB source code of this work is available at: https://www.researchgate.net/publication/28197031
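    The evidence-accumulation side of an approach like WEAC can be sketched as a weighted co-association matrix over the base clusterings. The NCAI weighting scheme itself is not detailed in the abstract, so in this sketch the per-clustering weights are taken as given inputs rather than computed.

```python
import numpy as np

def weighted_coassociation(partitions, weights):
    """Weighted co-association matrix: entry (i, j) is the weighted
    fraction of base clusterings that place points i and j in the
    same cluster. In WEAC the weights would come from a quality
    index such as NCAI; here they are simply supplied."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels, w in zip(partitions, weights):
        labels = np.asarray(labels)
        # boolean co-membership matrix for this base clustering
        C += w * (labels[:, None] == labels[None, :])
    return C / sum(weights)
```

    A consensus clustering is then obtained by running a linkage or graph-partitioning step on this matrix, treating each entry as a pairwise similarity.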

    Forced Information for Information-Theoretic Competitive Learning


    A Clustering Method for Data in Cylindrical Coordinates

    We propose a new clustering method, based on k-means, for data in cylindrical coordinates. Methods in the k-means family maximize an objective function defined in terms of a similarity measure, so a new similarity is needed to obtain a clustering method for such data. In this study, we first derive this similarity by assuming a particular probabilistic model. A data point in cylindrical coordinates has a radius, an azimuth, and a height. We assume that the azimuth is sampled from a von Mises distribution, and that the radius and the height are independently generated from isotropic Gaussian distributions. The new similarity is derived from the log-likelihood of the assumed probability distribution. Our experiments demonstrate that the proposed method, using the new similarity, can appropriately partition synthetic data defined in cylindrical coordinates. Furthermore, we apply the proposed method to color image quantization and show that it successfully quantizes a color image with respect to the hue element.
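    Under the stated model, dropping the terms of the log-likelihood that do not depend on the cluster center (such as the von Mises normalizer log(2πI₀(κ))) leaves a similarity of the form κ·cos(θ − μ_θ) − (r − μ_r)²/(2σ²) − (h − μ_h)²/(2σ²). The sketch below assumes shared κ and σ across clusters; the paper's exact parameterization may differ.

```python
import numpy as np

def cylindrical_similarity(point, center, kappa=1.0, sigma=1.0):
    """Log-likelihood-based similarity for a cylindrical point
    (r, theta, h) against a cluster center (mu_r, mu_theta, mu_h):
    von Mises term for the azimuth, Gaussian terms for radius and
    height, with center-independent constants dropped."""
    r, theta, h = point
    mu_r, mu_theta, mu_h = center
    return (kappa * np.cos(theta - mu_theta)
            - (r - mu_r) ** 2 / (2 * sigma ** 2)
            - (h - mu_h) ** 2 / (2 * sigma ** 2))

def assign(points, centers, **kw):
    """k-means-style assignment: each point goes to the most similar center."""
    return [int(np.argmax([cylindrical_similarity(p, c, **kw)
                           for c in centers]))
            for p in points]
```

    Alternating this assignment step with a center-update step (circular mean for the azimuth, arithmetic means for radius and height) gives the k-means-style procedure the abstract describes.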