1,628 research outputs found

    A Dynamic Parameter Tuning Algorithm For Rbf Neural Networks

    The objective of this thesis is to present a methodology for fine-tuning the parameters of radial basis function (RBF) neural networks and thereby improving their performance. Three main parameters affect the performance of an RBF network: the centers and widths of the RBF nodes and the weights associated with each node. A gridded-center and orthogonal search algorithm is used to determine the initial parameters of the RBF network, and a parameter tuning algorithm has been developed to optimize these parameters and improve the network's performance. When necessary, a recursive least squares solution may be used to add new nodes to the network architecture. To study the behavior of the proposed network, six months of real data at fifteen-minute intervals were collected from a North American pulp and paper company and used to evaluate the proposed network in approximating the relationship between the optical properties of base sheet paper and the process variables. The experiments were very successful, with Pearson correlation coefficients of up to 0.98 obtained for the approximation
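
    A minimal sketch of the basic machinery this abstract describes: a Gaussian RBF layer whose output weights are fitted by regularized least squares. The centers, widths, synthetic data and correlation check below are illustrative placeholders, not the thesis's gridded-center/orthogonal-search initialization or its tuning algorithm.

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """Gaussian RBF responses: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * w_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def fit_weights(X, y, centers, widths, reg=1e-8):
    """Output weights by regularized linear least squares."""
    Phi = rbf_design_matrix(X, centers, widths)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)

# Illustrative usage on synthetic stand-ins for process variables and paper property
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
centers = X[rng.choice(len(X), 20, replace=False)]   # placeholder center selection
widths = np.full(20, 0.5)                             # placeholder widths
w = fit_weights(X, y, centers, widths)
pred = rbf_design_matrix(X, centers, widths) @ w
print("Pearson correlation:", np.corrcoef(pred, y)[0, 1])
```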

    Identification of nonlinear time-varying systems using an online sliding-window and common model structure selection (CMSS) approach with applications to EEG

    The identification of nonlinear time-varying systems using linear-in-the-parameters models is investigated. A new, efficient Common Model Structure Selection (CMSS) algorithm is proposed to select a common model structure. The key procedure is as follows: first, generate K+1 data sets using an online sliding-window method (the first K data sets are used for training and the (K+1)th for testing); then detect significant model terms to form a common model structure that fits all K training data sets using the proposed CMSS approach; finally, estimate and refine the time-varying parameters of the identified common-structured model using a Recursive Least Squares (RLS) parameter estimation method. The new method can effectively detect and adaptively track the transient variation of nonstationary signals. Two examples, including an application to an EEG data set, are presented to illustrate the effectiveness of the new approach
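
    The parameter-refinement step described above relies on recursive least squares over windows of data. The sketch below shows a standard RLS recursion with a forgetting factor for tracking time-varying parameters of a linear-in-the-parameters model; the regressors, forgetting factor and drifting-parameter example are assumptions for illustration, not the paper's CMSS term-selection procedure.

```python
import numpy as np

def rls_track(Phi, y, lam=0.98, delta=100.0):
    """RLS with forgetting factor lam for y_t = phi_t^T theta_t + e_t."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                        # large initial covariance
    history = []
    for phi, yt in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (yt - phi @ theta)   # prediction-error correction
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update with forgetting
        history.append(theta.copy())
    return np.array(history)

# Illustrative usage: track slowly drifting parameters (assumed regressors and drift)
rng = np.random.default_rng(1)
t = np.arange(500)
true_theta = np.column_stack([1 + 0.002 * t, np.sin(0.01 * t)])
Phi = rng.normal(size=(500, 2))
y = (Phi * true_theta).sum(axis=1) + 0.05 * rng.normal(size=500)
estimates = rls_track(Phi, y)
print("final estimate:", estimates[-1], "true:", true_theta[-1])
```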

    Adaptive structure radial basis function network model for processes with operating region migration

    An adaptive structure radial basis function (RBF) network model is proposed in this paper to model nonlinear processes with operating region migration. The recursive orthogonal least squares algorithm is adopted to select new centers on-line as well as to train the network weights. Based on the R matrix of the orthogonal decomposition, an initial center bank is formed and updated in each sample period. A new learning strategy is proposed to gain information from new data for network structure adaptation, and a center grouping algorithm is developed to divide the centers into active and non-active groups so that a smaller structure is maintained in the final network model. The proposed RBF model is evaluated and compared with two fixed-structure RBF networks by modelling a nonlinear time-varying numerical example. The results demonstrate that the proposed adaptive structure model effectively adapts its structure to fit the operating region of the process while remaining more compact, and it significantly outperforms the two fixed-structure RBF models
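
    The centre-selection idea builds on orthogonal least squares. As a rough illustration, the sketch below runs a batch forward selection of candidate centres by error reduction ratio; the paper's recursive, R-matrix-based online variant and the centre-bank/grouping logic are not reproduced, and the candidate matrix here is synthetic.

```python
import numpy as np

def select_centers_ols(Phi, y, n_select):
    """Greedy forward selection of columns of Phi by error reduction ratio (ERR)."""
    selected, Q = [], []
    for _ in range(n_select):
        best_err, best_j, best_q = -1.0, None, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            q = Phi[:, j].copy()
            for qk in Q:                                  # orthogonalize against chosen terms
                q -= (qk @ Phi[:, j]) / (qk @ qk) * qk
            err = (q @ y) ** 2 / ((q @ q) * (y @ y) + 1e-12)
            if err > best_err:
                best_err, best_j, best_q = err, j, q
        selected.append(best_j)
        Q.append(best_q)
    return selected

# Illustrative usage: only two candidate columns actually drive the output
rng = np.random.default_rng(2)
Phi = rng.normal(size=(200, 30))                          # candidate-center responses
y = 2.0 * Phi[:, 3] - Phi[:, 17] + 0.1 * rng.normal(size=200)
print(select_centers_ols(Phi, y, n_select=2))             # columns 3 and 17 expected
```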

    Learning enhancement of radial basis function network with particle swarm optimization

    The back propagation (BP) algorithm is the most common technique for Artificial Neural Network (ANN) learning, including the Radial Basis Function Network. However, its major disadvantages are a relatively slow convergence rate and a tendency to become trapped in local minima. To overcome these problems, Particle Swarm Optimization (PSO) has been implemented to enhance ANN learning and to improve network performance in terms of convergence rate and accuracy. In a Back Propagation Radial Basis Function Network (BP-RBFN) there are many elements to be considered, including the number of input nodes, hidden nodes and output nodes, the learning rate, bias, minimum error and activation/transfer functions, all of which affect the speed of RBF Network learning. In this study, PSO is incorporated into the RBF Network to enhance its learning performance. Two algorithms, Back Propagation of Radial Basis Function Network (BP-RBFN) and Particle Swarm Optimization of Radial Basis Function Network (PSO-RBFN), have been developed for error optimization to seek and generate better network performance. The results show that PSO-RBFN gives promising outputs, with a faster convergence rate and better classification than BP-RBFN
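
    A minimal sketch of the PSO ingredient: a standard global-best particle swarm minimizing an arbitrary loss over a weight vector, which could stand in for an RBF network's training error. The inertia and acceleration coefficients and the toy quadratic objective are textbook placeholders, not the study's PSO-RBFN settings.

```python
import numpy as np

def pso_minimize(loss_fn, dim, n_particles=30, iters=200, seed=0):
    """Global-best PSO minimizing loss_fn over a dim-dimensional weight vector."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions (candidate weights)
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()
    pbest_val = np.array([loss_fn(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss_fn(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Illustrative usage: a toy quadratic standing in for a network's training error
target = np.array([0.3, -0.7, 1.2])
w, err = pso_minimize(lambda w: np.sum((w - target) ** 2), dim=3)
print(w, err)
```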

    Robust fault diagnosis for an exothermic semi-batch polymerization reactor under open-loop

    An independent radial basis function neural network (RBFNN) is developed and employed for online diagnosis of actuator and sensor faults. In this research, a robust fault detection and isolation scheme is developed for an open-loop exothermic semi-batch polymerization reactor described by the Chylla–Haase benchmark, and the independent RBFNN diagnoses faults online while the system is subject to uncertainties and disturbances. Two techniques for employing RBFNNs are investigated: first, an independent neural network (NN) is used to model the reactor dynamics and generate residuals; second, an additional RBFNN is developed as a classifier to isolate faults from the generated residuals. Three sensor faults and one actuator fault are simulated on the reactor, and many practical disturbances and system uncertainties, such as monomer feed rate, fouling factor, impurity factor, ambient temperature and measurement noise, are modelled. Simulation results are presented to illustrate the effectiveness and robustness of the proposed method
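
    The first technique, model-based residual generation, reduces to comparing measured outputs with an independent model's predictions and flagging large deviations. The sketch below illustrates that idea with a toy linear process model, a fixed threshold and an injected sensor bias; the paper's RBFNN model, Chylla–Haase dynamics and residual-classifier stage are not reproduced.

```python
import numpy as np

def detect_faults(u, y_measured, model_predict, threshold):
    """Residual = measurement - model prediction; flag residuals above threshold."""
    residuals = y_measured - model_predict(u)
    flags = np.abs(residuals) > threshold
    return residuals, flags

# Illustrative usage with a toy linear "process model" and an injected sensor bias
rng = np.random.default_rng(3)
u = rng.normal(size=(100, 2))                                   # assumed process inputs
model_predict = lambda u: u @ np.array([[1.0, 0.2],
                                        [0.5, -0.3]])           # assumed nominal model
y = model_predict(u) + 0.01 * rng.normal(size=(100, 2))
y[50:, 1] += 0.5                                                # sensor 2 bias fault from sample 50
residuals, flags = detect_faults(u, y, model_predict, threshold=0.1)
print("sensor 2 flagged from sample 50 on:", flags[50:, 1].all())
```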

    Dynamic non-linear system modelling using wavelet-based soft computing techniques

    The enormous number of complex systems creates a need for high-level, cost-efficient modelling structures for operators and system designers. Model-based approaches offer a challenging but rewarding way to integrate a priori knowledge into the procedure, and soft computing based models in particular can be applied successfully to highly nonlinear problems. A further reason for adopting so-called soft computing model-based techniques is that in real-world cases only partial, uncertain and/or inaccurate data are often available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling nonlinear dynamical systems in real-world problems, together with possible twists and novelties aimed at more accurate and less complex modelling structures.

    Initially, an on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently fuzzy rules, is circumvented by applying an adaptive structure. The growth of a particular fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology.

    Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced, with increased accuracy and reduced computational cost as the primary targets of the proposed novelties. The tools used for these challenges are the replacement of the synaptic weights with Linear Combination Weights (LCW) and a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS). The two models share the same HLA scheme but differ in structure: the second contains an additional multiplication layer, and its hidden layer comprises several sub-WNNs, one for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real nonlinear dynamic system, the survival curves of Listeria monocytogenes in Ultra-High Temperature (UHT) whole milk, and is consolidated with a comprehensive comparison against other suggested schemes.

    At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering capability of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only to extract useful knowledge from data by building accurate regressions, but also to identify complex systems. The FWNN structure is built on fuzzy rules with wavelet functions in the consequent parts of the rules. To improve the function approximation accuracy and generalization capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight and membership parameters: an Extended Kalman Filter (EKF) is employed for wavelet parameter adjustment, together with Weighted Least Squares (WLS), which is dedicated to fine-tuning the Linear Combination Weights.

    The results of a real-world application to Short-Term Load Forecasting (STLF) further reinforce the plausibility of the above techniques
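
    A rough sketch of the hybrid-learning idea for a single-input Morlet WNN: linear combination weights are re-solved by least squares (standing in for the RLS/WLS step), while dilations and translations take a small descent step, here via crude numerical gradients rather than the thesis's analytic HLA. All network sizes and data are illustrative assumptions.

```python
import numpy as np

def morlet(z):
    """Real Morlet mother wavelet."""
    return np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)

def hidden(x, a, b):
    """Wavelet-node outputs for dilations a and translations b (single input x)."""
    return morlet((x[:, None] - b[None, :]) / a[None, :])

def hybrid_step(x, y, a, b, lr=0.01, eps=1e-5):
    """One hybrid update: LS solve for linear weights, numerical descent on a and b."""
    H = hidden(x, a, b)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)             # linear combination weights
    mse = np.mean((H @ w - y) ** 2)
    for params in (a, b):                                  # crude coordinate-wise descent
        for j in range(len(params)):
            base = np.mean((hidden(x, a, b) @ w - y) ** 2)
            params[j] += eps
            pert = np.mean((hidden(x, a, b) @ w - y) ** 2)
            params[j] -= eps + lr * (pert - base) / eps    # undo probe, then descend
    return w, mse

# Illustrative usage on a synthetic single-input target
rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 300)
y = np.sin(3 * x) * np.exp(-0.5 * x ** 2)
a, b = np.ones(6), np.linspace(-2, 2, 6)
for _ in range(50):
    w, mse = hybrid_step(x, y, a, b)
print("training MSE:", mse)
```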

    Efficient least angle regression for identification of linear-in-the-parameters models

    Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which achieves low prediction variance by sacrificing some model bias in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works in a completely recursive manner: the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step, and the model coefficients are computed only when the algorithm finishes, so that direct matrix inversions are avoided. A detailed computational complexity analysis indicates that the proposed algorithm is significantly more efficient than the original approach, in which the well-known Cholesky decomposition is used to solve least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm
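
    For orientation, the sketch below runs the standard batch LARS path (via scikit-learn's lars_path) on a synthetic linear-in-the-parameters problem to show how terms enter the active set in order of relevance; the paper's recursive, inversion-free reformulation of this selection process is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic linear-in-the-parameters problem: only three of ten candidate terms matter
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 10))                 # candidate model terms (regressors)
true_w = np.zeros(10)
true_w[[1, 4, 7]] = [2.0, -1.5, 0.8]
y = X @ true_w + 0.05 * rng.normal(size=300)

alphas, active, coefs = lars_path(X, y, method="lar")
print("order in which terms enter the model:", active)   # relevant terms enter first
```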