
    Wavelet Neural Networks: A Practical Guide

    Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework for applying WNs in various applications. The following subjects are thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods and, finally, methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. The proposed framework was tested on two simulated cases, on a chaotic time series described by the Mackey-Glass equation, and on three real datasets: daily temperatures in Berlin, daily wind speeds in New York, and breast cancer classification. Our results show that the proposed algorithms produce stable and robust results, indicating that the proposed framework can be applied in various applications.
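
    As a rough illustration of the kind of single-hidden-layer wavelet network such a framework targets, the Python sketch below builds a small WN with Mexican-hat wavelons and fits the linear output weights by least squares. The wavelet choice, the grid initialization of translations and dilations, and all names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def mexican_hat(u):
            # Mexican hat (second derivative of a Gaussian) mother wavelet
            return (1.0 - u**2) * np.exp(-0.5 * u**2)

        class WaveletNetwork:
            """Single-hidden-layer WN: y = sum_j w_j * psi((x - t_j) / s_j) + bias."""
            def __init__(self, translations, dilations):
                self.t = np.asarray(translations, dtype=float)   # one translation per wavelon
                self.s = np.asarray(dilations, dtype=float)      # one dilation per wavelon
                self.w = None                                    # output weights (incl. bias)

            def _design(self, x):
                # Hidden-layer outputs for 1-D input x, plus a constant column for the bias
                u = (x[:, None] - self.t[None, :]) / self.s[None, :]
                return np.hstack([mexican_hat(u), np.ones((len(x), 1))])

            def fit(self, x, y):
                # Linear output weights via ordinary least squares
                self.w, *_ = np.linalg.lstsq(self._design(x), y, rcond=None)
                return self

            def predict(self, x):
                return self._design(x) @ self.w

        # Toy usage: approximate a noisy sine with wavelons placed on a regular grid
        rng = np.random.default_rng(0)
        x = np.linspace(-3, 3, 200)
        y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)
        wn = WaveletNetwork(translations=np.linspace(-3, 3, 12), dilations=np.full(12, 0.5))
        wn.fit(x, y)
        print("training RMSE:", np.sqrt(np.mean((wn.predict(x) - y) ** 2)))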

    The performance of two mother wavelets in function approximation.

    Research into Wavelet Neural Networks has been conducted on numerous occasions in the past. Based on previous research, it was noted that the Wavelet Neural Network can reliably be used for function approximation. That research included comparisons between the mother functions of the Wavelet Neural Network, namely the Mexican Hat, Gaussian wavelet and Morlet functions. The performance of these functions was estimated using the Normalised Square Root Mean Squared Error (NSRMSE) performance index. In this paper, however, the Root Mean Squared Error (RMSE) is used as the performance index. In previous research, two of the best mother wavelets for function approximation were determined to be the Gaussian wavelet and Morlet functions. An in-depth investigation into these two functions was conducted in order to determine which of them performs better under which conditions. One-dimensional and two-dimensional simulations were carried out using both functions. We conclude that the Gaussian wavelet is well suited to approximating functions on the domain [−1, 1], while the Morlet function is better suited to larger domains. All simulations were done using Matlab V6.5.
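
    For reference, a minimal Python sketch of the two mother wavelets compared above and of the RMSE criterion; the exact parameterisations (the derivative-of-Gaussian form and the Morlet centre frequency of 5) are common textbook choices, not necessarily those used in the paper.

        import numpy as np

        def gaussian_wavelet(u):
            # First derivative of a Gaussian, a common "Gaussian wavelet" choice (assumption)
            return -u * np.exp(-0.5 * u**2)

        def morlet(u, w0=5.0):
            # Real-valued Morlet wavelet with centre frequency w0 (common default, assumed here)
            return np.cos(w0 * u) * np.exp(-0.5 * u**2)

        def rmse(y_true, y_pred):
            return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

        # Evaluate both mother wavelets on [-1, 1] and compare them with the RMSE index
        u = np.linspace(-1.0, 1.0, 101)
        g, m = gaussian_wavelet(u), morlet(u)
        print("RMSE between the two mother wavelets on [-1, 1]:", rmse(g, m))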

    Lattice dynamical wavelet neural networks implemented using particle swarm optimisation for spatio-temporal system identification

    Starting from the basic concept of coupled map lattices, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNN), is introduced for spatio-temporal system identification by combining an efficient wavelet representation with a coupled map lattice model. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimisation (PSO) algorithm, is proposed for augmenting the network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the orthogonal projection pursuit algorithm, significant wavelet-neurons are adaptively and successively recruited into the network, and the adjustable parameters of the associated wavelet-neurons are optimised using a particle swarm optimiser. The network model obtained in the first stage may, however, be redundant. In the second stage, an orthogonal least squares (OLS) algorithm is applied to refine and improve the initially trained network by removing redundant wavelet-neurons. The proposed two-stage hybrid training procedure generally produces a parsimonious network model, together with a ranked list of wavelet-neurons ordered according to the capability of each neuron to represent the total variance in the system output signal. Two spatio-temporal system identification examples are presented to demonstrate the performance of the proposed new modelling framework.
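
    A minimal Python sketch of how a particle swarm optimiser might tune the translation and dilation of a single candidate wavelet-neuron to minimise residual error; the swarm size, inertia and acceleration constants are standard textbook values, and the fitness function is illustrative, not the paper's OPP criterion.

        import numpy as np

        def pso_minimise(fitness, dim, n_particles=20, iters=100, bounds=(-3.0, 3.0),
                         inertia=0.7, c1=1.5, c2=1.5, seed=0):
            """Plain global-best PSO; returns the best parameter vector found."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
            v = np.zeros_like(x)                                 # velocities
            pbest = x.copy()
            pbest_f = np.array([fitness(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([fitness(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest

        # Toy fitness: residual sum of squares of one Mexican-hat wavelet-neuron on 1-D data
        t_grid = np.linspace(-2, 2, 200)
        target = np.exp(-t_grid**2) * np.cos(3 * t_grid)

        def neuron_rss(params):
            trans, log_dil = params
            u = (t_grid - trans) / np.exp(log_dil)               # dilation kept positive via exp
            psi = (1 - u**2) * np.exp(-0.5 * u**2)
            w_opt = psi @ target / (psi @ psi + 1e-12)           # best linear weight in closed form
            return float(np.sum((target - w_opt * psi) ** 2))

        best = pso_minimise(neuron_rss, dim=2)
        print("best (translation, log-dilation):", best)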

    A unified wavelet-based modelling framework for non-linear system identification: the WANARX model structure

    A new unified modelling framework based on the superposition of additive submodels, functional components, and wavelet decompositions is proposed for non-linear system identification. A non-linear model, which is often represented using a multivariate non-linear function, is initially decomposed into a number of functional components via the well-known analysis of variance (ANOVA) expansion, which can be viewed as a special form of the NARX (non-linear autoregressive with exogenous inputs) model for representing dynamic input–output systems. By expanding each functional component using wavelet decompositions, including the regular lattice frame decomposition, wavelet series and multiresolution wavelet decompositions, the multivariate non-linear model can then be converted into a linear-in-the-parameters problem, which can be solved using least-squares type methods. An efficient model structure determination approach based upon a forward orthogonal least squares (OLS) algorithm, which involves a stepwise orthogonalization of the regressors and a forward selection of the relevant model terms based on the error reduction ratio (ERR), is employed to solve the linear-in-the-parameters problem in the present study. The new modelling structure is referred to as a wavelet-based ANOVA decomposition of the NARX model, or simply the WANARX model, and can be applied to represent high-order and high-dimensional non-linear systems.
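
    The following Python sketch illustrates, under simplifying assumptions, the forward OLS / ERR term-selection idea described above: candidate regressors are orthogonalised against the terms already selected (classical Gram-Schmidt here, rather than the paper's exact scheme), and at each step the term with the largest error reduction ratio is added until a crude threshold is met.

        import numpy as np

        def forward_ols_err(P, y, err_tol=0.01, max_terms=None):
            """Greedy forward selection of columns of P by error reduction ratio (ERR).

            P : (N, M) candidate regressor matrix, y : (N,) output.
            Returns the list of selected column indices and their ERR values.
            """
            N, M = P.shape
            max_terms = max_terms or M
            yy = float(y @ y)
            selected, errs = [], []
            Q = []                                    # orthogonalised regressors selected so far
            for _ in range(max_terms):
                best_idx, best_err, best_w = None, 0.0, None
                for j in range(M):
                    if j in selected:
                        continue
                    w = P[:, j].astype(float)
                    for q in Q:                       # orthogonalise against already selected terms
                        w -= (q @ w) / (q @ q) * q
                    denom = float(w @ w)
                    if denom < 1e-12:
                        continue
                    err_j = (float(w @ y) ** 2) / (denom * yy)   # error reduction ratio of term j
                    if err_j > best_err:
                        best_idx, best_err, best_w = j, err_j, w
                if best_idx is None or best_err < err_tol:
                    break
                selected.append(best_idx)
                errs.append(best_err)
                Q.append(best_w)
            return selected, errs

        # Toy usage: the output depends on two of five candidate terms
        rng = np.random.default_rng(1)
        P = rng.standard_normal((200, 5))
        y = 2.0 * P[:, 1] - 1.5 * P[:, 3] + 0.05 * rng.standard_normal(200)
        print(forward_ols_err(P, y))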

    A new class of wavelet networks for nonlinear system identification

    A new class of wavelet networks (WNs) is proposed for nonlinear system identification. In the new networks, the model structure for a high-dimensional system is chosen to be a superimposition of a number of functions with fewer variables. By expanding each function using truncated wavelet decompositions, the multivariate nonlinear networks can be converted into linear-in-the-parameter regressions, which can be solved using least-squares type methods. An efficient model term selection approach based upon a forward orthogonal least squares (OLS) algorithm and the error reduction ratio (ERR) is applied to solve the linear-in-the-parameters problem in the present study. The main advantage of the new WN is that it exploits the attractive features of multiscale wavelet decompositions and the capability of traditional neural networks. By adopting the analysis of variance (ANOVA) expansion, WNs can now handle nonlinear identification problems in high dimensions.
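
    As a toy illustration of the ANOVA-style superimposition described above, the sketch below models a two-input system as a sum of two univariate components, expands each in a small truncated Mexican-hat wavelet dictionary, and fits the resulting linear-in-the-parameters model by ordinary least squares; the dictionary size and wavelet choice are assumptions for illustration.

        import numpy as np

        def mexican_hat(u):
            return (1.0 - u**2) * np.exp(-0.5 * u**2)

        def univariate_dictionary(x, centres, scale=0.5):
            # Truncated wavelet expansion of a single variable: one column per centre
            return mexican_hat((x[:, None] - centres[None, :]) / scale)

        # Two-input target that is additively separable, plus noise
        rng = np.random.default_rng(2)
        x1, x2 = rng.uniform(-2, 2, (2, 300))
        y = np.sin(2 * x1) + 0.5 * x2**2 + 0.05 * rng.standard_normal(300)

        # ANOVA-style model: y ~ f1(x1) + f2(x2), each fi expanded in a wavelet dictionary
        centres = np.linspace(-2, 2, 8)
        Phi = np.hstack([univariate_dictionary(x1, centres),
                         univariate_dictionary(x2, centres),
                         np.ones((300, 1))])           # constant term
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("residual RMS:", np.sqrt(np.mean((Phi @ theta - y) ** 2)))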

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications usually treat Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
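
    To make the object of the survey concrete, here is a minimal Python sketch of a Gaussian RBFN for regression, with centres chosen by random sampling of the training inputs and the output weights fitted by least squares; the centre-selection rule and width heuristic are common defaults, not taken from the survey.

        import numpy as np

        class GaussianRBFN:
            def __init__(self, n_centres=10, seed=0):
                self.n_centres = n_centres
                self.rng = np.random.default_rng(seed)

            def _phi(self, X):
                # Gaussian basis activations for every input/centre pair
                d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * self.width**2))

            def fit(self, X, y):
                idx = self.rng.choice(len(X), self.n_centres, replace=False)
                self.centres = X[idx]
                # Width heuristic: mean pairwise distance between centres (assumption)
                dists = np.sqrt(((self.centres[:, None] - self.centres[None, :]) ** 2).sum(-1))
                self.width = dists[dists > 0].mean()
                Phi = np.hstack([self._phi(X), np.ones((len(X), 1))])
                self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
                return self

            def predict(self, X):
                Phi = np.hstack([self._phi(X), np.ones((len(X), 1))])
                return Phi @ self.w

        # Toy usage on a 1-D regression problem
        rng = np.random.default_rng(3)
        X = rng.uniform(-3, 3, (200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
        model = GaussianRBFN(n_centres=15).fit(X, y)
        print("training RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))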

    Phase Harmonic Correlations and Convolutional Neural Networks

    A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time series and images, convolutional networks often learn a first layer of filters which are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors which are local in space, frequency and phase. The non-linear phase filter becomes a multiplicative operator over phase harmonics computed with a Fourier transform along the phase. We prove that it defines a bi-Lipschitz and invertible representation. The correlations of phase harmonic coefficients characterise coherent structures from their phase dependence across frequencies. For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from few phase harmonic correlations, which provide a compressive representation.
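
    A small Python sketch of the phase-harmonic idea mentioned above: for a complex coefficient z = |z| e^{i phi}, the k-th phase harmonic keeps the modulus and multiplies the phase by k, and correlations are taken between harmonics of coefficients at different frequencies. The Gaussian band-pass filters and the choice of harmonic orders below are illustrative assumptions, not the paper's construction.

        import numpy as np

        def phase_harmonic(z, k):
            # k-th phase harmonic: keep the modulus, multiply the phase by k
            return np.abs(z) * np.exp(1j * k * np.angle(z))

        def phase_harmonic_correlation(z1, z2, k1, k2):
            # Empirical correlation between phase harmonics of two coefficient arrays
            h1, h2 = phase_harmonic(z1, k1), phase_harmonic(z2, k2)
            return np.mean(h1 * np.conj(h2))

        # Toy signal and two crude band-pass filters applied in the Fourier domain
        N = 1024
        t = np.arange(N)
        x = np.cos(2 * np.pi * 0.05 * t) + 0.5 * np.cos(2 * np.pi * 0.10 * t + 0.7)

        freqs = np.fft.fftfreq(N)
        def bandpass(x, f0, bw=0.01):
            # Complex band-pass filtering around frequency f0 (illustrative Gaussian window)
            window = np.exp(-0.5 * ((freqs - f0) / bw) ** 2)
            return np.fft.ifft(np.fft.fft(x) * window)

        z1 = bandpass(x, 0.05)   # coefficients around the fundamental
        z2 = bandpass(x, 0.10)   # coefficients around the second harmonic
        # Correlate the 2nd phase harmonic of z1 with the 1st of z2: non-zero when phases align
        print(phase_harmonic_correlation(z1, z2, k1=2, k2=1))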