
    Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network

    In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction, x(t+6). The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via the study of the chaotic behaviour obtained from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. Thus, we studied the impact of noise for several cases with a white noise level σ_N from 0.01 to 0.1.
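    The abstract does not give the PSO settings, so as an illustration of the optimizer behind the hybrid ANN+PSO scheme, here is a minimal particle swarm sketch with standard (assumed) inertia and acceleration coefficients, shown minimizing a toy objective rather than actual ANN weights:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer. The coefficients w, c1, c2 are
    common textbook defaults, not values taken from the paper."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

    In the paper's setting, `f` would be the network's prediction error as a function of the ANN weights; here any objective works.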

    Identification of nonlinear time-varying systems using an online sliding-window and common model structure selection (CMSS) approach with applications to EEG

    The identification of nonlinear time-varying systems using linear-in-the-parameters models is investigated. A new, efficient Common Model Structure Selection (CMSS) algorithm is proposed to select a common model structure. The main idea and key procedure are as follows. First, generate K+1 data sets (the first K data sets are used for training, and the (K+1)th is used for testing) using an online sliding-window method. Then, detect significant model terms to form a common model structure which fits over all the K training data sets using the newly proposed CMSS approach. Finally, estimate and refine the time-varying parameters of the identified common-structured model using a recursive least squares (RLS) parameter estimation method. The new method can effectively detect and adaptively track the transient variation of nonstationary signals. Two examples are presented to illustrate the effectiveness of the new approach, including an application to an EEG data set.

    Forecasting high waters at Venice Lagoon using chaotic time series analysis and nonlinear neural networks

    Time series analysis using nonlinear dynamical systems theory and multilayer neural network models has been applied to the sequence of water-level data recorded every hour at 'Punta della Salute' in the Venice Lagoon during the years 1980-1994. The first method is based on the reconstruction of the state-space attractor using time-delay embedding vectors and on the characterisation of the invariant properties which define its dynamics. The results suggest the existence of a low-dimensional chaotic attractor with a Lyapunov dimension, D_L, of around 6.6 and a predictability of between 8 and 13 hours ahead. Furthermore, once the attractor has been reconstructed, it is possible to make predictions by mapping local neighbourhood to local neighbourhood in the reconstructed phase space. To compare the prediction results with another nonlinear method, two nonlinear autoregressive (NAR) models based on multilayer feedforward neural networks were developed. The study shows that nonlinear forecasting produces adequate results for the 'normal' dynamic behaviour of the water level of the Venice Lagoon, outperforming linear algorithms; however, both methods fail to forecast the 'high water' phenomenon more than 2-3 hours ahead.
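    The attractor reconstruction described above rests on time-delay embedding. A minimal generic sketch (the paper's specific embedding dimension and delay are not reproduced here):

```python
def delay_embed(series, dim, tau):
    """Takens-style time-delay embedding: map a scalar series x(t) to the
    vectors [x(t), x(t - tau), ..., x(t - (dim - 1) * tau)], from which
    attractor invariants and local-neighbourhood predictions are computed."""
    start = (dim - 1) * tau   # earliest t with a full history available
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]
```

    For hourly water-level data, `tau` would be chosen in hours (e.g. from the first minimum of the mutual information) and `dim` large enough to unfold the attractor.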

    Improved model identification for nonlinear systems using a random subsampling and multifold modelling (RSMM) approach

    In nonlinear system identification, the available observed data are conventionally partitioned into two parts: the training data that are used for model identification and the test data that are used for model performance testing. This sort of 'hold-out' or 'split-sample' data partitioning method is convenient, and the associated model identification procedure is in general easy to implement. The resultant model obtained from such a once-partitioned single training dataset, however, may occasionally lack robustness and generalisation to represent future unseen data, because the performance of the identified model may be highly dependent on how the data partition is made. To overcome the drawback of the hold-out data partitioning method, this study presents a new random subsampling and multifold modelling (RSMM) approach to produce less biased or preferably unbiased models. The basic idea and the associated procedure are as follows. Firstly, generate K training datasets (and also K validation datasets) using a K-fold random subsampling method. Secondly, detect significant model terms and identify a common model structure that fits all the K datasets using a newly proposed common model selection approach, called the multiple orthogonal search algorithm. Finally, estimate and refine the model parameters of the identified common-structured model using a multifold parameter estimation method. The proposed method can produce robust models with better generalisation performance.
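    The first step, generating K train/validation partitions by random subsampling, can be sketched as follows; the split ratio is an assumed illustration, not a value from the study:

```python
import random

def random_subsample_folds(n, k, train_frac=0.7, seed=0):
    """Generate K random train/validation index partitions by repeated
    random subsampling. Unlike a single hold-out split, each of the K
    partitions reshuffles all n samples, so model-structure decisions
    are not tied to one particular split. train_frac is assumed."""
    rng = random.Random(seed)
    n_train = int(train_frac * n)
    folds = []
    for _ in range(k):
        idx = list(range(n))
        rng.shuffle(idx)
        folds.append((sorted(idx[:n_train]), sorted(idx[n_train:])))
    return folds
```

    A common model structure would then be selected by requiring candidate terms to be significant across all K training sets, rather than on one split.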

    Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long Short-Term Memory Networks

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
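    A minimal sketch of generating benchmark data from the Lorenz 96 system with an RK4 integrator (F = 8 is the standard chaotic setting; the LSTM architecture itself is not reproduced here):

```python
def lorenz96_step(x, dt=0.01, F=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz 96 system
        dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    with cyclic indices. F = 8 is the common chaotic regime used as a
    forecasting benchmark; dt is an assumed step size."""
    n = len(x)
    def f(x):
        return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + F
                for i in range(n)]
    k1 = f(x)
    k2 = f([x[i] + 0.5 * dt * k1[i] for i in range(n)])
    k3 = f([x[i] + 0.5 * dt * k2[i] for i in range(n)])
    k4 = f([x[i] + dt * k3[i] for i in range(n)])
    return [x[i] + dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
            for i in range(n)]
```

    Iterating from a slightly perturbed equilibrium produces the chaotic trajectories on which a forecasting model such as the LSTM would be trained.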

    Applications of nonlinear filters with the linear-in-the-parameter structure


    Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks.

    by Jones Chui. Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. Includes bibliographical references (leaves 62-[66]).
    Contents:
    Chapter 1: Introduction (p.1)
        1.1 Fundamentals of Artificial Neural Networks (p.2): 1.1.1 Processing Unit (p.2); 1.1.2 Topology (p.3); 1.1.3 Learning Rules (p.4)
        1.2 Overview of Various Neural Network Models (p.6)
        1.3 Introduction to the Radial Basis Function Networks (RBFs) (p.8): 1.3.1 Historical Development (p.9); 1.3.2 Some Intrinsic Problems (p.9)
        1.4 Objective of the Thesis (p.10)
    Chapter 2: Low-dimensional Gaussian RBF networks (LowD RBFs) (p.13)
        2.1 Architecture of LowD RBF Networks (p.13): 2.1.1 Network Structure (p.13); 2.1.2 Learning Rules (p.17)
        2.2 Construction of LowD RBF Networks (p.19): 2.2.1 Growing Heuristic (p.19); 2.2.2 Pruning Heuristic (p.27); 2.2.3 Summary (p.31)
    Chapter 3: Application examples (p.34)
        3.1 Chaotic Time Series Prediction (p.35): 3.1.1 Performance Comparison (p.39); 3.1.2 Sensitivity Analysis of MSE THRESHOLDS (p.41); 3.1.3 Effects of Increased Embedding Dimension (p.41); 3.1.4 Comparison with Tree-Structured Network (p.46); 3.1.5 Overfitting Problem (p.46)
        3.2 Nonlinear prediction of speech signal (p.49): 3.2.1 Comparison with Linear Predictive Coding (LPC) (p.54); 3.2.2 Performance Test in Noisy Conditions (p.55); 3.2.3 Iterated Prediction of Speech (p.59)
    Chapter 4: Conclusion (p.60)
        4.1 Discussions (p.60)
        4.2 Limitations and Suggestions for Further Research (p.61)
    Bibliography (p.6)
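    The Gaussian RBF network studied in the thesis has the generic form below; this sketch shows only the forward evaluation, not the thesis's low-dimensional construction, growing, or pruning heuristics:

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Evaluate a Gaussian radial basis function network:
        f(x) = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))
    where c_j are the centers, s_j the Gaussian widths, and w_j the
    output-layer weights (generic textbook form)."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))  # squared distance to center
        out += w * math.exp(-d2 / (2.0 * s * s))
    return out
```

    Training typically fixes the centers and widths (e.g. by clustering or the thesis's growing/pruning scheme) and then solves for the output weights linearly.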