    ARCHITECTURE OPTIMIZATION, TRAINING CONVERGENCE AND NETWORK ESTIMATION ROBUSTNESS OF A FULLY CONNECTED RECURRENT NEURAL NETWORK

    Recurrent neural networks (RNN) have developed rapidly in recent years, with applications in system identification, optimization, image processing, pattern recognition, classification, clustering, memory association, etc. In this study, an optimized RNN is proposed to model nonlinear dynamical systems. A fully connected RNN is developed first; it is derived from a fully forward connected neural network (FFCNN) by adding recurrent connections among its hidden neurons. In addition, a destructive structure optimization algorithm is applied, and the extended Kalman filter (EKF) is adopted as the network's training algorithm. These two algorithms work together seamlessly to generate the optimized RNN. The enhanced modeling performance of the optimized network comes from three parts: 1) its prototype, the FFCNN, has advantages over the multilayer perceptron (MLP), the most widely used network, in modeling accuracy and generalization ability; 2) the recurrency in the RNN makes it more capable of modeling nonlinear dynamical systems; and 3) the structure optimization algorithm further improves the RNN's generalization ability and robustness. Performance studies of the proposed network focus on training convergence and robustness. For the training convergence study, the Lyapunov method is used to adapt some training parameters to guarantee convergence, while the maximum likelihood method is used to estimate other parameters to accelerate training. In addition, a robustness measure is developed that accounts for uncertainty propagation through the RNN via the unscented transform. Two case studies, the modeling of a benchmark nonlinear dynamical system and of tool wear progression in hard turning, are carried out to validate the developments in this dissertation.
The work detailed in this dissertation focuses on the creation of: (1) a new method to prove and guarantee the training convergence of RNN, and (2) a new method to quantify the robustness of RNN using uncertainty propagation analysis. With the proposed study, RNN and related algorithms are developed to model nonlinear dynamical systems, which can benefit future modeling applications, such as condition monitoring studies, in terms of robustness and accuracy.
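The unscented-transform robustness analysis mentioned above can be illustrated with a minimal sketch: a Gaussian input distribution is propagated through a nonlinear map (standing in for the trained RNN) via sigma points. This is an assumption-laden illustration of the standard unscented transform, not the dissertation's actual implementation; the parameters alpha, beta, and kappa are conventional defaults.

```python
import numpy as np

def unscented_propagate(f, mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian input (mu, P) through a nonlinear map f
    (e.g. a trained network's input-output function) and return the
    output mean and covariance via the unscented transform.

    NOTE: a generic sketch for illustration; f, mu, P are hypothetical
    stand-ins for the network and its input uncertainty."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus scaled Cholesky columns of P
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    # Standard mean/covariance weights
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Push each sigma point through the nonlinearity and re-estimate moments
    ys = np.array([f(s) for s in sigmas])
    mu_y = Wm @ ys
    diff = ys - mu_y
    P_y = (Wc[:, None] * diff).T @ diff
    return mu_y, P_y
```

For a linear map the transform is exact, which gives a quick sanity check: propagating through f(x) = A x should return A mu and A P A^T.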

    Time Series Prediction with a Weighted Bidirectional Multi-Stream Extended Kalman Filter

    This paper describes the use of a multi-stream extended Kalman filter (EKF) to tackle the IJCNN 2004 challenge problem: time series prediction on the CATS benchmark. A weighted bidirectional approach was adopted in the experiments to combine forward and backward predictions of the time series. EKF is a practical, general approach to neural network training. It consists of: 1) gradient calculation by backpropagation through time (BPTT); 2) weight updates based on the extended Kalman filter; and 3) data presentation using multi-stream mechanics.
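The second step above, the EKF weight update, can be sketched as follows. This is a minimal single-step illustration under stated assumptions, not the paper's implementation: the weights are treated as the filter state, and the Jacobian H of the network output with respect to the weights is assumed to have been computed already (e.g. by BPTT).

```python
import numpy as np

def ekf_weight_update(w, P, H, y, y_hat, R, Q):
    """One EKF step treating the network weights as the state.

    w     : (n,)  current weight vector
    P     : (n,n) weight-error covariance
    H     : (m,n) Jacobian d(y_hat)/d(w), assumed supplied by BPTT
    y     : (m,)  observed target
    y_hat : (m,)  network prediction
    R, Q  : measurement- and process-noise covariances (tuning choices)
    """
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    w_new = w + K @ (y - y_hat)      # correct weights with the innovation
    P_new = P - K @ H @ P + Q        # update weight-error covariance
    return w_new, P_new
```

On a linear model the update behaves like a recursive least-squares fit: repeated calls drive the prediction error toward zero, which makes a convenient sanity check.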

    Aircraft Parameter Estimation using Feedforward Neural Networks With Lyapunov Stability Analysis

    Aerodynamic parameter estimation is critical in the aviation sector, especially in design and development programs of defense-military aircraft. In this paper, new results of the application of Artificial Neural Networks (ANN) to the field of aircraft parameter estimation are presented. The performances of a Feedforward Neural Network with backpropagation (FFNN-BPN) and an FFNN with backpropagation using Recursive Least Squares (FFNN-RLS) are investigated for aerodynamic parameter estimation. The methods are validated on flight data simulated using MATLAB implementations. A normalized Lyapunov energy functional is used to derive the convergence conditions for both ANN-based estimation algorithms. The estimation results are compared on the basis of performance metrics and computation time. The performance of FFNN-RLS is observed to be approximately 10% better than that of FFNN-BPN. Simulation results from both algorithms are highly satisfactory and pave the way for further applications to real flight test data.
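The recursive least-squares component of FFNN-RLS can be sketched with a generic single-output RLS update; this is a textbook formulation under stated assumptions, not the authors' code, and the regressor phi is assumed to be the layer activations feeding the weights being adapted.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=0.99):
    """One recursive-least-squares step for a linear-in-the-weights layer.

    w   : (n,) current weight estimate
    P   : (n,n) inverse-correlation matrix (initialized large, e.g. 1e3*I)
    phi : (n,) regressor vector (hypothetically, hidden-layer activations)
    y   : scalar target
    lam : forgetting factor in (0, 1]
    """
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    e = y - w @ phi                       # a priori prediction error
    w_new = w + k * e                     # weight correction
    P_new = (P - np.outer(k, phi) @ P) / lam
    return w_new, P_new
```

With noiseless data from a fixed linear model and a sufficiently rich regressor sequence, the estimate converges to the true weights, which is the standard check for an RLS implementation.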