
    MATLAB SIMULATION OF THE HYBRID OF RECURSIVE NEURAL DYNAMICS FOR ONLINE MATRIX INVERSION

    A novel hybrid recursive neural implicit dynamics for real-time matrix inversion has recently been proposed and investigated. Our goal is to compare this hybrid recursive neural implicit dynamics with conventional explicit neural dynamics. Simulation results show that the hybrid model coincides better with systems in practice and has a greater capacity to represent dynamic systems. More importantly, the hybrid model achieves superior convergence performance in comparison with existing dynamic systems, specifically the recently proposed Zhang dynamics. This paper presents the Simulink model of a hybrid recursive neural implicit dynamics and gives a simulation and comparison to the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
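    The abstract does not give the hybrid model's equations, so as a point of reference the sketch below simulates the classical gradient neural dynamics dX/dt = gamma * A^T (I - A X) for inverting a constant matrix by forward-Euler integration. This is a stand-in baseline from the same family of neural dynamics as the Zhang model discussed above, not the paper's hybrid model; the function name and the values of gamma, dt, and steps are illustrative assumptions, and Python is used in place of the paper's Simulink implementation.

        import numpy as np

        def neural_invert(A, gamma=10.0, dt=1e-3, steps=20000):
            """Forward-Euler simulation of the gradient neural dynamics
            dX/dt = gamma * A.T @ (I - A @ X), whose equilibrium is inv(A)
            for any nonsingular A. A hedged stand-in; the hybrid implicit
            dynamics itself is not specified in the abstract."""
            n = A.shape[0]
            I = np.eye(n)
            X = np.zeros((n, n))  # neutral initial state
            for _ in range(steps):
                X += dt * gamma * (A.T @ (I - A @ X))
            return X

        A = np.array([[4.0, 1.0], [2.0, 3.0]])
        X = neural_invert(A)
        print(np.allclose(A @ X, np.eye(2), atol=1e-4))  # True: X approximates inv(A)

    A larger gamma accelerates convergence of the continuous dynamics, but the Euler step dt must shrink accordingly for numerical stability; implicit models such as the hybrid dynamics are typically handled by a stiff solver in Simulink instead.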

    A feed forward neural network approach for matrix computations

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns; the solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research, which concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends heavily on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rate (LR). Thus, this study focuses on the development of new, efficient, and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method; this provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. Two performance indexes are considered: learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and CG methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method. Performance comparisons between these minimization methods demonstrate that the CG training methods give better convergence accuracy and are by far superior with respect to learning time, offering speed-ups of between 3 and 4 over SDLS depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criterion is used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts show the best performance among all the methods considered for training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criterion.
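    As a concrete illustration of the CGPR-with-restarts training scheme described above, the sketch below minimizes f(W) = 0.5 * ||AW - I||_F^2 (whose minimizing weight matrix is inv(A)) with Polak-Ribière conjugate gradients and Powell's restart test. The function names are hypothetical, and a closed-form exact line search (valid because f is quadratic) is used here in place of the thesis's golden-section bracketing; this is a sketch of the method family, not code from the thesis.

        import numpy as np

        def fro(X, Y):
            # Frobenius inner product <X, Y> = sum(X * Y)
            return np.tensordot(X, Y)

        def cgpr_invert(A, tol=1e-10, max_iter=500):
            """Polak-Ribiere CG with Powell's restart on f(W) = 0.5*||AW - I||_F^2.
            A sketch of the CGPR training scheme; exact line search replaces
            golden-section bracketing since f is quadratic."""
            n = A.shape[0]
            I = np.eye(n)
            W = np.zeros((n, n))
            G = A.T @ (A @ W - I)          # gradient of f at W
            D = -G                         # initial steepest-descent direction
            for _ in range(max_iter):
                AD = A @ D
                alpha = -fro(A @ W - I, AD) / fro(AD, AD)  # exact step length
                W = W + alpha * D
                G_new = A.T @ (A @ W - I)
                if np.linalg.norm(G_new) < tol:
                    break
                beta = fro(G_new, G_new - G) / fro(G, G)   # Polak-Ribiere beta
                if abs(fro(G_new, G)) >= 0.2 * fro(G_new, G_new):
                    beta = 0.0             # Powell's restart: conjugacy lost
                D = -G_new + beta * D
                G = G_new
            return W

        A = np.array([[1.0, 2.0], [3.0, 4.0]])             # small unsymmetric test matrix
        print(np.allclose(cgpr_invert(A) @ A, np.eye(2)))  # True

    The restart test resets the search direction to steepest descent whenever successive gradients stay too correlated, which is the mechanism the thesis credits for avoiding wrong convergence directions in pure CG learning.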

    Intelligent Transportation Related Complex Systems and Sensors

    Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of transportation. They enable users to be better informed and to make safer, more coordinated, and smarter decisions on the use of transport networks. Current ITSs are complex systems made up of several components/sub-systems characterized by time-dependent interactions among themselves. Examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of the sensors/actuators used to capture and control their physical parameters, as well as to the quality of the data collected from them; ii) tackling complexity using simulation and analytical modelling techniques; and iii) applying optimization techniques to improve their performance. This collection includes twenty-four papers covering scientific concepts, frameworks, architectures, and various other ideas on analytics, trends, and applications of transportation-related data.

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving selected problems in matrix algebra, with particular reference to the computation of generalized inverses and the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and the integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished; in this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison to the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
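    For concreteness, the hyperpower family referenced above performs, at each sweep, X_{k+1} = X_k (I + R + ... + R^(p-1)) with residual R = I - A X_k, where order p = 2 recovers the Newton-Schulz iteration. The sketch below implements this family for inverting a constant matrix; the function name, iteration budget, and the standard Ben-Israel initialization are assumptions for illustration, not the dissertation's Simulink implementation.

        import numpy as np

        def hyperpower_invert(A, order=3, iters=50):
            """Hyperpower iteration of a given order for matrix inversion:
            X_{k+1} = X_k (I + R + ... + R^(order-1)), with R = I - A @ X_k.
            order=2 is Newton-Schulz; the dissertation relates discretized
            ZNN models to this family. A sketch under standard assumptions."""
            n = A.shape[0]
            I = np.eye(n)
            # Ben-Israel initialization: keeps the spectral radius of
            # (I - A X_0) below 1 for nonsingular A, ensuring convergence
            X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
            for _ in range(iters):
                R = I - A @ X
                S = np.eye(n)
                P = np.eye(n)
                for _ in range(order - 1):   # S = I + R + ... + R^(order-1)
                    P = P @ R
                    S = S + P
                X = X @ S
            return X

        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        print(np.allclose(hyperpower_invert(A) @ A, np.eye(2)))  # True

    Each order-p sweep raises the residual norm to the p-th power, so higher orders trade more matrix multiplications per sweep for fewer sweeps, the trade-off underlying the order-2 and order-3 scaled hyperpower methods mentioned above.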