    A novel quaternion linear matrix equation solver through zeroing neural networks with applications to acoustic source tracking

    Due to their significance in science and engineering, time-varying linear matrix equation (LME) problems have received considerable attention from scholars. For this reason, this study addresses the problem of finding the minimum-norm least-squares solution of the time-varying quaternion LME (ML-TQ-LME). This is accomplished using the zeroing neural network (ZNN) technique, which has proven remarkably successful in tackling time-varying problems. In light of that, two new ZNN models are introduced to solve the ML-TQ-LME problem for time-varying quaternion matrices of arbitrary dimension. Two simulation experiments and two practical acoustic source tracking applications show that the models perform superbly.
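    As an illustrative sketch of the ZNN technique described above: one defines an error function E(t) = A(t)X(t) - B(t), imposes the zeroing dynamics Ė(t) = -γE(t), and integrates the resulting expression for Ẋ(t). The minimal Python example below is real-valued rather than quaternion-valued; the matrices A(t) and B(t), the gain γ, and the step size are illustrative assumptions, not the paper's settings, and the pseudoinverse step merely mirrors the minimum-norm least-squares flavor of the formulation.

```python
import numpy as np

# Minimal real-valued ZNN sketch for time-varying A(t) X(t) = B(t).
# The paper works with quaternion matrices and a minimum-norm
# least-squares formulation; this real-valued setup is an assumption.

def A(t):  # example time-varying coefficient matrix (assumption)
    return np.array([[2 + np.sin(t), 0.5],
                     [0.5, 2 + np.cos(t)]])

def B(t):  # example right-hand side (assumption)
    return np.array([[np.cos(t)], [np.sin(t)]])

def ddt(f, t, h=1e-6):  # numerical time derivative
    return (f(t + h) - f(t - h)) / (2 * h)

gamma, dt, T = 10.0, 1e-3, 5.0   # design gain and Euler step (assumptions)
X = np.zeros((2, 1))             # arbitrary initial state

for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - B(t)          # ZNN error function E(t)
    # Zeroing dynamics: impose dE/dt = -gamma * E and solve for dX/dt;
    # pinv gives the minimum-norm least-squares update direction.
    Xdot = np.linalg.pinv(A(t)) @ (ddt(B, t) - ddt(A, t) @ X - gamma * E)
    X = X + dt * Xdot            # forward-Euler integration

print("final residual:", np.linalg.norm(A(T) @ X - B(T)))
```

    A larger γ makes the tracking error decay faster, at the cost of requiring a smaller stable Euler step.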

    Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation

    The complex time-dependent Lyapunov equation (CTDLE), as an important means of stability analysis for control systems, has been extensively employed in mathematics and engineering applications. Recurrent neural networks (RNNs) have been reported as an effective method for solving the CTDLE. In previous work, zeroing neural networks (ZNNs) were established to find the accurate solution of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noise is inevitable in actual implementations. In order to suppress the interference of various noises in practical applications, this paper proposes a complex noise-resistant ZNN (CNRZNN) model and employs it to solve the CTDLE. Additionally, the convergence and robustness of the CNRZNN model are analyzed and proven theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model is more general and more robust. Specifically, the NTZNN model is a special case of the CNRZNN model, and when solving the CTDLE under complex linear noises the residual error of the CNRZNN model converges rapidly and stably to order $10^{-5}$, far below the order $10^{-1}$ reached by the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model converges quickly and stably to $2\|A\|_F/\zeta^3$, while the residual error of the NTZNN model diverges.
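    To make the noise-suppression idea concrete, the sketch below implements a standard noise-tolerant ZNN design formula, Ė(t) = -γE(t) - λ∫₀ᵗ E(τ)dτ + N(t), for a complex Lyapunov equation AᴴX + XA = -C; the integral term absorbs constant/linear noise at steady state. Constant A and C, the constant injected noise, and all gains are assumptions of this illustration; the paper's CNRZNN model handles fully time-dependent data.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Sketch of a noise-tolerant ZNN for the complex Lyapunov equation
# A^H X + X A = -C. Constant A, C and constant noise are assumptions;
# the paper treats fully time-dependent complex data.

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A - 2 * n * np.eye(n)          # push eigenvalues left (assumption)
C = np.eye(n, dtype=complex)

gamma, lam, dt, T = 20.0, 100.0, 1e-4, 2.0   # design gains (assumptions)
X = np.zeros((n, n), dtype=complex)
S = np.zeros_like(X)               # running integral of the error

for k in range(int(T / dt)):
    E = A.conj().T @ X + X @ A + C           # Lyapunov error function
    S = S + dt * E
    noise = 0.5 * np.ones((n, n))            # injected constant noise
    # Noise-tolerant design: dE/dt = -gamma*E - lam*(integral of E) + noise.
    # With constant A, C: dE/dt = A^H Xdot + Xdot A, a Sylvester equation.
    rhs = -gamma * E - lam * S + noise
    Xdot = solve_sylvester(A.conj().T, A, rhs)
    X = X + dt * Xdot

print("residual:", np.linalg.norm(A.conj().T @ X + X @ A + C))
```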

    Solving quaternion nonsymmetric algebraic Riccati equations through zeroing neural networks

    Many variations of the algebraic Riccati equation (ARE) have been used to study nonlinear system stability in the control domain in great detail. Taking the quaternion nonsymmetric ARE (QNARE) as a generalized version of the ARE, the time-varying QNARE (TQNARE) is introduced. This brings us to the main objective of this work: finding the TQNARE solution. To do so, the zeroing neural network (ZNN) technique, which has demonstrated a high degree of effectiveness in handling time-varying problems, is employed. Specifically, the TQNARE can be solved using the high-order ZNN (HZNN) design, which belongs to the family of ZNN models that correlate with hyperpower iterative techniques. As a result, a novel HZNN model, called HZ-QNARE, is presented for solving the TQNARE. Two simulation tests demonstrate that the model works fairly well and that, while both approaches perform remarkably well, the HZNN architecture outperforms the ZNN architecture.
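    Although the paper's HZNN model targets the time-varying quaternion case, the core ZNN mechanics can be sketched on a real, time-invariant nonsymmetric ARE, E(X) = XCX - XD - AX + B = 0: differentiating E along the zeroing dynamics Ė = -γE yields a Sylvester equation for Ẋ at each step. The sketch below uses the standard ZNN design formula rather than the high-order one, and all matrices and gains are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Standard-ZNN sketch for the nonsymmetric ARE  X C X - X D - A X + B = 0
# (real and time-invariant here; the paper treats time-varying quaternion
# data with a high-order design). Differentiating E = XCX - XD - AX + B:
#   Edot = (XC - A) Xdot + Xdot (CX - D),
# so imposing Edot = -gamma*E gives a Sylvester equation for Xdot.

A = np.array([[3.0, 1.0], [0.0, 4.0]])   # illustrative coefficients
D = np.array([[5.0, 0.0], [1.0, 6.0]])
C = 0.5 * np.eye(2)
B = np.eye(2)

gamma, dt, T = 5.0, 1e-3, 3.0            # design gain and Euler step
X = np.zeros((2, 2))                     # arbitrary initial state

for _ in range(int(T / dt)):
    E = X @ C @ X - X @ D - A @ X + B    # Riccati error function
    Xdot = solve_sylvester(X @ C - A, C @ X - D, -gamma * E)
    X = X + dt * Xdot                    # forward-Euler integration

print("residual:", np.linalg.norm(X @ C @ X - X @ D - A @ X + B))
```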

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving some problems from matrix algebra, with particular reference to the computation of generalized inverses as well as the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished. In this way, computational advantages over existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison with the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
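    The hyperpower iteration underlying the SHPI family has the form R_k = I - AX_k, X_{k+1} = X_k(I + R_k + ... + R_k^{p-1}); order p = 2 is the Newton-Schulz scheme, whose connection to discretized ZNN dynamics the dissertation develops. A minimal sketch for the Moore-Penrose inverse follows, in Python rather than the dissertation's Matlab Simulink setting; the Ben-Israel-style initialization X₀ = Aᵀ/(‖A‖₁‖A‖∞) is an assumption of this illustration.

```python
import numpy as np

# Hyperpower iteration of order p for the Moore-Penrose inverse:
#   R_k = I - A X_k,   X_{k+1} = X_k (I + R_k + R_k^2 + ... + R_k^{p-1}).
# Order 2 is Newton-Schulz; the scaled initial guess X0 = A^T / (||A||_1
# * ||A||_inf) guarantees convergence and is an assumption of this sketch.

def hyperpower_pinv(A, p=3, iters=60):
    m = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        R = np.eye(m) - A @ X
        S, P = np.eye(m), np.eye(m)
        for _ in range(p - 1):        # S = I + R + ... + R^{p-1}
            P = P @ R
            S = S + P
        X = X @ S
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
X = hyperpower_pinv(A, p=3)           # order-3 scaled hyperpower method
print(np.allclose(X, np.linalg.pinv(A)))  # expect True
```

    Higher orders p trade more matrix multiplications per step for a higher rate of convergence, which is the trade-off the scaled variants of orders 2 and 3 explore.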

    Statistical Analysis of Networks

    This book is a general introduction to the statistical analysis of networks and can serve both as a research monograph and as a textbook. Numerous fundamental tools and concepts needed for the analysis of networks are presented, such as network modeling, community detection, graph-based semi-supervised learning and sampling in networks. The description of these concepts is self-contained, with both theoretical justifications and applications provided for the presented algorithms. Researchers, including postgraduate students, working in the areas of network science, complex network analysis, or social network analysis will find up-to-date statistical methods relevant to their research tasks. The book can also serve as textbook material for courses on the statistical approach to the analysis of complex networks. In general, the chapters are fairly independent and self-supporting, so the book could be used for course composition “à la carte”. Nevertheless, Chapter 2 is needed to some degree for all parts of the book. It is also recommended, though not strictly necessary, to read Chapter 4 before Chapters 5 and 6; reading Chapter 3 can likewise be helpful before Chapters 5 and 7. As prerequisites, basic knowledge of probability, linear algebra and elementary notions of graph theory is advised. Appendices describing the required notions from these disciplines have been added to help readers gain further understanding.