52 research outputs found

    Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation

    The complex time-dependent Lyapunov equation (CTDLE), an important means of stability analysis for control systems, has been extensively employed in mathematics and engineering applications. Recurrent neural networks (RNNs) have been reported as an effective method for solving the CTDLE. In previous work, zeroing neural networks (ZNNs) were established to find the accurate solution of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noise is inevitable in actual implementations. In order to suppress the interference of various noises in practical applications, this paper proposes a complex noise-resistant ZNN (CNRZNN) model and employs it to solve the CTDLE. Additionally, the convergence and robustness of the CNRZNN model are analyzed and proved theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model is more general and more robust. Specifically, the NTZNN model is a special case of the CNRZNN model, and when solving the CTDLE under complex linear noises the residual error of the CNRZNN model converges rapidly and stably to order 10⁻⁵, far below the order 10⁻¹ reached by the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model converges quickly and stably to 2‖A‖_F/ζ³, while the residual error of the NTZNN model diverges.
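    The abstract does not spell out the CNRZNN design formula, but the noise-tolerant ZNN recipe it builds on is easy to sketch. The toy below is a minimal sketch under stated assumptions: the matrices, the gains, the injected noise and the Euler discretization are all made up for illustration, and the paper's actual model may differ. It drives the error E(t) = A(t)ᴴX(t) + X(t)A(t) + C(t) to zero with integral-enhanced dynamics Ė = −γE − λ∫E dτ, the standard mechanism behind noise tolerance in this literature:

        import numpy as np

        # Integral-enhanced (noise-tolerant) ZNN for a toy complex
        # time-dependent Lyapunov equation A(t)^H X + X A(t) = -C(t).
        # All matrices, gains, noise and step sizes are illustrative assumptions.

        def A(t):
            return np.array([[3 + 1j*np.sin(t), 0.5],
                             [0.5j,             3 + np.cos(t)]])

        def C(t):  # Hermitian right-hand side
            return np.array([[2 + np.cos(t), 0.1j],
                             [-0.1j,         2 + np.sin(t)]])

        def E(t, X):  # error the network zeroes: A^H X + X A + C
            return A(t).conj().T @ X + X @ A(t) + C(t)

        gamma, lam, dt, T, n = 10.0, 10.0, 1e-3, 5.0, 2
        X = np.zeros((n, n), dtype=complex)      # network state
        E_int = np.zeros((n, n), dtype=complex)  # running integral of the error

        for k in range(int(T / dt)):
            t = k * dt
            err = E(t, X)
            E_int += err * dt
            noise = 0.1 * (1 + 1j) * np.ones((n, n))   # constant complex noise
            Edot = -gamma * err - lam * E_int + noise  # PI-type error dynamics
            # E depends on X, so recover Xdot from the Sylvester system
            # A^H Xdot + Xdot A = Edot - (Adot^H X + X Adot + Cdot).
            h = 1e-6
            Adot, Cdot = (A(t + h) - A(t)) / h, (C(t + h) - C(t)) / h
            rhs = Edot - (Adot.conj().T @ X + X @ Adot + Cdot)
            M = np.kron(np.eye(n), A(t).conj().T) + np.kron(A(t).T, np.eye(n))
            Xdot = np.linalg.solve(M, rhs.flatten('F')).reshape((n, n), order='F')
            X += Xdot * dt

        print("final residual norm:", np.linalg.norm(E(T, X)))

    The integral term acts like the "I" of a PI controller: a constant noise injection is absorbed into the integral state rather than biasing the residual, which is the mechanism behind the noise suppression claimed above.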

    A novel quaternion linear matrix equation solver through zeroing neural networks with applications to acoustic source tracking

    Due to their significance in science and engineering, time-varying linear matrix equation (LME) problems have received considerable attention from scholars. For this reason, this study addresses the problem of finding the minimum-norm least-squares solution of the time-varying quaternion LME (ML-TQ-LME). This is accomplished using the zeroing neural network (ZNN) technique, which has achieved considerable success in tackling time-varying problems. In light of that, two new ZNN models are introduced to solve the ML-TQ-LME problem for time-varying quaternion matrices of arbitrary dimension. Two simulation experiments and two practical acoustic source tracking applications show that the models perform superbly.
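    For orientation, here is a minimal static sketch of the algebra behind this problem, with made-up data and at a single hypothetical time instant (the paper's ZNN models instead track the time-varying solution continuously). A quaternion system QX = B is cast to the real domain through the standard real representation, where NumPy's lstsq returns the minimum-norm least-squares solution:

        import numpy as np

        # Real-domain snapshot of a quaternion matrix equation Q X = B
        # (illustrative random data, not the paper's experiments).

        def real_rep(Q0, Q1, Q2, Q3):
            """4m x 4n real representation of Q = Q0 + Q1*i + Q2*j + Q3*k."""
            return np.block([[Q0, -Q1, -Q2, -Q3],
                             [Q1,  Q0, -Q3,  Q2],
                             [Q2,  Q3,  Q0, -Q1],
                             [Q3, -Q2,  Q1,  Q0]])

        rng = np.random.default_rng(0)
        Qparts = [rng.standard_normal((3, 2)) for _ in range(4)]
        Bparts = [rng.standard_normal((3, 1)) for _ in range(4)]

        RQ = real_rep(*Qparts)            # R(Q P) = R(Q) R(P)
        b = np.vstack(Bparts)             # stacked quaternion parts of B
        x, *_ = np.linalg.lstsq(RQ, b, rcond=None)
        X0, X1, X2, X3 = np.split(x, 4)   # quaternion parts of the solution
        print("residual norm:", np.linalg.norm(RQ @ x - b))

    Since stacking the four parts preserves the Frobenius norm, the minimum-norm property of the real solution carries over to the quaternion one.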

    Zeroing neural networks for computing quaternion linear matrix equation with application to color restoration of images

    The importance of quaternions in a variety of fields, such as physics, engineering and computer science, renders the effective solution of the time-varying quaternion matrix linear equation (TV-QLME) an equally important and interesting task. Zeroing neural networks (ZNNs) have seen great success in solving time-varying problems in the real and complex domains, while quaternions and quaternion matrices may readily be represented as either a complex or a real matrix of magnified size. On that account, three new ZNN models are developed, and the TV-QLME is solved directly in the quaternion domain as well as indirectly in the complex and real domains for matrices of arbitrary dimension. The models perform admirably in four simulation experiments and two practical applications concerning the color restoration of images.
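    The indirect complex-domain route mentioned above rests on the complex representation of a quaternion matrix. The sketch below, with made-up random matrices, writes Q = C₁ + C₂j and checks numerically that the doubled-size complex matrix χ(Q) multiplies exactly as quaternion matrices do, which is what lets a quaternion equation be solved as a complex one:

        import numpy as np

        # Complex representation of quaternion matrices (illustrative check).

        def chi(C1, C2):
            """chi(Q) for Q = C1 + C2*j, a 2m x 2n complex matrix."""
            return np.block([[C1, C2], [-C2.conj(), C1.conj()]])

        def qmul(Q, P):
            """Quaternion matrix product in the (C1, C2) encoding."""
            (Q1, Q2), (P1, P2) = Q, P
            return Q1 @ P1 - Q2 @ P2.conj(), Q1 @ P2 + Q2 @ P1.conj()

        rng = np.random.default_rng(1)
        rnd = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
        Q, P = (rnd(2, 3), rnd(2, 3)), (rnd(3, 2), rnd(3, 2))

        # chi is an algebra homomorphism: chi(Q P) == chi(Q) chi(P)
        print(np.allclose(chi(*qmul(Q, P)), chi(*Q) @ chi(*P)))  # True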

    Applying fixed point techniques for obtaining a positive definite solution to nonlinear matrix equations

    In this manuscript, the concept of rational-type multivalued F-contraction mappings is investigated. In addition, some nice fixed point results are obtained using this concept in the setting of MM-spaces and ordered MM-spaces. Our findings extend, unify, and generalize a large body of work along the same lines. Moreover, to support and strengthen our results, non-trivial and extensive examples are presented. Ultimately, as an application, the theoretical results are used to obtain a positive definite solution to nonlinear matrix equations.
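    The abstract does not state the paper's rational-type multivalued contraction condition, so as a hedged point of reference, here is the classical single-valued F-contraction of Wardowski that results of this kind generalize:

        % Classical (single-valued) F-contraction of Wardowski: T is an
        % F-contraction if there exists \tau > 0 such that
        d(Tx, Ty) > 0 \;\Longrightarrow\; \tau + F\bigl(d(Tx, Ty)\bigr) \le F\bigl(d(x, y)\bigr),
        % where F : (0, \infty) \to \mathbb{R} is strictly increasing,
        % F(t_n) \to -\infty \iff t_n \to 0^{+}, and t^{k} F(t) \to 0 as
        % t \to 0^{+} for some k \in (0, 1).

    Fixed points of such maps exist and are unique in complete metric spaces; the paper's rational-type multivalued variant relaxes this condition in the MM-space setting.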

    Visual Steering for One-Shot Deep Neural Network Synthesis

    Recent advancements in the area of deep learning have shown the effectiveness of very large neural networks in several applications. However, as these deep neural networks continue to grow in size, it becomes more and more difficult to configure their many parameters to obtain good results. Presently, analysts must experiment with many different configurations and parameter settings, which is labor-intensive and time-consuming. On the other hand, the capacity of fully automated techniques for neural network architecture search is limited without the domain knowledge of human experts. To address this problem, we formulate the task of neural network architecture optimization as graph space exploration, based on the one-shot architecture search technique. In this approach, a super-graph of all candidate architectures is trained in one shot and the optimal neural network is identified as a sub-graph. In this paper, we present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge. Starting with a network architecture space composed of basic neural network components, analysts are empowered to effectively select the most promising components via our one-shot search scheme. Applying this technique in an iterative manner allows analysts to converge to the best-performing neural network architecture for a given application. During the exploration, analysts can use their domain knowledge, aided by cues provided by a scatterplot visualization of the search space, to edit different components and guide the search for faster convergence. We designed our interface in collaboration with several deep learning researchers, and its final effectiveness is evaluated with a user study and two case studies.
    Comment: 9 pages, submitted to IEEE Transactions on Visualization and Computer Graphics, 202
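    As a hedged aside, the one-shot idea itself fits in a few lines. The toy below is entirely made up (parameter-free candidate ops, a single shared linear head, a synthetic regression task) and is not the paper's visual-steering framework; it only sketches the super-graph/sub-graph mechanism: train shared weights while sampling random sub-graphs, then score every path to select an architecture:

        import numpy as np, itertools

        # Minimal one-shot search sketch (illustrative toy, all names assumed).
        rng = np.random.default_rng(0)
        Xtr, Xval = rng.standard_normal((256, 8)), rng.standard_normal((64, 8))
        f = lambda X: np.sin(X).sum(axis=1, keepdims=True)   # toy target
        ytr, yval = f(Xtr), f(Xval)

        OPS = {"tanh": np.tanh, "sin": np.sin, "square": lambda z: z * z}
        LAYERS = 2

        def features(X, path):          # run X through the chosen sub-graph
            for op in path:
                X = OPS[op](X)
            return X

        # One-shot phase: train the shared head with random sub-graphs.
        W = np.zeros((8, 1))
        for step in range(2000):
            path = rng.choice(list(OPS), size=LAYERS)
            H = features(Xtr, path)
            W -= 1e-3 * H.T @ (H @ W - ytr) / len(Xtr)       # SGD step

        # Search phase: rank every candidate path with the shared weights.
        scores = {p: np.mean((features(Xval, p) @ W - yval) ** 2)
                  for p in itertools.product(OPS, repeat=LAYERS)}
        best = min(scores, key=scores.get)
        print("best architecture:", best, "val MSE:", round(scores[best], 4))

    In the paper's framework, the analyst steers which candidate components even enter this search space; the sampling-and-scoring loop above is only the automated core.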

    Hardware Learning in Analogue VLSI Neural Networks


    A Unified Framework for Gradient-based Hyperparameter Optimization and Meta-learning

    Machine learning algorithms and systems are progressively becoming part of our societies, leading to a growing need to build a vast multitude of accurate, reliable and interpretable models that should, where possible, exploit similarities among tasks. Automating segments of machine learning itself seems a natural step toward delivering increasingly capable systems able to perform well in both the big-data and the few-shot learning regimes. Hyperparameter optimization (HPO) and meta-learning (MTL) constitute two building blocks of this growing effort. We explore these two topics under a unifying perspective, presenting a mathematical framework linked to bilevel programming that captures existing similarities and translates into procedures of practical interest rooted in algorithmic differentiation. We discuss the derivation, applicability and computational complexity of these methods and establish several approximation properties for a class of objective functions of the underlying bilevel programs. In HPO, these algorithms generalize and extend previous work on gradient-based methods. In MTL, the resulting framework subsumes classic and emerging strategies and provides a starting basis from which to build and analyze novel techniques. A series of examples and numerical simulations offer insight and highlight some limitations of these approaches. Experiments on larger-scale problems show the potential gains of the proposed methods in real-world applications. Finally, we develop two extensions of the basic algorithms, apt to optimize a class of discrete hyperparameters (graph edges) in an application to relational learning, and to tune online learning-rate schedules for training neural network models, an old but crucially important issue in machine learning.
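    A minimal concrete instance of the bilevel view, under stated assumptions (ridge regression with a closed-form inner solution and synthetic data; the thesis develops far more general iterative and approximate schemes): the inner problem is training under a penalty λ, the outer problem is validation loss, and the hypergradient dL_val/dλ follows from the implicit function theorem:

        import numpy as np

        # Gradient-based HPO on a toy bilevel program (illustrative data).
        rng = np.random.default_rng(0)
        Xtr, Xval = rng.standard_normal((80, 5)), rng.standard_normal((40, 5))
        w_true = rng.standard_normal(5)
        ytr = Xtr @ w_true + 0.3 * rng.standard_normal(80)
        yval = Xval @ w_true + 0.3 * rng.standard_normal(40)

        def inner_solution(lam):
            """w*(lam) = argmin_w ||Xtr w - ytr||^2 + lam ||w||^2."""
            H = Xtr.T @ Xtr + lam * np.eye(5)
            return np.linalg.solve(H, Xtr.T @ ytr), H

        def hypergradient(lam):
            w, H = inner_solution(lam)
            g_outer = 2 * Xval.T @ (Xval @ w - yval) / len(yval)  # dLval/dw
            dw_dlam = -np.linalg.solve(H, w)   # implicit function theorem
            return g_outer @ dw_dlam

        # Sanity check against a finite difference, then one outer update.
        lam, eps = 1.0, 1e-5
        val = lambda l: np.mean((Xval @ inner_solution(l)[0] - yval) ** 2)
        fd = (val(lam + eps) - val(lam - eps)) / (2 * eps)
        print(hypergradient(lam), fd)          # the two should closely agree
        lam -= 0.1 * hypergradient(lam)        # outer gradient-descent step

    The finite difference is only a sanity test; in practice such hypergradients are obtained by algorithmic differentiation through, or implicit differentiation at, the inner optimization, which is exactly the machinery the thesis studies.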

    Reinforcement Learning and Planning for Preference Balancing Tasks

    Robots are often highly non-linear dynamical systems with many degrees of freedom, which makes solving their motion problems computationally challenging. One solution has been reinforcement learning (RL), which learns through experimentation to automatically perform the near-optimal motions that complete a task. However, high-dimensional problems and task formulation often prove challenging for RL. We address these problems with PrEference Appraisal Reinforcement Learning (PEARL), which solves Preference Balancing Tasks (PBTs). PBTs define a problem as a set of preferences that the system must balance to achieve a goal. The method is appropriate for acceleration-controlled systems with a continuous state space and either discrete or continuous action spaces with unknown system dynamics. We show that PEARL learns a sub-optimal policy on a subset of states and actions and transfers the policy to the expanded domain to produce a more refined plan on a class of robotic problems. We establish convergence to task goal conditions and show that, even when its preconditions are not verifiable, this is a valuable method to use before other, more expensive approaches. Evaluation is performed on several robotic problems, such as Aerial Cargo Delivery, Multi-Agent Pursuit, Rendezvous, and Inverted Flying Pendulum, both in simulation and experimentally. Additionally, PEARL is leveraged outside of robotics as an array-sorting agent. The results demonstrate high accuracy and fast learning times on a large set of practical applications.
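    To make the preference-balancing notion concrete, here is a deliberately crude toy. Everything in it (the weights, features, lookahead horizon and greedy one-step rule) is an assumption for illustration, and it is not the PEARL algorithm, which learns how to balance the preferences from experience rather than using hand-picked weights. An acceleration-controlled point mass trades off "be near the goal soon" against "keep speed low":

        import numpy as np

        # Toy preference-balancing controller (illustrative only, not PEARL).
        goal = np.array([5.0, 5.0])
        weights = np.array([1.0, 0.2])      # preference weights (hand-picked)
        dt, horizon = 0.1, 1.0              # control step and lookahead horizon
        actions = [np.array([ax, ay]) for ax in (-1, 0, 1) for ay in (-1, 0, 1)]

        def preference_score(pos, vel):
            feats = np.array([np.linalg.norm(goal - (pos + vel * horizon)),
                              np.linalg.norm(vel)])
            return -weights @ feats         # balance the two preferences

        pos, vel = np.zeros(2), np.zeros(2)
        for _ in range(300):
            # greedy over discrete accelerations via one-step lookahead
            a = max(actions, key=lambda acc: preference_score(pos, vel + acc * dt))
            vel += a * dt
            pos += vel * dt
        print("final distance to goal:", round(float(np.linalg.norm(goal - pos)), 3))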