
    Geometric methods on low-rank matrix and tensor manifolds

    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first type consists of optimization problems, such as matrix and tensor completion, linear systems, and eigenvalue problems; these can be solved by numerical optimization on manifolds, using so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type consists of ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
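
    To make the Riemannian-optimization idea concrete, the following minimal sketch (not code from the chapter) applies gradient descent with a truncated-SVD retraction to low-rank matrix completion; the rank, step size, and function names are illustrative assumptions, and a full Riemannian method would additionally project the gradient onto the tangent space of the fixed-rank manifold.

        import numpy as np

        def svd_retract(Y, r):
            """Retract a matrix onto the set of rank-r matrices via truncated SVD."""
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        def low_rank_completion(A, mask, r, step=0.5, iters=500):
            """Gradient descent with SVD retraction for matrix completion.

            A    : matrix with observed entries (values outside `mask` are ignored)
            mask : boolean array, True where entries of A are observed
            r    : target rank
            """
            X = svd_retract(np.where(mask, A, 0.0), r)      # rank-r starting guess
            for _ in range(iters):
                grad = np.where(mask, X - A, 0.0)           # Euclidean gradient of 0.5*||P_mask(X - A)||^2
                X = svd_retract(X - step * grad, r)         # take a step, then retract to rank r
            return X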

    Tracking Control Based on Recurrent Neural Networks for Nonlinear Systems with Multiple Inputs and Unknown Deadzone

    This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs, each subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is employed directly to identify the uncertain dynamics. Using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by means of a suitable control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is avoided and the difference between the state of the neural identifier and the reference trajectory can be proven to converge exponentially to a bounded zone. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller works satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.
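
    The deadzone decomposition mentioned in the abstract can be illustrated with a small sketch: a symmetric deadzone D(u) is rewritten as a linear term m*u plus a bounded disturbance-like term d(u). The slope, break-point, and function names below are illustrative assumptions, not values from the paper.

        import numpy as np

        M, B = 1.2, 0.5   # assumed slope and break-point of a symmetric deadzone

        def deadzone(u, m=M, b=B):
            """Symmetric deadzone nonlinearity."""
            return np.where(u >= b, m * (u - b),
                   np.where(u <= -b, m * (u + b), 0.0))

        def disturbance_term(u, m=M, b=B):
            """Bounded term d(u) in the decomposition deadzone(u) = m*u + d(u)."""
            return np.where(u >= b, -m * b,
                   np.where(u <= -b, m * b, -m * u))

        u = np.linspace(-2, 2, 9)
        assert np.allclose(deadzone(u), M * u + disturbance_term(u))   # decomposition holds
        assert np.all(np.abs(disturbance_term(u)) <= M * B + 1e-12)    # d(u) is bounded by m*b

    Because d(u) is bounded, it can be treated as a disturbance by the neural identifier and the controller, which is the generic idea behind this kind of decomposition.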

    5th EUROMECH nonlinear dynamics conference, August 7-12, 2005 Eindhoven : book of abstracts

    Inertial and Second-order Optimization Algorithms for Training Neural Networks

    Neural network models have become highly popular over the last decade due to their efficiency in a wide range of applications. These are very large parametric models whose parameters must be set for each specific task. This crucial process of choosing the parameters, known as training, is done using large datasets. Due to the large amount of data and the size of the neural networks, the training phase is very expensive in terms of computational time and resources. From a mathematical point of view, training a neural network means solving a large-scale optimization problem; more specifically, it involves the minimization of a sum of functions. The large-scale nature of the optimization problem severely restricts the types of algorithms available to minimize this sum of functions. In this context, standard algorithms rely almost exclusively on inexact gradients obtained through the backpropagation method and mini-batch sub-sampling. As a result, first-order methods such as stochastic gradient descent (SGD) remain the most widely used for training neural networks. Additionally, the function to minimize is non-convex and possibly non-differentiable, resulting in limited convergence guarantees for these methods. In this thesis, we focus on building new algorithms that exploit second-order information using only noisy first-order automatic differentiation. Starting from a dynamical system (an ordinary differential equation), we build INNA, an inertial and Newtonian algorithm. By analyzing the dynamical system and INNA together, we prove the convergence of the algorithm to the critical points of the function to minimize. Then, we show that the limit is actually a local minimum with overwhelming probability. Finally, we introduce Step-Tuned SGD, which automatically adjusts the step-sizes of SGD. It does so by modifying the mini-batch sub-sampling in a way that allows an efficient discretization of second-order information. We prove the almost sure convergence of Step-Tuned SGD to critical points and provide rates of convergence. All the theoretical results are backed by promising numerical experiments on deep learning problems.
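
    As background for the mini-batch, inertial setting described above, the sketch below shows a generic heavy-ball SGD loop in which the gradient is estimated on a random mini-batch. It is illustrative only; it is neither the INNA nor the Step-Tuned SGD update from the thesis, and the names and hyperparameters are assumptions.

        import numpy as np

        def inertial_sgd(grad_batch, x0, data, lr=0.01, momentum=0.9,
                         batch_size=32, epochs=10, seed=0):
            """Generic inertial (heavy-ball) mini-batch SGD.

            grad_batch(x, batch) must return a stochastic gradient of the
            sum-of-functions objective, estimated on the samples in `batch`.
            """
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float).copy()
            v = np.zeros_like(x)
            n = len(data)
            for _ in range(epochs):
                idx = rng.permutation(n)                       # reshuffle the dataset each epoch
                for start in range(0, n, batch_size):
                    batch = [data[i] for i in idx[start:start + batch_size]]
                    g = grad_batch(x, batch)                   # inexact (mini-batch) gradient
                    v = momentum * v - lr * g                  # inertial / momentum update
                    x = x + v
            return x

    For a least-squares toy problem, grad_batch(x, batch) would simply return the average of the per-sample gradients over the mini-batch.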

    Safety-aware model-based reinforcement learning using barrier transformation

    The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or when the systems are safety-critical. Safety requirements are often expressed in terms of state and/or control constraints. Methods such as barrier transformation and control barrier functions have been successfully used for safe learning in systems under state and/or control constraints, in conjunction with model-based reinforcement learning to learn the optimal control policy. However, existing barrier-based safe learning methods rely on fully known models and full state feedback. In this thesis, two different safe model-based reinforcement learning techniques are developed. One technique utilizes a novel filtered concurrent learning method to realize simultaneous learning and control in the presence of model uncertainties for safety-critical systems, and the other utilizes a novel dynamic state estimator to realize simultaneous learning and control for safety-critical systems with a partially observable state. The applicability of the developed techniques is demonstrated through simulations, and to illustrate their effectiveness, comparative simulations are presented wherever alternative methods exist for the problem under consideration. The thesis concludes with a discussion of the limitations of the developed techniques. Extensions of the developed techniques are also proposed, along with possible approaches to achieve them.
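
    The core idea of a barrier transformation is to map a state constrained to an interval onto an unconstrained coordinate, so that learning and control in the transformed coordinate keep the original state within its bounds by construction. The sketch below uses a simple symmetric tanh-based transformation for illustration; the thesis may use a different (for example, asymmetric log-ratio) transformation, and the bound and names here are assumptions.

        import numpy as np

        A = 2.0   # assumed symmetric state bound |x| < A (illustrative)

        def to_barrier(x, a=A):
            """Map the constrained state x in (-a, a) to an unconstrained coordinate s."""
            return np.arctanh(x / a)

        def from_barrier(s, a=A):
            """Inverse map: any real s corresponds to a state strictly inside (-a, a)."""
            return a * np.tanh(s)

        # Whatever value the learner produces for s, the recovered state respects the bound.
        s = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
        x = from_barrier(s)
        assert np.all(np.abs(x) < A)
        assert np.allclose(to_barrier(x), s)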

    Adaptive control and neural network control of nonlinear discrete-time systems

    Ph.D. (Doctor of Philosophy)

    Optimization and Learning over Riemannian Manifolds

    Learning over smooth nonlinear spaces has found wide application. A principled approach to such problems is to endow the search space with a Riemannian manifold geometry so that numerical optimization can be performed intrinsically. Recent years have seen a surge of interest in leveraging Riemannian optimization for nonlinearly constrained problems. This thesis investigates and improves on existing algorithms for Riemannian optimization, with a focus on unified analysis frameworks and generic strategies. To this end, the first chapter systematically studies, on the manifold of positive-definite matrices, the choice of Riemannian geometries and their impact on algorithmic convergence. The second chapter considers stochastic optimization on manifolds and proposes a unified framework for analyzing and improving the convergence of Riemannian variance-reduction methods for nonconvex functions. The third chapter introduces a generic acceleration scheme based on the idea of extrapolation, which achieves the optimal convergence rate asymptotically while being empirically efficient.
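
    As a small, hedged illustration of Riemannian optimization on the manifold of positive-definite matrices (not an algorithm from the thesis), the sketch below runs Riemannian gradient descent under the affine-invariant metric to compute the geometric (Karcher) mean of a set of SPD matrices; the helper names and the fixed step are assumptions.

        import numpy as np

        def _sym_fun(X, f):
            """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
            w, V = np.linalg.eigh(X)
            return (V * f(w)) @ V.T

        def spd_exp(X, xi):
            """Exponential map at X (affine-invariant metric) applied to a tangent vector xi."""
            Xh = _sym_fun(X, np.sqrt)
            Xmh = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
            return Xh @ _sym_fun(Xmh @ xi @ Xmh, np.exp) @ Xh

        def spd_log(X, A):
            """Logarithm map at X: the tangent vector pointing from X toward A."""
            Xh = _sym_fun(X, np.sqrt)
            Xmh = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
            return Xh @ _sym_fun(Xmh @ A @ Xmh, np.log) @ Xh

        def karcher_mean(mats, iters=50):
            """Riemannian gradient descent for the geometric (Karcher) mean of SPD matrices."""
            X = sum(mats) / len(mats)                  # arithmetic mean as the starting point
            for _ in range(iters):
                # xi = (1/n) * sum_i Log_X(A_i), i.e. a descent direction for the
                # sum of squared affine-invariant distances; stepping along it is
                # the classical fixed-point iteration for the Karcher mean.
                xi = sum(spd_log(X, A) for A in mats) / len(mats)
                X = spd_exp(X, xi)
            return X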