    The Magnus expansion and some of its applications

    Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial re-summation of infinitely many terms, with the important additional property of preserving at every order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations, and related non-perturbative expansions. Second, to provide a bridge to its implementation as a generator of special-purpose numerical integration methods, a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from the Magnus expansion in comparison with those from both perturbative schemes and standard numerical integrators. We support this discussion with a review of the wide range of physical applications the Magnus expansion has found in the literature. Comment: Report on the Magnus expansion for differential equations and its applications to several physical problems
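    As a minimal sketch of the idea described above, the first-order Magnus approximant for x'(t) = A(t) x(t) replaces the exact exponent by Omega_1(t) = ∫₀ᵗ A(s) ds and approximates x(t) ≈ exp(Omega_1(t)) x(0). The example matrix and the midpoint quadrature below are illustrative assumptions, not taken from the review; they are chosen so that A(t) commutes with itself at different times, making the first approximant exact and the symmetry-preservation property visible (the exponential of a skew-symmetric exponent is a rotation).

```python
import numpy as np
from scipy.linalg import expm

def magnus1(A, t, n=200):
    """First-order Magnus exponent Omega_1(t) = int_0^t A(s) ds,
    computed entrywise with the midpoint rule (a sketch)."""
    dt = t / n
    s = (np.arange(n) + 0.5) * dt
    return sum(A(si) for si in s) * dt

# Illustrative example: A(t) = t * J with J skew-symmetric, so
# [A(t1), A(t2)] = 0 and Omega_1 is the exact exponent.
A = lambda t: np.array([[0.0, t], [-t, 0.0]])

t = 1.0
Omega = magnus1(A, t)            # = [[0, t^2/2], [-t^2/2, 0]]
x = expm(Omega) @ np.array([1.0, 0.0])   # rotation by t^2/2 of the initial state
```

Because exp of a skew-symmetric matrix is orthogonal, the approximant preserves the norm of x at any truncation order, exactly the structure-preservation property the abstract emphasizes over truncated perturbation series.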

    Two-parameter Sturm-Liouville problems

    This paper deals with the computation of the eigenvalues of two-parameter Sturm-Liouville (SL) problems using the Regularized Sampling Method, a method which has been effective in computing the eigenvalues of broad classes of SL problems (singular, non-self-adjoint, non-local, impulsive, ...). We show in this work that it can tackle two-parameter SL problems with equal ease. An example is provided to illustrate the effectiveness of the method. Comment: 9 pages
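    For readers unfamiliar with SL eigenvalue computation, the following is a toy illustration of the underlying task, not the Regularized Sampling Method of the paper: a standard shooting approach for the classical one-parameter problem -y'' = λy, y(0) = y(π) = 0, whose eigenvalues are known to be λ = k². The bracket endpoints are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def miss(lam):
    """Shooting function: solve the IVP y(0) = 0, y'(0) = 1 and return
    y(pi; lam). Its zeros are the eigenvalues of the boundary-value problem."""
    sol = solve_ivp(lambda t, y: [y[1], -lam * y[0]], (0.0, np.pi), [0.0, 1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bracket the first eigenvalue (known to be lam = 1) and solve by bisection.
lam1 = brentq(miss, 0.5, 2.5)
```

Two-parameter problems replace the single λ with a pair (λ, μ) coupled across two equations, which is what makes them harder and motivates methods like the one in the paper.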

    Chen-Fliess Series for Linear Distributed Systems

    Distributed systems like fluid flow and heat transfer are modeled by partial differential equations (PDEs). In control theory, distributed systems are generally reformulated in terms of a linear state space realization, where the state space is an infinite-dimensional Banach space or Hilbert space. In the finite-dimensional case, the input-output map can always be written in terms of a Chen-Fliess functional series, that is, a weighted sum of iterated integrals of the components of the input function. The Chen-Fliess functional series has been used to describe interconnected nonlinear systems, to solve system inversion and tracking problems, and to design predictive and adaptive controllers. The main goal of this thesis is to show that there is a generalized notion of a Chen-Fliess series for linear distributed systems where the weights are now linear operators acting on the iterated integrals. Sufficient conditions for convergence are developed. The method is compared against classical PDE theory using a number of first-order and second-order examples.
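    The building block of any Chen-Fliess series is the iterated integral indexed by a word over the input channels. A minimal numerical sketch of that recursion is below (the grid, input, and word are illustrative assumptions); a Chen-Fliess series then forms y(t) as a weighted sum of such terms over all words.

```python
import numpy as np

def iterated_integral(word, u, t_grid):
    """Compute the iterated integral E_eta[u] on t_grid by the recursion
        E_{x_i eta'}[u](t) = int_0^t u_i(s) E_{eta'}[u](s) ds,  E_empty = 1,
    where `word` is a tuple of input-channel indices read left to right
    and u maps each index to samples of that input on t_grid."""
    E = np.ones_like(t_grid)
    dt = np.diff(t_grid)
    for i in reversed(word):          # innermost integral first
        integrand = u[i] * E
        # cumulative trapezoidal integration from 0 to each grid point
        E = np.concatenate(([0.0],
                            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    return E

# Example: constant input u_1 = 1 on [0, 1]; the word (1, 1) yields t^2 / 2.
t = np.linspace(0.0, 1.0, 1001)
u = {1: np.ones_like(t)}
E = iterated_integral((1, 1), u, t)
```

In the finite-dimensional case the series weights are scalars (c, η); the thesis's generalization replaces them with linear operators acting on these same integrals.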

    Continuity of Chen-Fliess Series for Applications in System Identification and Machine Learning

    Model continuity plays an important role in applications like system identification, adaptive control, and machine learning. This paper provides sufficient conditions under which input-output systems represented by locally convergent Chen-Fliess series are jointly continuous with respect to their generating series and as operators mapping a ball in an L_p-space to a ball in an L_q-space, where p and q are conjugate exponents. The starting point is to introduce a class of topological vector spaces known as Silva spaces to frame the problem and then to employ the concept of a direct limit to describe convergence. The proof of the main continuity result combines elements of proofs for other forms of continuity appearing in the literature to produce the desired conclusion. Comment: 17 pages, 1 figure, 24th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2020)

    A Spectral Algorithm for Learning Hidden Markov Models

    Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing, where the observation space is sometimes the set of words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications. Comment: Published in JCSS Special Issue "Learning Theory 2009"
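    A compact sketch of the spectral construction follows, using a small HMM whose parameters are illustrative assumptions. For clarity it plugs in exact low-order moments computed from the model (in practice these would be estimated from data, which is where the sample-complexity analysis enters); since the toy model satisfies the rank/separation condition, the observable-operator probabilities should match the classical forward algorithm.

```python
import numpy as np

m, n = 2, 3                                   # hidden states, observations
T = np.array([[0.7, 0.2], [0.3, 0.8]])        # T[i, j] = Pr[h' = i | h = j]
O = np.array([[0.5, 0.1],
              [0.3, 0.2],
              [0.2, 0.7]])                    # O[x, h] = Pr[obs = x | h]
pi = np.array([0.6, 0.4])                     # initial distribution

# Exact unigram, bigram, and trigram moments of the model.
P1 = O @ pi
P21 = O @ T @ np.diag(pi) @ O.T               # P21[i, j] = Pr[x2 = i, x1 = j]
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T for x in range(n)]

# Spectral construction: SVD of the bigram matrix, then observable operators.
U = np.linalg.svd(P21)[0][:, :m]
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n)]

def prob(seq):
    """Pr[x_1, ..., x_t] via the observable-operator representation."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return binf @ b

def prob_forward(seq):
    """Reference value from the classical forward algorithm."""
    a = pi * O[seq[0]]
    for x in seq[1:]:
        a = O[x] * (T @ a)
    return a.sum()
```

Note that `prob` never reconstructs T or O themselves: only the SVD subspace and the operators B_x are needed, which is why the method sidesteps the local optima of likelihood-based heuristics.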