    Constant directions of the Riccati equation

    A constant direction of the Riccati equation associated with a class of singular discrete-time optimization problems is defined. The set of constant directions is completely characterized from a control viewpoint. Constant directions are used to reduce the computational complexity of the optimal system. An application to optimal filtering in colored noise is given.
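The discrete-time Riccati recursion at the heart of this line of work can be illustrated numerically. The sketch below iterates the standard filtering Riccati equation to its fixed point; the system matrices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state system (A, C, Q, R are assumed for illustration).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)   # process noise covariance
R = np.array([[0.1]])  # measurement noise covariance

def riccati_step(P):
    """One step: P+ = A P A' - A P C' (C P C' + R)^{-1} C P A' + Q."""
    S = C @ P @ C.T + R
    K = A @ P @ C.T @ np.linalg.inv(S)
    return A @ P @ A.T - K @ C @ P @ A.T + Q

P = np.eye(2)
for _ in range(200):
    P = riccati_step(P)

# At the fixed point, one more step leaves P numerically unchanged.
print(np.allclose(P, riccati_step(P), atol=1e-8))
```

Characterizing constant directions lets such a recursion be carried out in a lower-dimensional subspace, which is the complexity reduction the abstract refers to.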

    Data Transmission Over Networks for Estimation and Control

    We consider the problem of controlling a linear time-invariant process when the controller is remote from the location where the sensor measurements are generated. The communication from the sensor to the controller is supported by a communication network with arbitrary topology composed of analog erasure channels. Using a separation principle, we prove that the optimal linear-quadratic-Gaussian (LQG) controller consists of an LQ optimal regulator along with an estimator that estimates the state of the process across the communication network. We then determine the optimal information processing strategy that should be followed by each node in the network so that the estimator is able to compute the best possible estimate in the minimum mean squared error sense. The algorithm is optimal for any packet-dropping process and at every time step, even though it is recursive and hence requires a constant amount of memory, processing and transmission at every node in the network per time step. For the case when the packet drop processes are memoryless and independent across links, we analyze the stability properties and the performance of the closed loop system. The algorithm is an attempt to escape the viewpoint of treating a network of communication links as a single end-to-end link with the probability of successful transmission determined by some measure of the reliability of the network.
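The core difficulty the abstract addresses can be seen in a minimal simulation: an estimator fed over a single erasure channel that propagates its own prediction whenever a packet is lost. This is a toy sketch of the estimation side only; the dynamics, noise level, and drop probability below are assumptions, and the paper's actual scheme processes information at every node of an arbitrary network.

```python
import random

random.seed(0)
a, p_drop = 0.8, 0.3        # stable scalar dynamics, 30% packet drop (assumed)
x, xhat = 1.0, 0.0
errors = []
for _ in range(1000):
    w = random.gauss(0.0, 0.1)
    x = a * x + w                # process evolves with additive noise
    if random.random() > p_drop:
        xhat = x                 # packet arrives: estimator receives the state
    else:
        xhat = a * xhat          # packet dropped: propagate the prediction
    errors.append((x - xhat) ** 2)

print(sum(errors) / len(errors))  # empirical mean squared estimation error
```

For memoryless, independent drops and an unstable mode a, this kind of scheme is stable only when the drop probability satisfies p_drop * a**2 < 1, which is the flavor of stability condition the abstract's analysis makes precise.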

    Universal Nonlinear Filtering Using Feynman Path Integrals II: The Continuous-Continuous Model with Additive Noise

    In this paper, the Feynman path integral formulation of the continuous-continuous filtering problem, a fundamental problem of applied science, is investigated for the case when the noise in the signal and measurement model is additive. It is shown that it leads to an independent and self-contained analysis and solution of the problem. A consequence of this analysis is a Feynman path integral formula for the conditional probability density that manifests the underlying physics of the problem. A corollary of the path integral formula is the Yau algorithm, which has been shown to be superior to all other known algorithms. The Feynman path integral formulation is shown to lead to practical and implementable algorithms. In particular, the solution of the Yau PDE is reduced to one of function computation and integration.
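The claim that the solution reduces to "function computation and integration" can be illustrated with a generic grid-based prediction-correction filter for a 1-D problem with additive noise. This is not the Yau or path-integral algorithm itself, only a sketch of the same structure (propagate a density by integration, then correct by the measurement likelihood); the drift and noise parameters are assumed.

```python
import math

N, L = 201, 5.0
xs = [-L + 2 * L * i / (N - 1) for i in range(N)]
dx = xs[1] - xs[0]

def gauss(u, var):
    return math.exp(-u * u / (2 * var)) / math.sqrt(2 * math.pi * var)

f = lambda x: 0.9 * x   # signal drift (assumed for illustration)
q, r = 0.1, 0.1         # process / measurement noise variances (assumed)

p = [gauss(x, 1.0) for x in xs]  # prior density on the grid

def step(p, y):
    # Prediction: integrate the transition kernel against the density.
    pred = [sum(gauss(xj - f(xi), q) * pi for xi, pi in zip(xs, p)) * dx
            for xj in xs]
    # Correction: multiply by the measurement likelihood, renormalize.
    post = [pj * gauss(y - xj, r) for xj, pj in zip(xs, pred)]
    z = sum(post) * dx
    return [pj / z for pj in post]

p = step(p, 0.5)
mean = sum(x * pi for x, pi in zip(xs, p)) * dx
print(round(mean, 3))  # posterior mean after observing y = 0.5
```

For these linear-Gaussian choices the grid filter agrees with the Kalman answer (posterior mean ≈ 0.45), which is a useful sanity check for any such discretization.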

    Best Subset Selection via a Modern Optimization Lens

    In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving Mixed Integer Optimization (MIO) problems. We present a MIO approach for solving the classical best subset selection problem of choosing k out of p features in linear regression given n observations. We develop a discrete extension of modern first order continuous optimization methods to find high quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with n in the 1000s and p in the 100s in minutes to provable optimality, and finds near optimal solutions for n in the 100s and p in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.
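The "discrete extension of first order methods" used for warm starts can be sketched as projected gradient descent with hard thresholding: after each gradient step, keep only the k largest-magnitude coefficients. The synthetic data below is illustrative; a full implementation would pass the result to a MIO solver for certified optimality.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_feat, k = 100, 20, 3
X = rng.standard_normal((n, p_feat))
beta_true = np.zeros(p_feat)
beta_true[:k] = [3.0, -2.0, 1.5]        # true sparse coefficients
y = X @ beta_true + 0.01 * rng.standard_normal(n)

L = np.linalg.norm(X, 2) ** 2           # Lipschitz constant of the gradient
beta = np.zeros(p_feat)
for _ in range(500):
    grad = X.T @ (X @ beta - y)
    z = beta - grad / L
    # Hard-threshold: project onto {beta : ||beta||_0 <= k}.
    keep = np.argsort(np.abs(z))[-k:]
    beta = np.zeros(p_feat)
    beta[keep] = z[keep]

print(sorted(np.flatnonzero(beta).tolist()))  # recovered support
```

On this well-conditioned instance the iteration recovers the true support; the abstract's point is that feeding such a solution to an MIO solver upgrades "good in practice" to a provable suboptimality guarantee.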