
    Sequence-based Anytime Control

    We present two related anytime algorithms for control of nonlinear systems when the available processing resources are time-varying. The basic idea is to calculate tentative control input sequences for as many time steps into the future as the available processing resources allow at every time step. This compensates for the time steps when the processor is not available to perform any control calculations. Using a stochastic Lyapunov function based approach, we analyze the stability of the resulting closed-loop system both when the processor availability can be modeled as an independent and identically distributed sequence and when it is governed by an underlying Markov chain. Numerical simulations indicate that the increase in performance due to the proposed algorithms can be significant.
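    A minimal sketch of the sequence-based idea in Python follows; the plant, the one-step feedback law, and the distribution of the processing budget are illustrative assumptions, not the paper's setup. At each step the controller fills a buffer with as many tentative future inputs as the current budget allows, and when the processor is unavailable the plant consumes previously buffered inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x, u):
    # Hypothetical scalar nonlinear plant, for illustration only.
    return 0.9 * x + 0.2 * np.sin(x) + u

def one_step_feedback(x):
    # Hypothetical stabilizing feedback used to build tentative sequences.
    return -0.5 * x - 0.2 * np.sin(x)

def tentative_sequence(x, budget):
    """Compute up to `budget` tentative inputs by simulating the model forward."""
    seq, x_pred = [], x
    for _ in range(budget):
        u = one_step_feedback(x_pred)
        seq.append(u)
        x_pred = dynamics(x_pred, u)
    return seq

x, buffer = 2.0, []
for k in range(50):
    # Time-varying processing budget; 0 means the processor is unavailable.
    budget = rng.choice([0, 1, 2, 3], p=[0.3, 0.3, 0.2, 0.2])
    if budget > 0:
        buffer = tentative_sequence(x, budget)   # recompute from the current state
    u = buffer.pop(0) if buffer else 0.0         # next buffered input (zero if the buffer is empty)
    x = dynamics(x, u)
```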

    Sparse and Constrained Stochastic Predictive Control for Networked Systems

    This article presents a novel class of control policies for networked control of Lyapunov-stable linear systems with bounded inputs. The control channel is assumed to have i.i.d. Bernoulli packet dropouts, and the system is assumed to be affected by additive stochastic noise. Our proposed class of policies is affine in the past dropouts and the saturated values of the past disturbances. We further include a regularization term in a quadratic performance index to promote sparsity in control. We demonstrate how to augment the underlying optimization problem with a constant negative drift constraint to ensure mean-square boundedness of the closed-loop states, yielding a convex quadratic program to be solved periodically online. The states of the closed-loop plant under the receding-horizon implementation of the proposed class of policies are mean-square bounded for any positive bound on the control and any non-zero probability of successful transmission.
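    As a rough sketch of the kind of convex program solved periodically online, the CVXPY snippet below sets up a nominal receding-horizon quadratic program with a hard input bound and an l1 term promoting sparse control. The plant matrices, horizon, and weights are assumptions, and the sketch deliberately omits the paper's key ingredients: the affine dependence on past dropouts and saturated disturbances, and the negative drift constraint.

```python
import cvxpy as cp
import numpy as np

# Illustrative Lyapunov-stable 2-state, 1-input plant; all values are assumptions.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [1.0]])
N, umax, lam = 10, 1.0, 0.5           # horizon, input bound, sparsity weight
x0 = np.array([3.0, -1.0])            # current state, measured at each period

X = cp.Variable((2, N + 1))
U = cp.Variable((1, N))
cost = 0
constraints = [X[:, 0] == x0]
for t in range(N):
    cost += cp.sum_squares(X[:, t]) + 0.1 * cp.sum_squares(U[:, t])
    cost += lam * cp.norm(U[:, t], 1)          # l1 term promoting sparse control
    constraints += [X[:, t + 1] == A @ X[:, t] + B @ U[:, t],
                    cp.abs(U[:, t]) <= umax]   # hard bound on the input

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
u_now = U.value[:, 0]   # apply the first input, then re-solve at the next period
```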

    Improved Distributed Estimation Method for Environmental Time-Variant Physical Variables in Static Sensor Networks

    In this paper, an improved distributed estimation scheme for static sensor networks is developed, targeting environmental time-variant physical variables. The main contribution of this work is an extension of the algorithm in [1]-[3]: a filter is designed with weights chosen so that the variance of the estimation errors is minimized, which considerably improves the filter design, characterizes the performance limit of the filter, and enables tracking of a time-varying signal. Moreover, certain parameter optimization is alleviated by the application of a particular finite impulse response (FIR) filter. Simulation results show the effectiveness of the developed estimation algorithm.
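    A toy sketch of a weighted consensus-style filter tracking a time-varying field is given below; the ring topology, noise level, and fixed weights are assumptions for illustration and do not reproduce the variance-minimizing weight design or the FIR filter of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_steps = 5, 200

# Assumed ring topology: each node communicates with its two neighbors.
adj = np.zeros((n_nodes, n_nodes), dtype=bool)
for i in range(n_nodes):
    adj[i, (i - 1) % n_nodes] = adj[i, (i + 1) % n_nodes] = True

x_hat = np.zeros(n_nodes)        # each node's local estimate
alpha, eps = 0.5, 0.2            # measurement weight and consensus gain (assumed fixed)

for k in range(n_steps):
    signal = np.sin(0.05 * k)                            # time-varying physical variable
    y = signal + 0.3 * rng.standard_normal(n_nodes)      # noisy local measurements
    x_new = np.empty(n_nodes)
    for i in range(n_nodes):
        neighbors = x_hat[adj[i]]
        # Blend the node's own estimate, its measurement, and its neighbors'
        # estimates; the paper instead optimizes these weights to minimize
        # the estimation-error variance.
        x_new[i] = ((1 - alpha) * x_hat[i] + alpha * y[i]
                    + eps * np.sum(neighbors - x_hat[i]))
    x_hat = x_new
```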

    A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints

    We develop a simple routine unifying the analysis of several important recently developed stochastic optimization methods, including SAGA, Finito, and stochastic dual coordinate ascent (SDCA). First, we show an intrinsic connection between stochastic optimization methods and dynamic jump systems, and propose a general jump system model for stochastic optimization methods. Our proposed model recovers SAGA, SDCA, Finito, and SAG as special cases. We then combine jump system theory with several simple quadratic inequalities to derive sufficient conditions for certifying convergence rates of the proposed jump system model under various assumptions (with or without individual convexity, etc.). The derived conditions are linear matrix inequalities (LMIs) whose sizes roughly scale with the size of the training set. We exploit the symmetry in the stochastic optimization methods to reduce these LMIs to equivalent small LMIs whose sizes are at most 3-by-3. We solve these small LMIs to provide analytical proofs of new convergence rates for SAGA, Finito, and SDCA (with or without individual convexity). We also explain why our proposed LMI fails in analyzing SAG, reveal a key difference between SAG and the other methods, and briefly discuss how to extend our LMI analysis to SAG. An advantage of our approach is that the analysis can be automated for a large class of stochastic methods under various assumptions.
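    For a flavor of the underlying jump-system machinery, the sketch below certifies a mean-square convergence rate for a generic i.i.d. jump linear system x_{k+1} = A_{i_k} x_k: the LMI sum_i p_i A_i' P A_i <= rho^2 P (in the semidefinite order) with P > 0 is feasible precisely when rho^2 exceeds the spectral radius of sum_i p_i kron(A_i, A_i), which the snippet computes directly. The matrices are toy assumptions, and this is the basic rate test rather than the paper's reduced 3-by-3 LMIs with quadratic constraints for SAGA, Finito, and SDCA.

```python
import numpy as np

# Toy i.i.d. jump linear system x_{k+1} = A_{i_k} x_k; the matrices and
# mode probabilities are assumptions for illustration.
A_modes = [np.array([[0.9, 0.2],
                     [0.0, 0.7]]),
           np.array([[0.5, 0.0],
                     [0.3, 0.8]])]
p = [0.5, 0.5]

# The second moment E[x x'] evolves linearly under sum_i p_i kron(A_i, A_i),
# so the spectral radius of that matrix equals rho^2, the tightest mean-square
# decay rate; the corresponding LMI with P > 0 is feasible for any larger rho^2.
M = sum(pi * np.kron(Ai, Ai) for pi, Ai in zip(p, A_modes))
rho = np.sqrt(max(abs(np.linalg.eigvals(M))))
print(f"certified mean-square rate rho = {rho:.3f}")
```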