
    Gradient methods for iterative distributed control synthesis

    In this paper we present a gradient method to iteratively update the local controllers of a distributed linear system driven by stochastic disturbances. The control objective is to minimize the sum of the variances of the states and inputs at all nodes. We show that the gradients of this objective can be estimated distributively, using data from a forward simulation of the system model and a backward simulation of the adjoint equations. Iterative updates of the local controllers using these gradient estimates give convergence towards a locally optimal distributed controller.
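
    As a rough, centralized illustration of the forward/backward idea (a sketch, not the paper's distributed algorithm), the following Python snippet estimates the gradient of the average quadratic cost with respect to a static feedback gain K, with u_t = -K x_t: one noisy forward rollout of the closed loop, then one backward pass of the adjoint recursion. The weights Q and R standing in for the state/input variance objective and the horizon T are assumptions here.

    import numpy as np

    def adjoint_gradient(A, B, K, Q, R, T=500, seed=0):
        # Sample gradient of (1/T) * sum_t (x_t'Q x_t + u_t'R u_t) w.r.t. K,
        # for x_{t+1} = A x_t + B u_t + w_t with u_t = -K x_t.
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        x = np.zeros(n)
        xs, us = [], []
        for _ in range(T):                  # forward simulation of the model
            u = -K @ x
            xs.append(x)
            us.append(u)
            x = A @ x + B @ u + rng.standard_normal(n)
        lam = np.zeros(n)                   # adjoint terminal condition
        grad = np.zeros(K.shape)
        for t in reversed(range(T)):        # backward simulation of the adjoint
            grad += np.outer(-2 * R @ us[t] - B.T @ lam, xs[t])
            lam = (A - B @ K).T @ lam + 2 * Q @ xs[t] - K.T @ (2 * R @ us[t])
        return grad / T

    Repeating K = K - step * adjoint_gradient(A, B, K, Q, R) gives the iterative controller update; in the paper both simulations are carried out distributively, so each node updates its local controller from local data only.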

    Design of First-Order Optimization Algorithms via Sum-of-Squares Programming

    In this paper, we propose a framework based on sum-of-squares programming to design iterative first-order optimization algorithms for smooth and strongly convex problems. Our starting point is to develop a polynomial matrix inequality as a sufficient condition for exponential convergence of the algorithm. The entries of this matrix are polynomial functions of the unknown parameters (exponential decay rate, stepsize, momentum coefficient, etc.). We then formulate a polynomial optimization problem in which the objective is to optimize the exponential decay rate over the parameters of the algorithm. Finally, we use sum-of-squares programming as a tractable relaxation of this polynomial optimization problem. We illustrate the utility of the proposed framework by designing a first-order algorithm that shares the same structure as Nesterov's accelerated gradient method.
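
    For context, the iteration family being tuned here is the standard two-parameter momentum method. A minimal Python sketch follows, using the textbook parameter choices for an L-smooth, μ-strongly convex quadratic; the sum-of-squares machinery itself (the polynomial matrix inequality and its relaxation) is not shown, and these parameter values are the classical ones rather than those produced by the paper's program.

    import numpy as np

    def momentum_method(grad_f, x0, alpha, beta, iters=200):
        # y_k = x_k + beta * (x_k - x_{k-1});  x_{k+1} = y_k - alpha * grad_f(y_k).
        # (alpha, beta) are the parameters a sum-of-squares program would
        # optimize for the best certified exponential decay rate.
        x_prev = x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = x + beta * (x - x_prev)
            x_prev, x = x, y - alpha * grad_f(y)
        return x

    L, mu = 10.0, 1.0                       # smoothness / strong convexity
    alpha = 1.0 / L
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = momentum_method(lambda z: np.array([L, mu]) * z, np.ones(2), alpha, beta)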

    A Distributed Scheduling Algorithm to Provide Quality-of-Service in Multihop Wireless Networks

    Controlling multihop wireless networks in a distributed manner while providing end-to-end delay guarantees for different flows is a challenging problem. Using the notions of draining time and discrete review from the theory of fluid limits of queues, an algorithm that meets the delay requirements of the various flows in a network is constructed. The algorithm involves an optimization that is implemented in a cyclic, distributed manner across nodes using iterative gradient ascent, with minimal information exchange between nodes. The algorithm uses time-varying weights to give priority to flows. Its performance is studied in a network with interference modelled by independent sets.
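
    The cyclic update pattern can be sketched as follows, with hedged stand-ins: a toy concave utility with per-flow weights w_i replaces the paper's draining-time objective, and the interference constraints are not modeled. Each node adjusts only its own rate from local gradient information, one node per step.

    import numpy as np

    def cyclic_gradient_ascent(local_grad, x0, step=0.1, sweeps=100):
        # Nodes take turns: node i nudges its own variable x[i] using only
        # locally available gradient information, so little information
        # needs to be exchanged per update.
        x = np.array(x0, dtype=float)
        for _ in range(sweeps):
            for i in range(len(x)):
                x[i] = max(x[i] + step * local_grad(x, i), 0.0)
        return x

    w = np.array([3.0, 1.0, 2.0])           # stand-in time-varying flow weights
    local_grad = lambda x, i: w[i] / (1.0 + x[i]) - 1.0
    rates = cyclic_gradient_ascent(local_grad, np.zeros(3))  # -> approx. [2, 0, 1]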

    FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging

    The total variation (TV) penalty, like many other analysis-sparsity problems, does not lead to separable factors or a proximal operator with a closed-form expression, such as soft thresholding for the ℓ1 penalty. As a result, in a variational formulation of an inverse problem or a statistical-learning estimation, it leads to challenging non-smooth optimization problems that are often solved with elaborate single-step first-order methods. When the data-fit term arises from empirical measurements, as in brain imaging, it is often very ill-conditioned and without simple structure. In this situation, the computational cost of the gradient step in proximal splitting methods can easily dominate each iteration, so it is beneficial to minimize the number of gradient steps. We present fAASTA, a variant of FISTA that relies on an internal solver for the TV proximal operator and refines its tolerance to balance the computational cost of the gradient and proximal steps. We give benchmarks and illustrations on "brain decoding": recovering brain maps from noisy measurements to predict observed behavior. The algorithm, as well as the empirical study of convergence speed, is valuable for any non-exact proximal operator, in particular for analysis-sparsity problems.
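
    The core mechanism can be sketched in a few lines of Python: a FISTA outer loop whose proximal step is delegated to an approximate inner solver, with the inner tolerance tightened as the outer iterates settle. The geometric tolerance schedule and the prox_g(v, step, tol) interface are illustrative assumptions, not the paper's exact adaptive rule.

    import numpy as np

    def fista_inexact(grad_f, L, prox_g, x0, iters=100, tol0=1e-1, shrink=0.5):
        # FISTA with an inexact proximal operator: prox_g(v, step, tol) is
        # assumed to run an inner iterative solver (e.g. for the TV prox)
        # until it reaches tolerance tol.
        x = z = np.asarray(x0, dtype=float)
        t, tol = 1.0, tol0
        for _ in range(iters):
            x_new = prox_g(z - grad_f(z) / L, 1.0 / L, tol)   # gradient + inexact prox
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)     # FISTA momentum
            x, t = x_new, t_new
            tol *= shrink     # cheap, loose prox solves early; accurate ones late
        return x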

    Warm-started wavefront reconstruction for adaptive optics

    Future extreme adaptive optics (ExAO) systems have been proposed with up to 10^5 sensors and actuators. We analyze the computational speed of iterative reconstruction algorithms for such large systems. We compare a total of 15 different scalable methods, including multigrid, preconditioned conjugate gradient, and several new variants of these. Simulations on a 128×128 square sensor/actuator geometry using Taylor frozen-flow dynamics are carried out using both open-loop and closed-loop measurements, and the algorithms are compared on the basis of mean squared error and the number of floating-point multiplications required. We also investigate the use of warm starting, where the most recent estimate is used to initialize the iterative scheme. In open-loop estimation or pseudo-open-loop control, warm starting provides a significant computational speedup; almost every algorithm tested converges in one iteration. In a standard closed-loop implementation using a single iteration per time step, most algorithms give the minimum error even with a cold start, and every algorithm gives the minimum error if warm started. The best algorithm is therefore the one with the smallest computational cost per iteration, not necessarily the one with the best quasi-static performance.
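
    As a small illustration of warm starting, assume the per-frame reconstruction has been reduced to a symmetric positive-definite system A phi = s (e.g. the normal equations); a SciPy conjugate-gradient loop can then reuse the previous frame's estimate as the initial iterate. This is a sketch of the mechanism, not any of the 15 benchmarked methods.

    import numpy as np
    from scipy.sparse.linalg import cg

    def reconstruct_sequence(A, slope_frames, maxiter=1):
        # Solve A @ phi = s for each frame of sensor measurements, warm
        # starting from the previous estimate; under frozen-flow dynamics
        # consecutive frames are close, so one iteration per step can suffice.
        phi = np.zeros(A.shape[1])
        estimates = []
        for s in slope_frames:
            phi, _ = cg(A, s, x0=phi, maxiter=maxiter)  # warm start at x0
            estimates.append(phi.copy())
        return estimates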