Suboptimal Safety-Critical Control for Continuous Systems Using Prediction-Correction Online Optimization
This paper investigates control barrier function (CBF) based safety-critical
control for continuous nonlinear control-affine systems using more efficient
online algorithms derived from time-varying optimization methods. The idea is
that when the quadratic programs (QPs) or other convex optimization problems
required by the CBF-based method are not computationally affordable,
alternative suboptimal feasible solutions can be obtained more economically.
Using the barrier-based interior point method, the constrained CBF-QP problems
are transformed into unconstrained ones whose suboptimal solutions are tracked
by two continuous descent-based algorithms. To account for the lag inherent in
tracking and to exploit the system information, a prediction step is added to
the algorithms, which achieves exponential convergence to the time-varying
suboptimal solutions. The convergence and robustness of the designed methods,
as well as their safety criteria, are studied theoretically. Their
effectiveness is illustrated by simulations on anti-swing and obstacle
avoidance tasks.
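As a toy illustration of the prediction-correction idea (a generic scalar example under assumed names and parameters, not the paper's CBF-QP algorithm), the sketch below tracks the minimizer of a time-varying cost: a prediction step compensates the drift of the minimizer using the cross-derivative of the gradient, and a correction step descends the current gradient.

```python
import math

def track_minimizer(steps=2000, h=0.01, alpha=0.3):
    """Prediction-correction tracking of the minimizer of
    f(x, t) = (x - sin t)^2, whose minimizer is x*(t) = sin t."""
    x, t = 0.0, 0.0
    for _ in range(steps):
        # Prediction: compensate minimizer drift,
        # dx*/dt = -[grad_xx f]^(-1) grad_tx f = cos t.
        x += h * math.cos(t)
        t += h
        # Correction: one gradient-descent step on the current cost.
        x -= alpha * 2.0 * (x - math.sin(t))
    return x, math.sin(t)

x, xstar = track_minimizer()
```

Without the prediction step the tracking error would lag behind the moving minimizer by roughly one drift increment per step; with it, only the second-order remainder of the drift survives.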
Distributed Online Optimization via Gradient Tracking with Adaptive Momentum
This paper deals with a network of computing agents aiming to solve an online
optimization problem in a distributed fashion, i.e., by means of local
computation and communication, without any central coordinator. We propose the
gradient tracking with adaptive momentum estimation (GTAdam) distributed
algorithm, which combines a gradient tracking mechanism with first- and
second-order momentum estimates of the gradient. The algorithm is analyzed in the
online setting for strongly convex and smooth cost functions. We prove that the
average dynamic regret is bounded and that the convergence rate is linear. The
algorithm is tested on a time-varying classification problem, on a (moving)
target localization problem and in a stochastic optimization setup from image
classification. In these numerical experiments from multi-agent learning,
GTAdam outperforms state-of-the-art distributed optimization methods.
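A minimal scalar sketch of the gradient-tracking-with-adaptive-momentum idea (an illustrative simplification, not the authors' GTAdam implementation; the quadratic costs, complete-graph mixing matrix, and all parameter values are assumptions): each agent mixes its estimate with its neighbors', maintains a tracker of the average gradient, and scales its step with Adam-style first and second moment estimates of that tracker.

```python
def gtadam(b, iters=3000, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """Agents minimize sum_i 0.5*(x - b_i)^2; consensus minimizer is mean(b)."""
    n = len(b)
    W = [[1.0 / n] * n for _ in range(n)]        # doubly stochastic mixing (complete graph)
    x = [0.0] * n
    g = [xi - bi for xi, bi in zip(x, b)]        # local gradients
    s = g[:]                                     # gradient trackers, s_i^0 = grad f_i(x_i^0)
    m = [0.0] * n                                # first-moment (momentum) estimates
    v = [0.0] * n                                # second-moment estimates
    for _ in range(iters):
        m = [beta1 * mi + (1 - beta1) * si for mi, si in zip(m, s)]
        v = [beta2 * vi + (1 - beta2) * si * si for vi, si in zip(v, s)]
        d = [mi / ((vi ** 0.5) + eps) for mi, vi in zip(m, v)]
        # Consensus step plus Adam-scaled descent direction.
        x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * d[i]
                 for i in range(n)]
        g_new = [xi - bi for xi, bi in zip(x_new, b)]
        # Gradient tracking update preserves sum(s) = sum(g).
        s = [sum(W[i][j] * s[j] for j in range(n)) + gn - go
             for i, (gn, go) in enumerate(zip(g_new, g))]
        x, g = x_new, g_new
    return x

xs = gtadam([1.0, 2.0, 6.0])   # consensus minimizer is mean(b) = 3.0
```

The tracker update is the standard dynamic-average-consensus step, so each agent's `s_i` approaches the network-average gradient even though it only sees its own cost.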
Approximate Sensitivity Conditioning and Singular Perturbation Analysis for Power Converters
A feed-forward sensitivity-conditioning control strategy is analyzed in this
paper and applied to power electronic converters. The feed-forward term is
used to improve closed-loop systems such as power converters with cascaded
inner- and outer-loop controllers. The impact of the feed-forward sensitivity
term is analyzed using singular perturbation theory. In addition, the
implementation of the feed-forward control term is addressed for practical
systems, where the number of inputs is generally not sufficient for exact
sensitivity conditioning. Simulation results are presented for a buck converter
with output capacitor voltage regulation and a Permanent Magnet Synchronous
Machine (PMSM), used as a generator with an active rectifier. Finally,
experimental results are presented for the buck converter, demonstrating the
advantages and feasibility of implementing the approximate sensitivity
conditioning term in closed-loop power converters.
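A toy two-timescale illustration of why a feed-forward term helps in a singularly perturbed loop (a generic fast-subsystem example with assumed dynamics and parameters, not the converter models from the paper): the fast state eps*x' = u - x must track a slow reference r(t), and feeding forward eps times the reference rate removes the steady-state lag that a plain command u = r leaves behind.

```python
import math

def simulate(feedforward, eps=0.05, h=1e-3, T=5.0):
    """Forward-Euler simulation of eps*x' = u - x tracking r(t) = sin t.
    Returns the peak tracking error after the initial transient."""
    x, t, err = 0.0, 0.0, 0.0
    for _ in range(int(T / h)):
        r, rdot = math.sin(t), math.cos(t)
        # Feed-forward of the reference rate, scaled by the small parameter.
        u = r + (eps * rdot if feedforward else 0.0)
        x += h * (u - x) / eps
        t += h
        if t > 1.0:                      # ignore the initial transient
            err = max(err, abs(x - math.sin(t)))
    return err

err_ff, err_base = simulate(True), simulate(False)
```

Without the feed-forward term the tracking error settles at roughly eps times the reference rate; with it, only discretization error remains.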
Distributed Asynchronous Discrete-Time Feedback Optimization
In this article, we present an algorithm that drives the outputs of a network
of agents to jointly track the solutions of time-varying optimization problems
in a way that is robust to asynchrony in the agents' operations. We consider
three operations that can be asynchronous: (1) computations of control inputs,
(2) measurements of network outputs, and (3) communications of agents' inputs
and outputs. We first show that our algorithm converges to the solution of a
time-invariant feedback optimization problem at a linear rate. Next, we show that
our algorithm drives outputs to track the solution of time-varying feedback
optimization problems within a bounded error that depends on the movement of the
minimizers and the degree of asynchrony, in a way that we make precise. These
convergence results are extended to quantify agents' asymptotic behavior as the
length of their time horizon approaches infinity. Then, to ensure satisfactory
network performance, we specify the timing of agents' operations relative to
changes in the objective function that ensure a desired error bound. Numerical
experiments confirm these developments and show the success of our distributed
feedback optimization algorithm under asynchrony.
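A minimal sketch of feedback optimization under asynchrony (illustrative only, not the article's algorithm: the static identity plant, the quadratic costs, and the fixed activation periods are all assumptions): each agent applies gradient feedback steps on measured plant outputs, but only once every few iterations, crudely mimicking asynchronous computations and measurements.

```python
def feedback_opt(c, iters=400, alpha=0.2, periods=(1, 3)):
    """Static plant y_i = u_i; agent i descends its cost (y_i - c_i)^2,
    but is only active every periods[i] iterations (asynchrony stand-in)."""
    u = [0.0] * len(c)
    for k in range(iters):
        y = u[:]                          # measure current plant outputs
        for i in range(len(u)):
            if k % periods[i] == 0:       # agent i is active this round
                u[i] -= alpha * 2.0 * (y[i] - c[i])
    return u

u = feedback_opt([1.0, -2.0])
```

Each coordinate contracts geometrically at every activation, so even the slower agent converges; slower activation only lengthens, rather than destroys, the convergence, echoing the article's bounded-error results under asynchrony.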