A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
This paper considers unconstrained convex optimization problems with
time-varying objective functions. We propose algorithms with a discrete
time-sampling scheme to find and track the solution trajectory based on
prediction and correction steps, while sampling the problem data at a constant
rate of 1/h, where h is the length of the sampling interval. The prediction
step is derived by analyzing the iso-residual dynamics of the optimality
conditions. The correction step adjusts for the distance between the current
prediction and the optimizer at each time step, and consists of one or
multiple gradient steps or Newton steps, which respectively correspond to the
gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT)
algorithms. Under suitable conditions, we establish that the asymptotic error
incurred by both proposed methods behaves as O(h^2), and in some cases as
O(h^4), which outperforms the state-of-the-art error bound of O(h) for
correction-only methods in the gradient-correction step. Moreover, when the
characteristics of the objective function variation are not available, we
propose approximate gradient and Newton tracking algorithms (AGT and ANT,
respectively) that still attain these asymptotic error bounds. Numerical
simulations demonstrate the practical utility of the proposed methods and that
they improve upon existing techniques by several orders of magnitude. Comment: 16 pages, 8 figures.
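To make the prediction-correction template concrete, the following is a minimal sketch on a hypothetical scalar problem f(x; t) = 0.5*(x - sin t)^2, whose optimizer is x*(t) = sin t; the prediction step uses the identity dx*/dt = -[grad_xx f]^{-1} grad_tx f (here simply cos t), and the correction applies a few gradient steps, as in GTT. This is an illustrative toy, not the paper's implementation.

```python
import math

def track(h=0.1, steps=200, gamma=0.5, n_corr=3):
    """Prediction-correction tracking of x*(t) = sin(t) for the
    hypothetical objective f(x; t) = 0.5 * (x - sin(t))**2."""
    x, t = 0.0, 0.0
    errs = []
    for _ in range(steps):
        # Prediction: x*(t) evolves as dx*/dt = -grad_tx f / grad_xx f = cos(t)
        x = x + h * math.cos(t)
        t += h
        # Correction: a few gradient steps on f(.; t_new), as in GTT
        for _ in range(n_corr):
            x -= gamma * (x - math.sin(t))
        errs.append(abs(x - math.sin(t)))
    return errs

errs = track()
print(max(errs[-50:]))  # steady-state tracking error
```

Rerunning the sketch with a smaller sampling interval h shrinks the steady-state tracking error, the qualitative behavior the asymptotic bounds describe.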
Interior Point Method for Dynamic Constrained Optimization in Continuous Time
This paper considers a class of convex optimization problems where both the
objective function and the constraints have a continuously varying dependence
on time. Our goal is to develop an algorithm to track the optimal solution as
it continuously changes over time inside or on the boundary of the dynamic
feasible set. We develop an interior point method that asymptotically succeeds
in tracking this optimal point in nonstationary settings. The method utilizes a
time-varying constraint slack and a prediction-correction structure that relies
on time derivatives of functions and constraints and Newton steps in the
spatial domain. Error-free tracking is guaranteed under customary assumptions
on the optimization problems and time differentiability of objective and
constraints. The effectiveness of the method is illustrated in a problem that
involves multiple agents tracking multiple targets.
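A correction-only sketch of the interior-point idea, on a hypothetical 1-D problem min 0.5*(x - sin t)^2 subject to x <= 0.5, is shown below; a log-barrier with a fixed parameter mu stands in for the paper's time-varying slack, and the prediction (time-derivative) terms are omitted for brevity.

```python
import math

def barrier_newton_track(mu=1e-3, h=0.05, steps=200, newton_iters=5):
    """Running interior-point sketch: Newton steps on the log-barrier
    objective 0.5*(x - sin t)**2 - mu*log(0.5 - x) as t advances."""
    x, t = 0.0, 0.0
    for _ in range(steps):
        t += h
        for _ in range(newton_iters):
            g = (x - math.sin(t)) + mu / (0.5 - x)   # barrier gradient
            H = 1.0 + mu / (0.5 - x) ** 2            # barrier Hessian
            x_new = x - g / H                        # Newton step
            if x_new >= 0.5:                         # keep strictly feasible
                x_new = 0.5 * (x + 0.5)
            x = x_new
    return x, t

x, t = barrier_newton_track()
target = min(math.sin(t), 0.5)  # constrained optimizer at the final time
print(abs(x - target))
```

The iterate hugs the constraint boundary to within O(mu) while the unconstrained optimizer passes outside the feasible set, then resumes exact tracking once the constraint deactivates.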
Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization
We develop algorithms that find and track the optimal solution trajectory of
time-varying convex optimization problems which consist of local and
network-related objectives. The algorithms are derived from the
prediction-correction methodology, which corresponds to a strategy where the
time-varying problem is sampled at discrete time instances and then a sequence
is generated by alternately executing predictions on how the optimizers at
the next time sample are changing and corrections on how they actually have
changed. Prediction is based on how the optimality conditions evolve in time,
while correction is based on a gradient or Newton method, leading to
Decentralized Prediction-Correction Gradient (DPC-G) and Decentralized
Prediction-Correction Newton (DPC-N). We extend these methods to cases where
the knowledge of how the optimization programs are changing in time is only
approximate and propose Decentralized Approximate Prediction-Correction
Gradient (DAPC-G) and Decentralized Approximate Prediction-Correction Newton
(DAPC-N). Convergence properties of all the proposed methods are studied and
empirical performance is shown on an application of a resource allocation
problem in a wireless network. We observe that the proposed methods outperform
existing running algorithms by orders of magnitude. The numerical results
showcase a trade-off between convergence accuracy, sampling period, and network
communications.
Tracking Moving Agents via Inexact Online Gradient Descent Algorithm
Multi-agent systems are being increasingly deployed in challenging
environments for performing complex tasks such as multi-target tracking,
search-and-rescue, and intrusion detection. Notwithstanding the computational
limitations of individual robots, such systems rely on collaboration to sense
and react to the environment. This paper formulates the generic target tracking
problem as a time-varying optimization problem and puts forth an inexact online
gradient descent method for solving it sequentially. The performance of the
proposed algorithm is studied by characterizing its dynamic regret, a notion
common to the online learning literature. Building upon the existing results,
we provide improved regret rates that not only allow non-strongly convex costs
but also explicate the role of the cumulative gradient error. Two distinct
classes of problems are considered: one in which the objective function adheres
to a quadratic growth condition, and another where the objective function is
convex but the variable belongs to a compact domain. For both cases, results
are developed while allowing the error to be either adversarial or arising from
a white noise process. Further, the generality of the proposed framework is
demonstrated by developing online variants of existing stochastic gradient
algorithms and interpreting them as special cases of the proposed inexact
gradient method. The efficacy of the proposed inexact gradient framework is
established on a multi-agent multi-target tracking problem, while its
flexibility is exemplified by generating online movie recommendations for
the MovieLens dataset.
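A minimal instance of the inexact online gradient scheme, on a hypothetical scalar objective f_t(x) = 0.5*(x - b_t)^2 with a slowly moving target b_t and white-noise gradient error (one of the two error models the abstract mentions):

```python
import math, random

def inexact_ogd(steps=300, eta=0.5, sigma=0.01, seed=0):
    """Inexact online gradient descent: one noisy gradient step per
    round, tracking the moving target b_t = sin(0.05*t)."""
    rng = random.Random(seed)
    x = 0.0
    errs = []
    for t in range(steps):
        b = math.sin(0.05 * t)
        grad = (x - b) + rng.gauss(0.0, sigma)  # inexact (noisy) gradient
        x -= eta * grad
        errs.append(abs(x - b))
    return errs

errs = inexact_ogd()
print(sum(errs[-100:]) / 100)  # average steady-state tracking error
```

The steady-state error is governed by the target's per-round drift and the cumulative gradient error, mirroring the dynamic-regret decomposition studied in the paper.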
Time-Varying Convex Optimization via Time-Varying Averaged Operators
Devising efficient algorithms that track the optimizers of continuously
varying convex optimization problems is key in many applications. A possible
strategy is to sample the time-varying problem at constant rate and solve the
resulting time-invariant problem. This can be too computationally burdensome in
many scenarios. An alternative strategy is to set up an iterative algorithm
that generates a sequence of approximate optimizers, which are refined every
time a new sampled time-invariant problem is available by one iteration of the
algorithm. These types of algorithms are called running. A major limitation of
current running algorithms is their key assumption of strong convexity and
strong smoothness of the time-varying convex function. In addition, constraints
are only handled in simple cases. This limits the current capability for
running algorithms to tackle relevant problems, such as L1-regularized
optimization programs. In this paper, these assumptions are lifted by
leveraging averaged operator theory and a fairly comprehensive framework for
time-varying convex optimization is presented. In doing so, new results
characterizing the convergence of running versions of a number of widely used
algorithms are derived. Comment: 30 pages, 2 figures -- version 3 adds three
new sections with additional results and background material.
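The running strategy built from an averaged operator can be sketched with the forward-backward (proximal gradient) map on a hypothetical time-varying L1-regularized problem, applying one iteration per new sample:

```python
import math

def soft(z, tau):
    """Soft-thresholding: the proximal operator of tau*|.|."""
    return math.copysign(max(abs(z) - tau, 0.0), z)

def running_fb(h=0.05, steps=200, gamma=0.8, lam=0.2):
    """Running forward-backward iteration for the time-varying problem
    min_x 0.5*(x - a_t)**2 + lam*|x| with drifting signal a_t = sin(t)."""
    x, t = 0.0, 0.0
    errs = []
    for _ in range(steps):
        t += h
        a = math.sin(t)
        x = soft(x - gamma * (x - a), gamma * lam)  # one averaged-operator step
        errs.append(abs(x - soft(a, lam)))          # distance to current optimizer
    return errs

errs = running_fb()
print(max(errs[-50:]))  # asymptotic tracking error
```

Because the forward-backward map handles the nonsmooth |x| term through its prox, no strong smoothness of the full objective is needed, which is the point of the averaged-operator viewpoint.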
A Stochastic Quasi-Newton Method for Large-Scale Optimization
The question of how to incorporate curvature information in stochastic
approximation methods is challenging. The direct application of classical
quasi-Newton updating techniques for deterministic optimization leads to noisy
curvature estimates that have harmful effects on the robustness of the
iteration. In this paper, we propose a stochastic quasi-Newton method that is
efficient, robust and scalable. It employs the classical BFGS update formula in
its limited memory form, and is based on the observation that it is beneficial
to collect curvature information pointwise, and at regular intervals, through
(sub-sampled) Hessian-vector products. This technique differs from the
classical approach that would compute differences of gradients, and where
controlling the quality of the curvature estimates can be difficult. We present
numerical results on problems arising in machine learning that suggest that the
proposed method shows much promise.
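The core idea, collecting curvature through sub-sampled Hessian-vector products at regular intervals rather than differencing noisy stochastic gradients, can be sketched as follows. For brevity the collected pair (s, y) only sets a scalar metric (the memory-zero limit of L-BFGS, clamped for stability) instead of driving the full limited-memory two-loop recursion, and the least-squares data are synthetic:

```python
import numpy as np

def sqn_sketch(n=20, n_data=200, steps=400, L=10, lr=0.05, seed=0):
    """Stochastic quasi-Newton sketch on 0.5*||D x - b||^2: every L steps
    a curvature pair (s, y) is formed from averaged iterates and a
    sub-sampled Hessian-vector product y = (D_j' D_j / m) s."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n_data, n))
    x_true = rng.standard_normal(n)
    b = D @ x_true
    x = np.zeros(n)
    x_avg_old, x_avg = x.copy(), np.zeros(n)
    h0 = 1.0
    for k in range(1, steps + 1):
        i = rng.integers(0, n_data, size=10)           # minibatch
        g = D[i].T @ (D[i] @ x - b[i]) / len(i)        # stochastic gradient
        x = x - lr * h0 * g
        x_avg += x / L
        if k % L == 0:                                 # curvature collection
            s = x_avg - x_avg_old
            j = rng.integers(0, n_data, size=50)       # Hessian subsample
            y = D[j].T @ (D[j] @ s) / len(j)           # sub-sampled HVP
            if s @ y > 1e-12:
                # scalar quasi-Newton metric, clamped for this sketch
                h0 = min(max((s @ y) / (y @ y), 0.5), 2.0)
            x_avg_old, x_avg = x_avg.copy(), np.zeros(n)
    return np.linalg.norm(x - x_true)

err = sqn_sketch()
print(err)
```

Averaging iterates before differencing and sampling the Hessian action directly are what keep the curvature estimate stable despite the gradient noise.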
Time-Varying Optimization: Algorithms and Engineering Applications
This is the write-up of the talk I gave at the 23rd International Symposium
on Mathematical Programming (ISMP) in Bordeaux, France, July 6th, 2018. The
talk was a general overview of the state of the art of time-varying, mainly
convex, optimization, with special emphasis on discrete-time algorithms and
applications in energy and transportation. This write-up is mathematically
correct, while its style is somewhat less formal than a standard paper. Comment: 10 pages; v2 corrects a typo in an assumption.
Time-Varying Convex Optimization: Time-Structured Algorithms and Applications
Optimization underpins many of the challenges that science and technology
face on a daily basis. Recent years have witnessed a major shift from
traditional optimization paradigms grounded on batch algorithms for
medium-scale problems to challenging dynamic, time-varying, and even huge-size
settings. This is driven by technological transformations that converted
infrastructural and social platforms into complex and dynamic networked systems
with pervasive sensing and computing capabilities. The present paper
reviews a broad class of state-of-the-art algorithms for time-varying
optimization, with an eye to both algorithmic development and performance
analysis. It offers a comprehensive overview of available tools and methods,
and unveils open challenges in application domains of broad interest. The
real-world examples presented include smart power systems, robotics, machine
learning, and data analytics, highlighting domain-specific issues and
solutions. The ultimate goal is to exemplify the wide engineering relevance of
analytical tools and pertinent theoretical foundations. Comment: 14 pages, 6 figures; to appear in the Proceedings of the IEEE.
Distributed Constrained Online Learning
In this paper, we consider groups of agents in a network that select actions
in order to satisfy a set of constraints that vary arbitrarily over time and
minimize a time-varying function of which they have only local observations.
The selection of actions, also called a strategy, is causal and decentralized,
i.e., the dynamical system that determines the actions of a given agent depends
only on the constraints at the current time and on its own actions and those of
its neighbors. To determine such a strategy, we propose a decentralized saddle
point algorithm and show that the corresponding global fit and regret are
bounded by functions of the order of O(√T), where T is the time horizon. Specifically, we define the
global fit of a strategy as a vector that integrates over time the global
constraint violations as seen by a given node. The fit is a performance loss
associated with online operation, as opposed to offline clairvoyant operation,
which can always select an action, if one exists, that satisfies the
constraints at all times. If this fit grows sublinearly with the time horizon, it suggests
that the strategy approaches the feasible set of actions. Likewise, we define
the regret of a strategy as the difference between its accumulated cost and
that of the best fixed action that one could select knowing beforehand the time
evolution of the objective function. Numerical examples support the theoretical
conclusions.
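A single-agent, hypothetical instance of the saddle-point update (primal descent on the online Lagrangian, dual ascent on the constraint) illustrates how the fit accumulates constraint violation:

```python
import math

def online_saddle_point(T=2000, eta=0.1):
    """Online saddle-point (Arrow-Hurwicz style) iteration for
    min_x f_t(x) s.t. g(x) <= 0, with f_t(x) = 0.5*(x - a_t)**2,
    a_t = sin(0.002*t), and g(x) = x - 0.5."""
    x, lam, fit = 0.0, 0.0, 0.0
    for t in range(T):
        a = math.sin(0.002 * t)
        grad_lagr = (x - a) + lam             # d/dx [f_t(x) + lam * g(x)]
        x -= eta * grad_lagr                  # primal descent
        lam = max(0.0, lam + eta * (x - 0.5)) # dual ascent, projected to lam >= 0
        fit += max(0.0, x - 0.5)              # accumulated constraint violation
    return x, lam, fit

x, lam, fit = online_saddle_point()
print(fit / 2000)  # time-averaged violation; sublinear fit makes this small
```

The dual variable rises only while the constraint is violated, pushing the primal iterate back toward the feasible set, which is why the fit grows sublinearly.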
Iterated Extended Kalman Smoother-based Variable Splitting for L1-Regularized State Estimation
In this paper, we propose a new framework for solving state estimation
problems with an additional sparsity-promoting L1-regularizer term. We first
formulate such problems as minimization of the sum of linear or nonlinear
quadratic error terms and an extra regularizer, and then present novel
algorithms which solve the linear and nonlinear cases. The methods are based on
a combination of the iterated extended Kalman smoother and variable splitting
techniques such as alternating direction method of multipliers (ADMM). We
present a general algorithmic framework for variable splitting methods, where
the iterative steps involving minimization of the nonlinear quadratic terms can
be computed efficiently by iterated smoothing. Due to the use of state
estimation algorithms, the proposed framework has a low per-iteration time
complexity, which makes it suitable for solving a large-scale or
high-dimensional state estimation problem. We also provide convergence results
for the proposed algorithms. The experiments show the promising performance and
speed-ups provided by the methods. Comment: 16 pages, 9 figures.
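The variable-splitting component can be sketched with ADMM on a small dense L1-regularized least-squares problem; in the paper the quadratic subproblem is solved by the iterated extended Kalman smoother, whereas this toy solves it directly with a cached matrix inverse:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """ADMM splitting for min_x 0.5*||A x - b||^2 + lam*||x||_1,
    alternating a quadratic solve, soft-thresholding, and a dual update."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached solve (small n)
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))             # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of lam*|.|_1
        u = u + x - z                             # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true                                    # noiseless sparse signal
x_hat = admm_lasso(A, b)
print(np.linalg.norm(x_hat - x_true))
```

Replacing the direct solve by a smoother sweep is what gives the paper's framework its low per-iteration cost on high-dimensional state estimation problems.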