Tracking Moving Agents via Inexact Online Gradient Descent Algorithm
Multi-agent systems are being increasingly deployed in challenging
environments for performing complex tasks such as multi-target tracking,
search-and-rescue, and intrusion detection. Notwithstanding the computational
limitations of individual robots, such systems rely on collaboration to sense
and react to the environment. This paper formulates the generic target tracking
problem as a time-varying optimization problem and puts forth an inexact online
gradient descent method for solving it sequentially. The performance of the
proposed algorithm is studied by characterizing its dynamic regret, a notion
common to the online learning literature. Building upon the existing results,
we provide improved regret rates that not only allow non-strongly convex costs
but also explicate the role of the cumulative gradient error. Two distinct
classes of problems are considered: one in which the objective function adheres
to a quadratic growth condition, and another where the objective function is
convex but the variable belongs to a compact domain. For both cases, results
are developed while allowing the error to be either adversarial or to arise from
a white noise process. Further, the generality of the proposed framework is
demonstrated by developing online variants of existing stochastic gradient
algorithms and interpreting them as special cases of the proposed inexact
gradient method. The efficacy of the proposed inexact gradient framework is
established on a multi-agent multi-target tracking problem, while its
flexibility is exemplified by generating online movie recommendations for the
Movielens M dataset.
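
As a rough illustration of the setting, here is a minimal Python sketch of inexact online gradient descent tracking a moving target, with the dynamic regret accumulated along the way; the circular target trajectory, noise level, and step size are illustrative assumptions, not details from the paper.

    import numpy as np

    def inexact_ogd(T=200, step=0.5, noise=0.05, seed=0):
        # Track the minimizer of f_t(x) = 0.5*||x - theta_t||^2 from noisy gradients.
        rng = np.random.default_rng(seed)
        x = np.zeros(2)
        dynamic_regret = 0.0
        for t in range(T):
            theta_t = np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # moving target (assumed path)
            grad = (x - theta_t) + noise * rng.standard_normal(2)     # inexact gradient
            x = x - step * grad                                       # one descent step per time instant
            dynamic_regret += 0.5 * np.sum((x - theta_t) ** 2)        # f_t(x_t) - f_t(theta_t), and f_t(theta_t) = 0
        return x, dynamic_regret

With an exact gradient (noise = 0), the regret growth is governed by how far the target moves per step; the noise term contributes the cumulative gradient error whose role the paper's bounds make explicit.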
Time-Varying Optimization: Algorithms and Engineering Applications
This is the write-up of the talk I gave at the 23rd International Symposium
on Mathematical Programming (ISMP) in Bordeaux, France, July 6th, 2018. The
talk was a general overview of the state of the art of time-varying, mainly
convex, optimization, with special emphasis on discrete-time algorithms and
applications in energy and transportation. This write-up is mathematically
correct, while its style is somewhat less formal than that of a standard paper.
Time-Varying Convex Optimization via Time-Varying Averaged Operators
Devising efficient algorithms that track the optimizers of continuously
varying convex optimization problems is key in many applications. A possible
strategy is to sample the time-varying problem at a constant rate and solve the
resulting time-invariant problem. This can be too computationally burdensome in
many scenarios. An alternative strategy is to set up an iterative algorithm
that generates a sequence of approximate optimizers, each refined by one
iteration of the algorithm whenever a newly sampled time-invariant problem
becomes available. These algorithms are called running algorithms. A major limitation of
current running algorithms is their key assumption of strong convexity and
strong smoothness of the time-varying convex function. In addition, constraints
are only handled in simple cases. This limits the ability of current running
algorithms to tackle relevant problems, such as ℓ1-regularized
optimization programs. In this paper, these assumptions are lifted by
leveraging averaged operator theory and a fairly comprehensive framework for
time-varying convex optimization is presented. In doing so, new results
characterizing the convergence of running versions of a number of widely used
algorithms are derived.
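
To make the running idea concrete, below is a minimal Python sketch of running forward-backward splitting, one iteration of an averaged operator per newly sampled problem, applied to a stream of ℓ1-regularized least-squares problems; the stream interface, step size, and regularization weight are illustrative assumptions rather than the paper's construction.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def running_forward_backward(stream, lam=0.1, step=0.1, dim=5):
        # One forward-backward step per sampled problem (A_t, b_t), where
        # f_t(x) = 0.5*||A_t x - b_t||^2 + lam*||x||_1. The composed map is an
        # averaged operator when step < 2/L, with L the largest eigenvalue of
        # A_t^T A_t; no strong convexity of f_t is required.
        x = np.zeros(dim)
        for A_t, b_t in stream:
            grad = A_t.T @ (A_t @ x - b_t)                    # forward (gradient) step
            x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
            yield x

A stream can be as simple as a generator yielding a fixed matrix A with a slowly drifting measurement vector b_t, one pair per sampling instant.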
Online Learning with Inexact Proximal Online Gradient Descent Algorithms
We consider non-differentiable dynamic optimization problems such as those
arising in robotics and subspace tracking. Given the computational constraints
and the time-varying nature of the problem, a low-complexity algorithm is
desirable, while the accuracy of the solution may only increase slowly over
time. We put forth the proximal online gradient descent (OGD) algorithm for
tracking the optimum of a composite objective function comprising a
differentiable loss function and a non-differentiable regularizer. An online
learning framework is considered and the gradient of the loss function is
allowed to be erroneous. Both the gradient error and the dynamics of the
function optimum, or target, are adversarial, and the performance of the
inexact proximal OGD is characterized in terms of its dynamic regret, expressed
in terms of the cumulative error and path length of the target. The proposed
inexact proximal OGD is generalized for application to large-scale problems
where the loss function has a finite-sum structure. In such cases, evaluating
the full gradient may not be viable, and a variance-reduced version is
proposed that allows the component functions to be sub-sampled. The efficacy of
the proposed algorithms is tested on the problem of formation control in
robotics and on the dynamic foreground-background separation problem in video.
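
As a rough sketch of the finite-sum variant, the following Python snippet runs proximal OGD with a subsampled, hence inexact, gradient; plain mini-batch subsampling stands in for the paper's variance-reduced scheme, whose exact construction is not reproduced here, and all problem data and parameters are illustrative.

    import numpy as np

    def inexact_proximal_ogd(stream, lam=0.05, step=0.2, batch=8, dim=10, seed=0):
        # Proximal OGD on f_t(x) = (1/N) * sum_i 0.5*(a_i . x - b_i)^2 + lam*||x||_1,
        # using only a random mini-batch of the N component functions per step.
        rng = np.random.default_rng(seed)
        x = np.zeros(dim)
        for A_t, b_t in stream:
            idx = rng.choice(len(b_t), size=batch, replace=False)     # subsample components
            g = A_t[idx].T @ (A_t[idx] @ x - b_t[idx]) / batch        # inexact gradient of the smooth part
            z = x - step * g                                          # online gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of the l1 regularizer
            yield x

The gap between the mini-batch gradient and the full gradient plays the role of the cumulative error term that enters the dynamic-regret bound, alongside the path length of the moving target.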