Efficient Online Convex Optimization with Adaptively Minimax Optimal Dynamic Regret
We introduce an online convex optimization algorithm that uses projected
sub-gradient descent with ideal adaptive learning rates, where each step is
computed efficiently in a sequential manner. For the first time in the
literature, this algorithm provides an adaptively minimax optimal dynamic
regret guarantee for a sequence of convex functions without any restrictions,
such as strong convexity, smoothness, or even Lipschitz continuity, against a
comparator decision sequence with bounded total successive changes. We show
optimality by deriving an adaptive worst-case dynamic regret lower bound,
which consists of the actual sub-gradient norms and matches our guarantees.
We discuss the advantages of our algorithm over adaptive projection with
sub-gradient self outer products, and we also derive an extension that learns
independently in each decision coordinate. Additionally, we demonstrate how to
best preserve our guarantees when the bound on the total successive changes of
the dynamic comparator sequence grows over time, in a truly online manner.
Comment: 10 pages, 1 figure, preprint, [v0] 201
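As a rough illustration of the algorithmic template above, here is a minimal Python sketch of projected online sub-gradient descent with a learning rate that adapts to the observed sub-gradient norms. The schedule and all names below are our own assumptions for illustration; they are not the paper's exact adaptively minimax optimal rates.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_projected_ogd(subgradient_oracle, dim, horizon, D=1.0):
    """Projected online sub-gradient descent with a step size scaled by
    the accumulated sub-gradient norms (a generic adaptive schedule,
    not the paper's exact one)."""
    x = np.zeros(dim)
    grad_norm_sq = 0.0
    iterates = []
    for t in range(horizon):
        iterates.append(x.copy())
        g = subgradient_oracle(t, x)             # sub-gradient of f_t at x
        grad_norm_sq += float(np.dot(g, g))      # accumulate observed norms
        eta = D / np.sqrt(grad_norm_sq + 1e-12)  # adaptive learning rate
        x = project_ball(x - eta * g, radius=D)  # descend, then project
    return iterates

# Hypothetical usage: track a drifting target under the absolute loss.
target = lambda t: np.array([np.sin(0.1 * t)])
oracle = lambda t, x: np.sign(x - target(t))     # sub-gradient of |x - target|
iterates = adaptive_projected_ogd(oracle, dim=1, horizon=100)
```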
Dynamic and Distributed Online Convex Optimization for Demand Response of Commercial Buildings
We extend the regret analysis of the online distributed weighted dual
averaging (DWDA) algorithm [1] to the dynamic setting and provide the tightest
dynamic regret bound known to date with respect to the time horizon for a
distributed online convex optimization (OCO) algorithm. Our bound is linear in
the cumulative difference between consecutive optima and does not depend
explicitly on the time horizon. We use dynamic-online DWDA (D-ODWDA) and
formulate a performance-guaranteed distributed online demand response approach
for heating, ventilation, and air-conditioning (HVAC) systems of commercial
buildings. We demonstrate the performance of our approach for fast-timescale
demand response in numerical simulations and obtain demand response decisions
that closely reproduce the centralized optimal ones.
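For concreteness, the following is a minimal Python sketch of a generic online distributed weighted dual averaging round, assuming a doubly stochastic weight matrix W and a Euclidean proximal map; the interfaces and the constant step size eta are illustrative assumptions, not the paper's D-ODWDA specification.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def distributed_dual_averaging(W, grad_oracles, dim, horizon, eta=0.1, radius=1.0):
    """Generic online distributed dual averaging: each agent averages its
    neighbors' dual variables through W, adds its newest local
    sub-gradient, and maps back to the primal set by a proximal step
    (here, a scaled Euclidean projection)."""
    n = W.shape[0]
    Z = np.zeros((n, dim))             # dual variables, one row per agent
    X = np.zeros((n, dim))             # primal decisions
    history = []
    for t in range(horizon):
        G = np.array([grad_oracles[i](t, X[i]) for i in range(n)])
        Z = W @ Z + G                  # weighted consensus on duals + new gradients
        X = np.array([project_ball(-eta * z, radius) for z in Z])
        history.append(X.copy())
    return history
```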
Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is
proposed, in which the agents update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. In order to benchmark the performance of the proposed
distributed estimator against that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented that illustrate and
verify the analytical findings.
Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016,
Accepted: September 2016. To appear in IEEE Transactions on Signal and
Information Processing over Networks: Special Issue on Inference and Learning
over Networks.
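A minimal Python sketch of the consensus + innovations structure described above appears below; the gain schedules, interfaces (f, f_jac, observations), and initialization are illustrative assumptions rather than the paper's exact recursion or its constrained variant.

```python
import numpy as np

def consensus_innovations(adjacency, f, f_jac, observations, theta0, horizon):
    """Consensus + innovations recursive nonlinear least squares (sketch):
    each agent pulls toward its neighbors' estimates (consensus) and
    corrects with its newest local residual (innovation). observations(i, t)
    returns agent i's measurement vector at time t, f(i, theta) its local
    observation function, and f_jac(i, theta) the (m x p) Jacobian."""
    n = adjacency.shape[0]
    theta = np.tile(np.asarray(theta0, dtype=float), (n, 1))
    for t in range(horizon):
        alpha = 1.0 / (t + 2)            # innovation gain (decays faster)
        beta = 1.0 / (t + 2) ** 0.6      # consensus gain (decays slower)
        new_theta = theta.copy()
        for i in range(n):
            neighbors = np.nonzero(adjacency[i])[0]
            consensus = sum(theta[i] - theta[j] for j in neighbors)
            residual = observations(i, t) - f(i, theta[i])
            innovation = f_jac(i, theta[i]).T @ residual
            new_theta[i] = theta[i] - beta * consensus + alpha * innovation
        theta = new_theta
    return theta                          # one estimate row per agent
```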
Distributed Online Convex Optimization with an Aggregative Variable
This paper investigates distributed online convex optimization in the
presence of an aggregative variable without any global/central coordinators
over a multi-agent network, where each individual agent is only able to access
partial information of time-varying global loss functions, thus requiring local
information exchanges between neighboring agents. Motivated by many real-world
applications, each local loss function depends not only on the agent's own
decision variable but also on an aggregative variable, such as the average of
all decision variables. To handle this problem, an Online
Distributed Gradient Tracking algorithm (O-DGT) is proposed with exact gradient
information and it is shown that the dynamic regret is upper bounded by three
terms: a sublinear term, a path variation term, and a gradient variation term.
Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy
gradients, showing that the expected dynamic regret has the same upper bound as
the exact gradient case. To the best of our knowledge, this paper is the first to
study online convex optimization in the presence of an aggregative variable,
which enjoys new characteristics in comparison with the conventional scenario
without the aggregative variable. Finally, a numerical experiment is provided
to corroborate the obtained theoretical results.
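To make the gradient-tracking structure concrete, here is a minimal Python sketch under the assumption that each local loss has the form f_i(x_i, sigma) with sigma the network-average decision; grad1 and grad2 denote the partial gradients with respect to the two arguments. All interfaces, the constant step size, and the initializations are our assumptions for illustration, not the paper's exact O-DGT updates.

```python
import numpy as np

def online_dgt_aggregative(W, grad1, grad2, dim, horizon, step=0.05):
    """Sketch of online distributed gradient tracking with an aggregative
    variable: a dynamic average-consensus step tracks the mean decision,
    a second tracker estimates the network-average coupling gradient, and
    each agent descends along its own partial gradient plus that tracker."""
    n = W.shape[0]
    X = np.zeros((n, dim))                  # local decisions
    S = X.copy()                            # trackers of the average decision
    prev = np.array([grad2(i, 0, X[i], S[i]) for i in range(n)])
    Y = prev.copy()                         # trackers of the mean coupling gradient
    trajectory = []
    for t in range(horizon):
        G = np.array([grad1(i, t, X[i], S[i]) + Y[i] for i in range(n)])
        X_new = X - step * G                # local descent step
        S = W @ S + (X_new - X)             # track the aggregative variable
        new = np.array([grad2(i, t, X_new[i], S[i]) for i in range(n)])
        Y = W @ Y + (new - prev)            # gradient tracking update
        prev, X = new, X_new
        trajectory.append(X.copy())
    return trajectory
```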