Distributed Online Convex Optimization with an Aggregative Variable
This paper investigates distributed online convex optimization in the
presence of an aggregative variable over a multi-agent network without any
global/central coordinator, where each agent can access only
partial information of time-varying global loss functions, thus requiring local
information exchanges between neighboring agents. Motivated by many
practical applications, the local loss functions considered here depend not
only on each agent's own decision variable, but also on an aggregative
variable, such as the
average of all decision variables. To handle this problem, an Online
Distributed Gradient Tracking algorithm (O-DGT) is proposed with exact gradient
information, and it is shown that the dynamic regret is upper bounded by three
terms: a sublinear term, a path variation term, and a gradient variation term.
Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy
gradients, showing that the expected dynamic regret has the same upper bound as
the exact gradient case. To the best of our knowledge, this paper is the first to
study online convex optimization in the presence of an aggregative variable,
which enjoys new characteristics in comparison with the conventional scenario
without the aggregative variable. Finally, a numerical experiment is provided
to corroborate the theoretical results.
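Though the paper's exact O-DGT recursion is not reproduced here, the gradient-tracking idea it builds on can be sketched in a minimal form. The network, local losses, and step size below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

np.random.seed(0)
n = 4                      # number of agents
W = np.full((n, n), 1/n)   # doubly stochastic mixing matrix (complete graph)
alpha = 0.1                # step size

x = np.random.randn(n)     # local decisions (scalars for simplicity)
s = x.copy()               # local estimates of the aggregate (average) variable

target = np.ones(n)        # time-varying in general; fixed here for one round

def grad(i, xi, sigma):
    # illustrative local loss f_i(x_i, sigma) = (x_i - t_i)^2 + (sigma - t_i)^2,
    # differentiated through both the local decision and the aggregate estimate
    return 2*(xi - target[i]) + 2*(sigma - target[i])/n

g = np.array([grad(i, x[i], s[i]) for i in range(n)])

# one round: consensus step plus local gradient descent on the decisions,
# then dynamic average tracking of the mean decision
x_new = W @ x - alpha * g
s_new = W @ s + (x_new - x)
```

A useful invariant of this tracking step: because `s` is initialized to `x`, the average of the aggregate estimates `s_new` always equals the average of the current decisions `x_new`.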
Distributed Online Convex Optimization with Adversarial Constraints: Reduced Cumulative Constraint Violation Bounds under Slater's Condition
This paper considers distributed online convex optimization with adversarial
constraints. In this setting, a network of agents makes decisions at each
round, and then only a portion of the loss function and a coordinate block of
the constraint function are privately revealed to each agent. The loss and
constraint functions are convex and can vary arbitrarily across rounds. The
agents collaborate to minimize network regret and cumulative constraint
violation. A novel distributed online algorithm is proposed; it achieves
sublinear network regret and network cumulative constraint violation bounds,
with the trade-off between the two controlled by a user-defined parameter.
When Slater's condition holds (i.e., there is a point that strictly
satisfies the inequality constraints), the network cumulative constraint
violation bound is further reduced. Moreover, if the loss functions are
strongly convex, the network regret bound is further reduced, as is the
network cumulative constraint violation bound, both without
and with Slater's condition. To the best of our knowledge, this
paper is the first to achieve reduced (network) cumulative constraint violation
bounds for (distributed) online convex optimization with adversarial
constraints under Slater's condition. Finally, the theoretical results are
verified through numerical simulations.
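The primal-dual mechanism behind such regret/violation guarantees can be illustrated with a single-agent sketch: the primal variable descends a Lagrangian-like gradient, while a nonnegative dual variable (a virtual queue) accumulates constraint violation. The loss sequence, constraint, and step size are assumptions for illustration, not the paper's distributed algorithm:

```python
import numpy as np

T = 200
eta = 0.1        # primal step size
x = 0.0          # decision variable
q = 0.0          # virtual queue / dual variable for the constraint g(x) <= 0

losses, violations = [], []
for t in range(T):
    # adversarial-style round: loss f_t(x) = (x - c_t)^2, constraint x - 1 <= 0
    c_t = 2.0 if t % 2 == 0 else 0.0
    grad_f = 2*(x - c_t)
    grad_g = 1.0
    # primal step on f_t(x) + q * g(x)
    x = x - eta * (grad_f + q * grad_g)
    g_val = x - 1.0
    # dual step: the queue grows with violation and is clipped at zero
    q = max(q + g_val, 0.0)
    losses.append((x - c_t)**2)
    violations.append(max(g_val, 0.0))
```

The queue makes the constraint "expensive" exactly when it has been violated, which is what drives the cumulative violation below the trivial linear growth.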
Projection-Free Online Convex Optimization with Stochastic Constraints
This paper develops projection-free algorithms for online convex optimization
with stochastic constraints. We design an online primal-dual projection-free
framework that can incorporate any projection-free algorithm developed for
online convex optimization without long-term constraints. With this general
template, we deduce sublinear regret and constraint violation bounds for
various settings. Moreover, for the case where the loss and constraint
functions are smooth, we develop a primal-dual conditional gradient method
with sharper regret and constraint violation bounds. Furthermore, for
the setting where the loss and constraint functions are stochastic and strong
duality holds for the associated offline stochastic optimization problem, we
prove that the constraint violation can be reduced to have the same asymptotic
growth as the regret.
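The projection-free primitive such methods build on is the conditional gradient (Frank-Wolfe) step: instead of projecting onto the feasible set, each round calls a linear minimization oracle over it and moves by a convex combination, so iterates stay feasible. The feasible set (an l1 ball), loss, and step-size schedule below are illustrative assumptions:

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    # linear minimization oracle over the l1 ball: the minimizer of <grad, v>
    # is a signed vertex along the coordinate with the largest |gradient|
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

target = np.array([0.5, -0.2, 0.1])   # lies inside the unit l1 ball
x = np.zeros(3)
for t in range(1, 51):
    grad = 2 * (x - target)           # gradient of f(x) = ||x - target||^2
    v = lmo_l1(grad)                  # oracle call replaces the projection
    gamma = 2 / (t + 2)               # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * v   # convex combination keeps x feasible
```

Each iteration costs one linear optimization over the set, which is the whole appeal in settings where projections are expensive (e.g., nuclear-norm balls or polytopes).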