Hessian barrier algorithms for linearly constrained optimization problems
In this paper, we propose an interior-point method for linearly constrained
optimization problems (possibly nonconvex). The method - which we call the
Hessian barrier algorithm (HBA) - combines a forward Euler discretization of
Hessian Riemannian gradient flows with an Armijo backtracking step-size policy.
In this way, HBA can be seen as an alternative to mirror descent (MD), and
contains as special cases the affine scaling algorithm, regularized Newton
processes, and several other iterative solution methods. Our main result is
that, modulo a non-degeneracy condition, the algorithm converges to the
problem's set of critical points; hence, in the convex case, the algorithm
converges globally to the problem's minimum set. In the case of linearly
constrained quadratic programs (not necessarily convex), we also show that the
method's convergence rate is $\mathcal{O}(1/k^{\rho})$ for some $\rho \in (0,1]$
that depends only on the choice of kernel function (i.e., not on the problem's
primitives). These theoretical results are validated by numerical experiments
in standard non-convex test functions and large-scale traffic assignment
problems. Comment: 27 pages, 6 figures
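For intuition about the update rule, here is a minimal Python sketch (not the paper's implementation) of the affine-scaling special case of HBA on the positive orthant: with the log-barrier kernel, the Hessian metric is $H(x) = \mathrm{diag}(1/x_i^2)$, so the forward Euler step becomes $x^{+} = x - \alpha\, x^2 \odot \nabla f(x)$, safeguarded by Armijo backtracking. The test quadratic and all constants are illustrative assumptions.

```python
import numpy as np

def hba_affine_scaling(f, grad_f, x0, alpha0=1.0, sigma=1e-4, shrink=0.5, iters=200):
    """Sketch of a Hessian barrier step on the positive orthant.

    With the log-barrier kernel h(x) = -sum(log x_i), the Hessian metric is
    H(x) = diag(1/x_i**2), so the forward Euler step of the Hessian Riemannian
    gradient flow reads  x+ = x - alpha * x**2 * grad_f(x)  (affine scaling).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        v = (x ** 2) * g                      # Riemannian gradient direction
        alpha = alpha0
        # Armijo backtracking: shrink alpha until the step stays feasible
        # (x+ > 0) and achieves sufficient decrease.
        while True:
            x_new = x - alpha * v
            if np.all(x_new > 0) and f(x_new) <= f(x) - sigma * alpha * (g @ v):
                break
            alpha *= shrink
            if alpha < 1e-12:                 # line search stalled; keep x
                x_new = x
                break
        x = x_new
    return x

# Toy convex quadratic with an interior minimizer (illustrative only):
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad_f = lambda x: Q @ x - b
print(hba_affine_scaling(f, grad_f, x0=[0.5, 0.5]))
```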
On the Resilience of Traffic Networks under Non-Equilibrium Learning
We investigate the resilience of learning-based \textit{Intelligent
Navigation Systems} (INS) to informational flow attacks, which exploit the
vulnerabilities of IT infrastructure and manipulate traffic condition data. To
this end, we propose the notion of \textit{Wardrop Non-Equilibrium Solution}
(WANES), which captures the finite-time behavior of dynamic traffic flow
adaptation under a learning process. The proposed non-equilibrium solution,
characterized by target sets and measurement functions, evaluates the outcome
of learning under a bounded number of rounds of interactions, and it pertains
to and generalizes the concept of approximate equilibrium. Leveraging
finite-time analysis methods, we discover that under the mirror descent (MD)
online-learning framework, the traffic flow trajectory can recover to the
Wardrop non-equilibrium solution after a bounded INS attack. The
resulting performance loss is of order $\tilde{\mathcal{O}}(T^{\beta})$
(with $-\tfrac{1}{2} \leq \beta < 0$), with a constant dependent on the size of the
traffic network, indicating the resilience of the MD-based INS. We corroborate
the results using an evacuation case study on the Sioux Falls transportation
network. Comment: 8 pages, 3 figures, with a technical appendix
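To make the learning model concrete, the following is a minimal sketch, under assumptions not taken from the paper (a three-route linear congestion network and an additive cost-inflation attack), of entropic mirror descent whose feedback is manipulated for a bounded window of rounds and then left to recover.

```python
import numpy as np

def entropic_md(costs, x0, eta=0.1, T=300, attack_rounds=range(50, 80), attack=None):
    """Sketch of mirror descent route choice under a bounded feedback attack.

    Entropic mirror descent on the simplex (exponential weights):
        x_{t+1} proportional to x_t * exp(-eta * cost_t(x_t)).
    During `attack_rounds` the learner sees manipulated costs; afterwards the
    trajectory drifts back toward the (non-)equilibrium flow split.
    """
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for t in range(T):
        c = costs(x)
        if attack is not None and t in attack_rounds:
            c = attack(c)                     # bounded manipulation of feedback
        x = x * np.exp(-eta * c)
        x /= x.sum()                          # mirror step back onto the simplex
        traj.append(x.copy())
    return np.array(traj)

# Toy 3-route congestion game: cost grows linearly with the flow on a route.
free_flow = np.array([1.0, 1.2, 1.5])
costs = lambda x: free_flow + 2.0 * x
attack = lambda c: c + np.array([3.0, 0.0, 0.0])   # inflate route 1's reported cost
traj = entropic_md(costs, x0=np.ones(3) / 3, attack=attack)
print("final flow split:", traj[-1].round(3))
```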
Is Stochastic Mirror Descent Vulnerable to Adversarial Delay Attacks? A Traffic Assignment Resilience Study
\textit{Intelligent Navigation Systems} (INS) are exposed to an increasing
number of informational attack vectors, which often intercept the
communication channels between the INS and the transportation network during
data collection. To measure the resilience of INS, we use the
concept of a Wardrop Non-Equilibrium Solution (WANES), which is characterized
by the probabilistic outcome of learning within a bounded number of
interactions. Using concentration arguments, we discover that any bounded
feedback-delaying attack only degrades the system's performance up to order
$\tilde{\mathcal{O}}(\sqrt{(d+1)T})$, with $d$ the maximum feedback delay,
along the traffic flow trajectory within the Delayed Mirror Descent (DMD)
online-learning framework. This degradation holds under only mild assumptions.
Our result implies that learning-based INS infrastructures can reach a Wardrop
non-equilibrium solution even after a period of disruption in the information
structure. These findings provide valuable insights for designing defense
mechanisms against possible jamming attacks across different layers of the
transportation ecosystem. Comment: Preprint under review
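As a companion sketch, here is one way to simulate bounded feedback delay in mirror descent; the queue-based delay model, the network, and all parameters are invented for illustration and are not the paper's construction.

```python
import numpy as np
from collections import deque

def delayed_md(costs, x0, eta=0.05, T=400, delay=5):
    """Sketch of delayed mirror descent (DMD) on the simplex.

    The cost feedback generated at round t is only applied at round t + delay,
    mimicking an adversary that postpones the data the navigation system sees.
    """
    x = np.asarray(x0, dtype=float)
    pending = deque()                          # queue of (arrival_round, cost)
    for t in range(T):
        pending.append((t + delay, costs(x)))
        # Apply every cost vector whose delayed arrival time has come due.
        while pending and pending[0][0] <= t:
            _, c = pending.popleft()
            x = x * np.exp(-eta * c)
            x /= x.sum()                       # entropic mirror step
    return x

# Toy 3-route congestion costs, as in the earlier sketch.
free_flow = np.array([1.0, 1.2, 1.5])
costs = lambda x: free_flow + 2.0 * x
print("flow split under delay:", delayed_md(costs, np.ones(3) / 3).round(3))
```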
On the convergence of gradient-like flows with noisy gradient input
In view of solving convex optimization problems with noisy gradient input, we
analyze the asymptotic behavior of gradient-like flows under stochastic
disturbances. Specifically, we focus on the widely studied class of mirror
descent schemes for convex programs with compact feasible regions, and we
examine the dynamics' convergence and concentration properties in the presence
of noise. In the vanishing noise limit, we show that the dynamics converge to
the solution set of the underlying problem (a.s.). Otherwise, when the noise is
persistent, we show that the dynamics are concentrated around interior
solutions in the long run, and they converge to boundary solutions that are
sufficiently "sharp". Finally, we show that a suitably rectified variant of the
method converges irrespective of the magnitude of the noise (or the structure
of the underlying convex program), and we derive an explicit estimate for its
rate of convergence. Comment: 36 pages, 5 figures; revised proof structure,
added numerical case study in Section
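A rough way to visualize these dynamics is to discretize the noisy gradient flow with an Euler-Maruyama scheme. The sketch below assumes an entropic mirror map on the simplex and uses a shrinking step size as a simple stand-in for the paper's rectified variant, so it illustrates the phenomenon rather than the paper's exact construction; the objective and noise level are invented.

```python
import numpy as np

def noisy_md_flow(grad_f, x0, dt=0.01, sigma=0.5, T=5000, rectify=False, rng=None):
    """Euler-Maruyama sketch of an entropic mirror descent flow with noisy
    gradient input:  dy = -(grad_f(x) dt + sigma dW),  x = softmax(y).

    With rectify=True the step size shrinks over time (a crude stand-in for
    the paper's rectification), which averages out persistent noise.
    """
    rng = rng or np.random.default_rng(0)
    y = np.log(np.asarray(x0, dtype=float))     # dual (mirror) variable
    for k in range(T):
        x = np.exp(y - y.max())
        x /= x.sum()                            # primal state on the simplex
        step = dt / np.sqrt(k + 1) if rectify else dt
        noise = sigma * np.sqrt(step) * rng.standard_normal(len(y))
        y -= step * grad_f(x) + noise
    x = np.exp(y - y.max())
    return x / x.sum()

# Toy convex objective on the simplex: f(x) = 0.5 * ||x - p||^2.
p = np.array([0.6, 0.3, 0.1])
grad_f = lambda x: x - p
print(noisy_md_flow(grad_f, np.ones(3) / 3, rectify=True).round(3))
```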
Scaling up Mean Field Games with Online Mirror Descent
We address scaling up equilibrium computation in Mean Field Games (MFGs)
using Online Mirror Descent (OMD). We show that continuous-time OMD provably
converges to a Nash equilibrium under a natural and well-motivated set of
monotonicity assumptions. This theoretical result nicely extends to
multi-population games and to settings involving common noise. A thorough
experimental investigation on various single and multi-population MFGs shows
that OMD outperforms traditional algorithms such as Fictitious Play (FP). We
empirically show that OMD scales up and converges significantly faster than FP
by solving, for the first time to our knowledge, examples of MFGs with
hundreds of billions of states. This study establishes the state of the art
for learning in large-scale multi-agent and multi-population games.
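The OMD update itself is easy to state: accumulate payoffs in a dual score and play its softmax. The sketch below applies it to a toy static crowd-aversion game (a monotone cost, in the spirit of the paper's assumptions); the dynamic MFG setting with Q-functions is richer, and all costs and parameters here are invented.

```python
import numpy as np

def omd_mean_field(base_cost, congestion=2.0, tau=0.5, iters=500):
    """Sketch of online mirror descent for a toy static mean field game.

    Each agent picks one of n states; the cost of a state grows with the mass
    mu of agents occupying it (crowd aversion, a monotone payoff). OMD
    accumulates payoffs in a dual variable y and plays the softmax policy,
    rather than averaging best responses as Fictitious Play does.
    """
    n = len(base_cost)
    y = np.zeros(n)                               # cumulative dual scores
    mu = np.ones(n) / n                           # population distribution
    for _ in range(iters):
        reward = -(base_cost + congestion * mu)   # payoff given current mass
        y += tau * reward                         # dual accumulation (OMD)
        mu = np.exp(y - y.max())
        mu /= mu.sum()                            # softmax policy = new mass
    return mu

base_cost = np.array([1.0, 1.5, 2.0, 2.5])
print("approximate mean-field equilibrium:", omd_mean_field(base_cost).round(3))
```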