Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied,
in the area of Robust Optimization (RO). Our focus is on the computational
attractiveness of RO approaches, as well as the modeling power and broad
applicability of the methodology. In addition to surveying prominent
theoretical results of RO, we also present some recent results linking RO to
adaptable models for multi-stage decision-making problems. Finally, we
highlight applications of RO across a wide spectrum of domains, including
finance, statistics, learning, and various areas of engineering.

Comment: 50 pages
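As a small illustration of the "modeling power" point above: for a linear constraint with box uncertainty, the robust counterpart is again a linear constraint, so robustness is obtained without leaving the LP class. The sketch below is illustrative only; the data (`c`, `a_hat`, `delta`, `b`) are made up and not from the survey:

```python
import numpy as np
from scipy.optimize import linprog

# Nominal LP: min c^T x  s.t.  a^T x <= b, x >= 0, where the constraint row a
# is only known to lie in a box: a in [a_hat - delta, a_hat + delta].
# For x >= 0 the worst case is a = a_hat + delta, so the robust counterpart
# of a^T x <= b is the ordinary linear constraint (a_hat + delta)^T x <= b.
c = np.array([-1.0, -2.0])           # maximize x1 + 2*x2
a_hat = np.array([1.0, 1.0])         # nominal constraint row
delta = np.array([0.2, 0.1])         # box half-widths
b = 4.0

nominal = linprog(c, A_ub=[a_hat], b_ub=[b], bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[a_hat + delta], b_ub=[b], bounds=[(0, None)] * 2)
# The robust optimum trades some nominal value for feasibility under
# every realization of a in the box.
```

The robust solution is never better than the nominal one on the nominal data, but it remains feasible for every constraint row in the uncertainty set.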
A Deterministic Theory for Exact Non-Convex Phase Retrieval
In this paper, we analyze the non-convex framework of Wirtinger Flow (WF) for
phase retrieval and identify a novel sufficient condition for universal exact
recovery through the lens of low rank matrix recovery theory. Via a perspective
in the lifted domain, we show that the convergence of the WF iterates to a true
solution is attained geometrically under a single condition on the lifted
forward model. As a result, a deterministic relationship between the accuracy
of spectral initialization and the validity of the regularity condition is
derived. In particular, we determine that a certain concentration property on
the spectral matrix must hold uniformly with a sufficiently tight constant.
This culminates in a sufficient condition that is equivalent to a restricted
isometry-type property over rank-1, positive semi-definite matrices, and
amounts to a less stringent requirement on the lifted forward model than those
of prominent low-rank-matrix-recovery methods in the literature. We
characterize the performance limits of our framework in terms of the tightness
of the concentration property via novel bounds on the convergence rate and on
the signal-to-noise ratio such that the theoretical guarantees are valid using
the spectral initialization at the proper sample complexity.

Comment: In revision for IEEE Transactions on Signal Processing
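The pipeline analyzed in this abstract, spectral initialization followed by Wirtinger-Flow-style gradient iterations, can be sketched on a real-valued toy instance. The dimensions, step size, and iteration count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                        # signal dimension, number of measurements
x = rng.standard_normal(n)            # ground truth (real-valued toy case)
A = rng.standard_normal((m, n))       # Gaussian forward model
y = (A @ x) ** 2                      # phaseless (squared) measurements

# Spectral initialization: leading eigenvector of the spectral matrix
# Y = (1/m) * sum_i y_i a_i a_i^T, rescaled to the measurement energy.
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())

# Wirtinger-Flow-style gradient iterations on
# f(z) = (1/(4m)) * sum_i ((a_i^T z)^2 - y_i)^2.
step = 0.1 / y.mean()
for _ in range(500):
    Az = A @ z
    z = z - step * (A.T @ ((Az ** 2 - y) * Az)) / m

# Recovery can only hold up to a global sign (phase) ambiguity.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

With a sufficiently accurate initialization, the iterates contract geometrically toward a true solution, which is the regime the paper's regularity condition characterizes.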
On the Sample Complexity and Optimization Landscape for Quadratic Feasibility Problems
We consider the problem of recovering a complex vector $\mathbf{x} \in
\mathbb{C}^n$ from $m$ quadratic measurements $\langle A_i \mathbf{x},
\mathbf{x} \rangle$. This problem, known as quadratic feasibility,
encompasses the well known phase retrieval problem and has applications in a
wide range of important areas including power system state estimation and x-ray
crystallography. In general, not only is the quadratic feasibility problem
NP-hard to solve, but it may in fact be unidentifiable. In this paper, we
establish conditions under which this problem becomes identifiable, and
further prove isometry properties in the case when the matrices $A_i$ are
Hermitian and sampled from a complex Gaussian
distribution. Moreover, we explore a nonconvex optimization formulation of
this problem, and establish salient features of the associated optimization
landscape that enable gradient algorithms with an arbitrary initialization to
converge to a \emph{globally optimal} point with high probability. Our
results also reveal sample complexity requirements for successfully identifying
a feasible solution in these contexts.

Comment: 21 pages
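A minimal sketch of the measurement model, assuming Hermitian matrices drawn from a complex Gaussian ensemble as described in the abstract (the dimensions are illustrative). It also shows why identifiability can only hold modulo a global phase:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 100
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # unknown complex vector

def hermitian_gaussian(rng, n):
    # A Hermitian matrix built from a complex Gaussian ensemble: (B + B^H) / 2.
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

A = [hermitian_gaussian(rng, n) for _ in range(m)]
# <A_i x, x> = x^H A_i x is real because each A_i is Hermitian.
y = np.array([np.vdot(x, Ai @ x).real for Ai in A])

# The measurements are invariant under a global phase rotation of x, so any
# identifiability result can only hold modulo this ambiguity.
x_rot = np.exp(1j * 0.7) * x
y_rot = np.array([np.vdot(x_rot, Ai @ x_rot).real for Ai in A])
```

Here `y` and `y_rot` coincide exactly, which is the intrinsic unidentifiability the paper's conditions must work around.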
Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies
Gradient-based methods have been widely used for system design and
optimization in diverse application domains. Recently, there has been a renewed
interest in studying theoretical properties of these methods in the context of
control and reinforcement learning. This article surveys some of the recent
developments on policy optimization, a gradient-based iterative approach for
feedback control synthesis, popularized by successes of reinforcement learning.
We take an interdisciplinary perspective in our exposition that connects
control theory, reinforcement learning, and large-scale optimization. We review
a number of recently developed theoretical results on the optimization
landscape, global convergence, and sample complexity of gradient-based methods
for various continuous control problems such as the linear quadratic regulator
(LQR), $\mathcal{H}_\infty$ control, risk-sensitive control, linear quadratic
Gaussian (LQG) control, and output feedback synthesis. In conjunction with
these optimization results, we also discuss how direct policy optimization
handles stability and robustness concerns in learning-based control, two main
desiderata in control engineering. We conclude the survey by pointing out
several challenges and opportunities at the intersection of learning and
control.

Comment: To appear in Annual Review of Control, Robotics, and Autonomous
Systems
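As a concrete illustration of the policy-optimization viewpoint, a scalar LQR instance can be solved by plain gradient descent on the closed-loop cost; gradient dominance of the LQR cost, one of the surveyed landscape results, is what makes such descent globally convergent from any stabilizing gain. Everything below (the finite-difference gradient, step size, and instance data) is an illustrative sketch, not the survey's algorithm:

```python
# Scalar LQR instance: x_{t+1} = a*x_t + b*u_t, static policy u_t = -k*x_t,
# infinite-horizon cost sum_t (q*x_t^2 + r*u_t^2).
a, b, q, r = 1.2, 1.0, 1.0, 1.0

def cost(k):
    # Closed-loop value P(k) solves the Lyapunov equation
    # P = q + r*k^2 + (a - b*k)^2 * P; this is the cost per unit of x_0^2.
    cl = a - b * k
    assert abs(cl) < 1.0, "policy must be stabilizing"
    return (q + r * k * k) / (1.0 - cl * cl)

def grad(k, h=1e-6):
    # Finite-difference stand-in for the analytic policy gradient.
    return (cost(k + h) - cost(k - h)) / (2.0 * h)

k = 0.9                               # any stabilizing gain: |a - b*k| < 1
for _ in range(2000):
    k -= 1e-3 * grad(k)               # plain gradient descent on J(k)

# Reference solution from the scalar Riccati fixed point.
P = q
for _ in range(10000):
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
k_star = a * b * P / (r + b * b * P)
```

Despite the nonconvexity of `cost` over the set of stabilizing gains, gradient descent recovers the Riccati-optimal gain `k_star`, which is the scalar analogue of the global-convergence results the survey reviews.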
Hessian barrier algorithms for linearly constrained optimization problems
In this paper, we propose an interior-point method for linearly constrained
optimization problems (possibly nonconvex). The method - which we call the
Hessian barrier algorithm (HBA) - combines a forward Euler discretization of
Hessian Riemannian gradient flows with an Armijo backtracking step-size policy.
In this way, HBA can be seen as an alternative to mirror descent (MD), and
contains as special cases the affine scaling algorithm, regularized Newton
processes, and several other iterative solution methods. Our main result is
that, modulo a non-degeneracy condition, the algorithm converges to the
problem's set of critical points; hence, in the convex case, the algorithm
converges globally to the problem's minimum set. In the case of linearly
constrained quadratic programs (not necessarily convex), we also show that the
method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho \in (0,1]$
that depends only on the choice of kernel function (i.e., not on the problem's
primitives). These theoretical results are validated by numerical experiments
on standard non-convex test functions and large-scale traffic assignment
problems.

Comment: 27 pages, 6 figures
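A minimal sketch of the update described above, using the entropic kernel on the positive orthant with a toy quadratic objective; the kernel choice, objective, and constants are illustrative assumptions, not the paper's test problems:

```python
import numpy as np

# Toy objective on the positive orthant; the unconstrained minimizer (0.5, 2.0)
# lies in the interior, so the method should converge to it.
target = np.array([0.5, 2.0])

def f(x):
    return 0.5 * np.sum((x - target) ** 2)

def grad(x):
    return x - target

# Entropic kernel h(x) = sum_i x_i log x_i gives the Hessian metric
# H(x) = diag(1/x), so the step direction is v = -H(x)^{-1} grad f(x)
# = -x * grad f(x) (elementwise): a forward Euler discretization of the
# Hessian Riemannian gradient flow.
x = np.array([3.0, 0.2])
for _ in range(200):
    g = grad(x)
    v = -x * g
    alpha, sigma = 1.0, 1e-4
    while True:                        # Armijo backtracking: keeps x > 0 and
        x_new = x + alpha * v          # enforces sufficient decrease
        if np.all(x_new > 0) and f(x_new) <= f(x) + sigma * alpha * (g @ v):
            break
        alpha *= 0.5
    x = x_new
```

The interior-point character comes for free: the backtracking line search rejects any step that leaves the positive orthant, so no projection is ever needed.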