Computing quasisolutions of nonlinear inverse problems via efficient minimization of trust region problems
In this paper we present a method for the regularized solution of nonlinear
inverse problems, based on Ivanov regularization (also called method of quasi
solutions or constrained least squares regularization). This leads to the
minimization of a non-convex cost function under a norm constraint, where
non-convexity is caused by nonlinearity of the inverse problem. Minimization is
done by iterative approximation, using (non-convex) quadratic Taylor expansions
of the cost function. This leads to repeated solution of quadratic trust region
subproblems with possibly indefinite Hessian. Thus the key step of the method
consists in the application of an efficient solver for such quadratic
subproblems, developed by Rendl and Wolkowicz [10]. Here we present a
convergence analysis of the overall method as well as numerical experiments.
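The trust region subproblem at the core of the method above can be sketched concretely. The following is a minimal eigendecomposition-based solver for min ½xᵀHx + gᵀx over ‖x‖ ≤ Δ with a possibly indefinite H, valid only in the easy case and for small dense matrices; it is a stand-in illustration, not the Rendl-Wolkowicz algorithm the paper actually employs:

```python
import numpy as np

def solve_trs(H, g, delta, tol=1e-12):
    """Global solution of min 0.5*x'Hx + g'x s.t. ||x|| <= delta.

    Handles indefinite H in the easy case (g not orthogonal to the
    eigenspace of the smallest eigenvalue); the hard case would need
    extra care. Dense eigendecomposition only -- an illustrative
    stand-in for the large-scale Rendl-Wolkowicz solver.
    """
    w, V = np.linalg.eigh(H)
    gt = V.T @ g
    if w[0] > 0:                      # PD Hessian: try the Newton step
        x = -V @ (gt / w)
        if np.linalg.norm(x) <= delta:
            return x
    # Constraint active: find lam >= max(0, -w[0]) with ||x(lam)|| = delta,
    # where x(lam) = -(H + lam*I)^{-1} g; this norm is decreasing in lam.
    norm_x = lambda lam: np.linalg.norm(gt / (w + lam))
    lo = max(0.0, -w[0]) + 1e-14
    hi = lo + 1.0
    while norm_x(hi) > delta:         # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:              # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_x(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return -V @ (gt / (w + lam))
```

The returned point satisfies the TRS optimality conditions (H + λI)x = −g with H + λI ⪰ 0 and λ ≥ 0 complementary to the norm constraint.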
Local Nonglobal Minima for Solving Large Scale Extended Trust Region Subproblems
We study large scale extended trust region subproblems (eTRS), i.e., the
minimization of a general quadratic function subject to a norm constraint,
known as the trust region subproblem (TRS) but with an additional linear
inequality constraint. It is well known that strong duality holds for the TRS
and that there are efficient algorithms for solving large scale TRS problems.
It is also known that there can exist at most one local non-global minimizer
(LNGM) for TRS. We combine this with known characterizations for strong duality
for eTRS and, in particular, connect this with the so-called hard case for TRS.
We begin with a recent characterization of the minimum for the TRS via a
generalized eigenvalue problem and extend this result to the LNGM. We then use
this to derive an efficient algorithm that finds the global minimum for eTRS by
solving at most three generalized eigenvalue problems.
Comment: 25 pages including table of contents and index; 8 tables
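The "at most one local non-global minimizer" property of the TRS is easy to visualize numerically: on the boundary ‖x‖ = Δ of a two-dimensional instance, the objective becomes a function of the angle, and a dense scan finds at most two local minima, the global one plus possibly one LNGM. The instance below is made up for illustration:

```python
import numpy as np

# 2-D TRS instance (illustrative, not from the paper): on the boundary
# ||x|| = delta the objective 0.5*x'Hx + g'x reduces to a function of
# the angle theta; we count its local minima by a dense circular scan.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([0.5, 0.1])
delta = 1.0

theta = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
X = delta * np.stack([np.cos(theta), np.sin(theta)])
fvals = 0.5 * np.sum(X * (H @ X), axis=0) + g @ X
# a sample is a local minimum if it beats both circular neighbors
is_min = (fvals < np.roll(fvals, 1)) & (fvals < np.roll(fvals, -1))
n_min = int(is_min.sum())   # 2 here: the global minimizer and one LNGM
```

Since H is indefinite there is no interior local minimum, so the count on the boundary is the whole story for this instance.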
A Second-Order Cone Based Approach for Solving the Trust Region Subproblem and Its Variants
We study the trust-region subproblem (TRS) of minimizing a nonconvex
quadratic function over the unit ball with additional conic constraints.
Despite having a nonconvex objective, it is known that the classical TRS and a
number of its variants are polynomial-time solvable. In this paper, we follow a
second-order cone (SOC) based approach to derive an exact convex reformulation
of the TRS under a structural condition on the conic constraint. Our structural
condition is immediately satisfied when there are no additional conic
constraints, and it generalizes several such conditions studied in the
literature. As a result, our study highlights an explicit connection between
the classical nonconvex TRS and smooth convex quadratic minimization, which
allows for the application of cheap iterative methods, such as Nesterov's
accelerated gradient descent, to the TRS. Furthermore, under slightly stronger
conditions, we give a low-complexity characterization of the convex hull of the
epigraph of the nonconvex quadratic function intersected with the constraints
defining the domain without any additional variables. We also explore the
inclusion of additional hollow constraints in the domain of the TRS, and the
convexification of the associated epigraph.
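The connection to smooth convex minimization can be made concrete in the classical case with no extra conic constraints. When λ₁ = λ_min(A) < 0, the TRS min_{‖x‖≤1} xᵀAx + 2bᵀx equals the convex problem min_{‖x‖≤1} xᵀ(A − λ₁I)x + 2bᵀx + λ₁, so a projected accelerated gradient method applies directly. A self-contained sketch (the instance is random; this is the idea, not the paper's full reformulation with conic constraints):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)                    # indefinite random symmetric matrix
b = rng.standard_normal(n)
lam1 = np.linalg.eigvalsh(A)[0]        # negative for this instance

Ac = A - lam1 * np.eye(n)              # PSD-shifted Hessian of the convex surrogate
L = 2.0 * np.linalg.eigvalsh(Ac)[-1]   # gradient Lipschitz constant

def proj(z):                           # projection onto the unit ball
    nz = np.linalg.norm(z)
    return z if nz <= 1.0 else z / nz

x = np.zeros(n); y = x.copy(); t = 1.0
for _ in range(2000):                  # FISTA-style accelerated projected gradient
    x_new = proj(y - 2.0 * (Ac @ y + b) / L)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new

f_val = x @ Ac @ x + 2.0 * b @ x + lam1   # approximates the TRS optimal value
```

Despite the nonconvex original objective, the iteration above is plain convex first-order optimization, which is exactly the point of the reformulation.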
Generalized Low-Rank Optimization for Topological Cooperation in Ultra-Dense Networks
Network densification is a natural way to support dense mobile applications
under stringent requirements, such as ultra-low latency, ultra-high data rate,
and massive device connectivity. Severe interference in ultra-dense networks
poses a key bottleneck. Sharing channel state information (CSI) and messages
across transmitters can potentially alleviate interference and improve system
performance. Most existing works on interference coordination require
significant CSI signaling overhead and are impractical in ultra-dense networks.
This paper investigates topological cooperation, which manages interference
through message sharing based only on network connectivity information. In particular,
we propose a generalized low-rank optimization approach to maximize achievable
degrees-of-freedom (DoFs). To tackle the challenges of the poorly structured
objective and the non-convex rank function, we solve a sequence of complex
fixed-rank subproblems through a rank growth strategy. For each fixed-rank
subproblem, we develop Riemannian optimization algorithms that exploit the
non-compact Stiefel manifold formed by the set of complex full-column-rank
matrices, combined with the semidefinite lifting technique and the
Burer-Monteiro factorization approach.
Numerical results demonstrate the computational efficiency and higher DoFs
achieved by the proposed algorithms.
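The Burer-Monteiro idea of replacing a PSD matrix variable X by a thin factorization X = YYᵀ and optimizing over Y on a manifold can be shown on a generic toy problem. The sketch below minimizes ⟨C, YYᵀ⟩ with unit diagonal (a MaxCut-style SDP, not the paper's DoF model) by Riemannian gradient descent on a product of spheres; all data and step sizes are illustrative:

```python
import numpy as np

# Illustrative Burer-Monteiro sketch: minimize <C, X> over PSD X with
# diag(X) = 1, via X = Y Y^T with few columns p; each row of Y lives on
# the unit sphere, so we do Riemannian gradient descent with a
# normalization retraction. Not the paper's model -- a generic example.
rng = np.random.default_rng(0)
n, p = 30, 5
M = rng.standard_normal((n, n))
C = 0.5 * (M + M.T)

Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # feasible: diag(YY^T) = 1

obj = lambda Y: float(np.sum(C * (Y @ Y.T)))
step = 0.01
vals = [obj(Y)]
for _ in range(500):
    G = 2.0 * C @ Y                              # Euclidean gradient
    # Riemannian gradient: remove each row's component normal to its sphere
    G -= np.sum(G * Y, axis=1, keepdims=True) * Y
    Y = Y - step * G
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # retraction back to spheres
    vals.append(obj(Y))
```

Rank growth, as in the abstract, would rerun this with increasing p until the factorized optimum certifies optimality for the lifted problem.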
A survey of hidden convex optimization
Motivated by the fact that not all nonconvex optimization problems are
difficult to solve, we survey in this paper three widely-used ways to reveal
the hidden convex structure for different classes of nonconvex optimization
problems. Finally, ten open problems are raised.
Comment: 25 pages
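A classic example of hidden convexity: minimizing an indefinite quadratic form over an ellipsoid, min xᵀAx s.t. xᵀBx = 1 with B ≻ 0, is nonconvex yet solved exactly by one generalized eigenvalue computation (strong duality holds). A small numerical check with made-up data:

```python
import numpy as np
from scipy.linalg import eigh

# Hidden convexity example: min x'Ax s.t. x'Bx = 1, B positive definite.
# The optimal value is the smallest generalized eigenvalue of (A, B),
# attained at the corresponding eigenvector scaled onto the constraint.
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)                  # indefinite quadratic form
R = rng.standard_normal((n, n))
B = R @ R.T + n * np.eye(n)          # well-conditioned SPD matrix

w, V = eigh(A, B)                    # generalized eigenvalues, ascending
x = V[:, 0]
x /= np.sqrt(x @ B @ x)              # scale onto the constraint set
opt = x @ A @ x                      # equals w[0]
```

No feasible point can do better, even though the feasible set and objective are both nonconvex, which is the sense in which the problem is "hidden convex".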
Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints
This paper investigates the relation between sequential convex programming
(SCP) as, e.g., defined in [24] and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of the class of algorithms.
Comment: 18 pages, 1 figure
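The core SCP/DC step, linearizing the concave part of a DC constraint at the current iterate and solving the resulting convex subproblem, fits in a few lines on a one-dimensional toy problem (not one of the paper's test problems): minimize (x − 0.2)² subject to 1 − x² ≤ 0, where the constraint is DC with g(x) = 1 and h(x) = x² both convex.

```python
# SCP sketch for a DC constraint (illustrative instance, not the paper's):
#   minimize (x - 0.2)^2   subject to   1 - x^2 <= 0.
# Linearizing h(x) = x^2 at x_k gives the convex subproblem
#   minimize (x - 0.2)^2   s.t.   1 - x_k^2 - 2*x_k*(x - x_k) <= 0,
# i.e. a simple lower bound x >= (1 + x_k^2) / (2*x_k).
x = 3.0                               # feasible starting point
for _ in range(50):
    lb = (1.0 + x * x) / (2.0 * x)    # linearized constraint: x >= lb
    x = max(0.2, lb)                  # unconstrained min at 0.2, then clip
# x converges to the true solution x* = 1 of the nonconvex problem
```

Here the linearized feasible set is always inside the true one (h is convex, so its linearization underestimates it), which is why every iterate stays feasible; the relaxation technique in the abstract addresses the case where the linearizations become inconsistent instead.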
Novel reformulations and efficient algorithms for the generalized trust region subproblem
We present a new solution framework to solve the generalized trust region
subproblem (GTRS) of minimizing a quadratic objective over a quadratic
constraint. More specifically, we derive a convex quadratic reformulation (CQR)
via minimizing a linear objective over two convex quadratic constraints for the
GTRS. We show that an optimal solution of the GTRS can be recovered from an
optimal solution of the CQR. We further prove that this CQR is equivalent to
minimizing the maximum of the two convex quadratic functions derived from the
CQR for the case under our investigation. Although the latter minimax problem
is nonsmooth, it is well-structured and convex. We thus develop two steepest
descent algorithms corresponding to two different line search rules. We prove
for both algorithms their global sublinear convergence rates. We also obtain a
local linear convergence rate of the first algorithm by estimating the
Kurdyka-Lojasiewicz exponent at any optimal solution under mild conditions. We finally
demonstrate the efficiency of our algorithms in our numerical experiments.
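The minimax stage described above, minimizing F(x) = max(q₁(x), q₂(x)) for two convex quadratics, can be sketched with a steepest-descent step that uses a minimum-norm subgradient near the kink and an Armijo backtracking line search. The instance and every tolerance below are illustrative choices of ours, not the paper's two line search rules:

```python
import numpy as np

# Minimal sketch: minimize F(x) = max(q1(x), q2(x)) for two strongly
# convex quadratics, via steepest descent with an epsilon-active
# minimum-norm subgradient and Armijo backtracking (illustrative only).
rng = np.random.default_rng(0)
n = 2
def make_quad():
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n), rng.standard_normal(n), rng.standard_normal()
A1, b1, c1 = make_quad()
A2, b2, c2 = make_quad()
q = lambda A, b, c, z: z @ A @ z + b @ z + c
F = lambda z: max(q(A1, b1, c1, z), q(A2, b2, c2, z))

x = np.zeros(n)
for _ in range(500):
    v1, v2 = q(A1, b1, c1, x), q(A2, b2, c2, x)
    g1, g2 = 2 * A1 @ x + b1, 2 * A2 @ x + b2
    if abs(v1 - v2) <= 1e-3 * (1.0 + abs(v1) + abs(v2)):
        # near the kink: minimum-norm element of conv{g1, g2},
        # a descent direction for both pieces simultaneously
        d = g2 - g1
        t = np.clip(-(g1 @ d) / (d @ d), 0.0, 1.0)
        g = g1 + t * d
    else:
        g = g1 if v1 > v2 else g2
    s, fx = 1.0, F(x)
    while F(x - s * g) > fx - 1e-4 * s * (g @ g) and s > 1e-14:
        s *= 0.5                      # Armijo backtracking on F
    x = x - s * g
```

The minimum-norm choice matters: using only the active gradient near the kink causes zigzagging across the nonsmooth surface, while the combined direction moves along it.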
Uniform Quadratic Optimization and Extensions
The uniform quadratic optimization problem (UQ) is a nonconvex quadratically
constrained quadratic program (QCQP) in which all quadratic forms share the same
Hessian matrix. Based
on the second-order cone programming (SOCP) relaxation, we establish a new
sufficient condition to guarantee strong duality for (UQ) and then extend it to
(QCQP), which not only covers several well-known results in literature but also
partially gives answers to a few open questions. For convex constrained
nonconvex (UQ), we propose an improved approximation algorithm based on (SOCP).
Our approximation bound is dimension-independent. As an application, we
establish the first approximation bound for the problem of finding the
Chebyshev center of the intersection of several balls.
Comment: 28 pages
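The Chebyshev-center application has a compact formulation: maximize r subject to ‖c − cᵢ‖ + r ≤ rᵢ for each ball i, which is an SOCP. The sketch below just hands a tiny two-ball instance to a generic NLP solver rather than using the paper's SOCP-based bound; the instance is made up, with a known answer by symmetry:

```python
import numpy as np
from scipy.optimize import minimize

# Chebyshev center of an intersection of balls (illustrative instance):
#   maximize r   s.t.   ||c - c_i|| + r <= r_i  for each ball i.
# Two disks of radius 2 centered at (0,0) and (1,0); by symmetry the
# largest inscribed ball is centered at (0.5, 0) with radius 1.5.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
radii = np.array([2.0, 2.0])

cons = [{'type': 'ineq',
         'fun': lambda z, i=i: radii[i] - np.linalg.norm(z[:2] - centers[i]) - z[2]}
        for i in range(len(radii))]
# variables z = (c_1, c_2, r); start strictly inside the intersection
res = minimize(lambda z: -z[2], x0=np.array([0.4, 0.1, 0.1]),
               constraints=cons, method='SLSQP')
c_opt, r_opt = res.x[:2], res.x[2]
```

Each constraint says the candidate ball of radius r around c fits inside ball i, so maximizing r yields the largest ball inscribed in the intersection.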
Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization
A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas et al. [2000]. LSTRS is designed for large-scale quadratic problems with one norm constraint. The method is based on a reformulation of the trust-region subproblem as a parameterized eigenvalue problem, and consists of an iterative procedure that finds the optimal value for the parameter. The adjustment of the parameter requires the solution of a large-scale eigenvalue problem at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide for using the software and examples are provided.
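The parameterized eigenvalue idea behind LSTRS can be demonstrated on a toy dense problem. For min ½xᵀHx + gᵀx with ‖x‖ = Δ, the bordered matrix B(α) = [[α, gᵀ], [g, H]] has a smallest eigenpair (λ, (ν, u)) with x = u/ν satisfying (H − λI)x = −g; adjusting α moves λ, hence ‖x‖. The sketch below adjusts α by plain bisection and uses a dense eigensolver, whereas LSTRS itself uses large-scale Lanczos-type eigensolvers and a far smarter parameter update; the easy case is assumed throughout:

```python
import numpy as np

# Toy dense version of the LSTRS parameterization (easy case only):
# for B(alpha) = [[alpha, g'], [g, H]], the smallest eigenpair gives
# x = u/nu with (H - lam*I)x = -g and H - lam*I PSD; we bisect on alpha
# until ||x|| = Delta, since ||x|| is increasing in alpha.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
H = 0.5 * (M + M.T)
g = rng.standard_normal(n)
Delta = 1.0

def x_of_alpha(alpha):
    B = np.block([[np.array([[alpha]]), g[None, :]],
                  [g[:, None], H]])
    w, V = np.linalg.eigh(B)
    v = V[:, 0]                      # eigenvector of the smallest eigenvalue
    return v[1:] / v[0]              # assumes v[0] != 0 (easy case)

lo, hi = -50.0, 50.0                 # assumed-wide bracket on alpha
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(x_of_alpha(mid)) < Delta:
        lo = mid
    else:
        hi = mid
x = x_of_alpha(0.5 * (lo + hi))
```

Because only the smallest eigenpair of B(α) is ever needed, the large-scale version can rely on matrix-vector products alone, which is exactly the property the abstract highlights.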
Introduction to Nonnegative Matrix Factorization
In this paper, we introduce and provide a short overview of nonnegative
matrix factorization (NMF). Several aspects of NMF are discussed, namely, the
application in hyperspectral imaging, geometry and uniqueness of NMF solutions,
complexity, algorithms, and its link with extended formulations of polyhedra.
In order to put NMF into perspective, the more general problem class of
constrained low-rank matrix approximation problems is first briefly introduced.
Comment: 18 pages, 4 figures
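Among the algorithms such an overview covers, the classical Lee-Seung multiplicative updates are the shortest to state: they decrease the Frobenius error ‖X − WH‖ while preserving nonnegativity of the factors. A minimal sketch on synthetic exactly-factorizable data (all sizes and iteration counts are illustrative):

```python
import numpy as np

# Lee-Seung multiplicative updates for NMF: approximate a nonnegative
# matrix X by W @ H with W, H >= 0, reducing the Frobenius error.
rng = np.random.default_rng(0)
m, n, r = 20, 15, 4
X = rng.random((m, r)) @ rng.random((r, n))   # exactly rank-r nonnegative data

W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-12                                   # guards against division by zero
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)      # elementwise multiplicative update
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates are elementwise multiplications by nonnegative ratios, W and H stay nonnegative automatically, with no projection step.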