Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
The affine rank minimization problem consists of finding a matrix of minimum
rank that satisfies a given system of linear equality constraints. Such
problems have appeared in the literature of a diverse set of fields including
system identification and control, Euclidean embedding, and collaborative
filtering. Although specific instances can often be solved with specialized
algorithms, the general affine rank minimization problem is NP-hard. In this
paper, we show that if a certain restricted isometry property holds for the
linear transformation defining the constraints, the minimum rank solution can
be recovered by solving a convex optimization problem, namely the minimization
of the nuclear norm over the given affine space. We present several random
ensembles of equations where the restricted isometry property holds with
overwhelming probability. The techniques used in our analysis have strong
parallels in the compressed sensing framework. We discuss how affine rank
minimization generalizes this pre-existing concept and outline a dictionary
relating concepts from cardinality minimization to those of rank minimization.
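The nuclear norm heuristic described above can be illustrated numerically. The sketch below (an illustration, not the paper's algorithm; the problem size, sampling rate, and threshold are arbitrary choices) recovers a low-rank matrix from a subset of its entries by iterative soft-thresholding of singular values, a simple proximal scheme for nuclear-norm minimization:

```python
import numpy as np

# Illustrative sketch: complete a rank-2 matrix from ~60% of its entries
# via soft-impute-style singular-value thresholding. All sizes and the
# threshold tau are arbitrary demo choices.
rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.6                                # observed entries

X, tau = np.zeros((n, n)), 0.1
for _ in range(300):
    Y = np.where(mask, M, X)      # keep observed entries, fill rest from X
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The soft-thresholding step is exactly the proximal operator of the nuclear norm, which is why shrinking singular values drives the iterates toward a low-rank completion.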
Set optimization - a rather short introduction
Recent developments in set optimization are surveyed and extended including
various set relations as well as fundamental constructions of a convex analysis
for set- and vector-valued functions, and duality for set optimization
problems. Extensive sections with bibliographical comments summarize the state
of the art. Applications to vector optimization and financial risk measures are
discussed along with algorithmic approaches to set optimization problems.
A simple view on convex analysis and its applications
Our aim is to give a simple view on the basics and applications of convex analysis. The essential feature of this account is the systematic use of the possibility to associate to each convex object---such as a convex set, a convex function, or a convex extremal problem---a cone, without loss of information. The core of convex analysis is the possibility of a dual description of convex objects, geometric and algebraic, based on the duality of vector spaces; for each type of convex object, this property is encoded in an operator of duality, and the name of the game is how to calculate these operators. The core of this paper is a unified presentation, for each type of convex object, of the duality theorem and the complete list of calculus rules.
Now we enumerate the advantages of the `cone'-approach. It gives a unified and transparent view on the subject. The intricate rules of the convex calculus all flow naturally from one common source. We have included for each rule a precise description of the weakest convenient assumption under which it is valid. This appears to be useful for applications; however, these assumptions are usually not given. We explain why certain convex objects have to be excluded in the definition of the operators of duality: the collection of cones associated to the target of an operator of duality need not be closed (here `closed' is meant in an algebraic sense). This makes clear that the remedy is to take the closure of the target. As a byproduct of the cone approach, we have found the solution of the open problem of how to use the polar operation to give a dual description of arbitrary convex sets.
The approach given can be extended to the infinite-dimensional case.
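The dual description via cones admits a small numerical sketch. The example below (an illustration of the bipolar theorem, not taken from the paper; the generators are an arbitrary choice) works with a finitely generated cone K in R^2, its polar K° = {y : <g, y> <= 0 for every generator g of K}, and checks that taking the polar twice returns K:

```python
import numpy as np

# K = cone{(1,0), (1,1)} in R^2; its polar is cut out by the generators,
# and here the polar's own generators were derived by hand: (0,-1), (-1,1).
G = np.array([[1.0, 0.0], [1.0, 1.0]])          # rows generate K
G_polar = np.array([[0.0, -1.0], [-1.0, 1.0]])  # rows generate K° (hand-derived)

def in_cone(x, gens, tol=1e-9):
    """x lies in cone(gens) iff x = gens.T @ lam with lam >= 0."""
    lam = np.linalg.lstsq(gens.T, x, rcond=None)[0]
    return bool(np.all(lam >= -tol) and np.allclose(gens.T @ lam, x, atol=1e-8))

def in_polar(y, gens, tol=1e-9):
    """y lies in the polar of cone(gens) iff <g, y> <= 0 for every generator g."""
    return bool(np.all(gens @ y <= tol))

# Bipolar check: every generator of K lies in the polar of K°, i.e. (K°)° ⊇ K.
bipolar_ok = all(in_polar(g, G_polar) for g in G)
```

The membership tests are the two sides of the duality: a primal (algebraic) description by generators and a dual (geometric) description by linear inequalities.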
An Introduction to A Class of Matrix Optimization Problems
Ph.D. thesis (Doctor of Philosophy)
Variational Properties of Decomposable Functions Part II: Strong Second-Order Theory
Local superlinear convergence of the semismooth Newton method usually
requires the uniform invertibility of the generalized Jacobian matrix, e.g.
BD-regularity or CD-regularity. For several types of nonlinear programming and
composite-type optimization problems -- for which the generalized Jacobian of
the stationary equation can be calculated explicitly -- this is characterized
by the strong second-order sufficient condition. However, general
characterizations are still not well understood. In this paper, we propose a
strong second-order sufficient condition (SSOSC) for composite problems whose
nonsmooth part has a generalized conic-quadratic second subderivative. We then
discuss the relationship between the SSOSC and another second-order-type
condition that involves the generalized Jacobians of the normal map. In
particular, these two conditions are equivalent under certain structural
assumptions on the generalized Jacobian matrix of the proximity operator. Next,
we verify these structural assumptions for -strictly decomposable
functions via analyzing their second-order variational properties under
additional geometric assumptions on the support set of the decomposition pair.
Finally, we show that the SSOSC is further equivalent to the strong metric
regularity condition of the subdifferential, the normal map, and the natural
residual. Counterexamples illustrate the necessity of our assumptions.
Comment: 28 pages; preliminary draft
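The semismooth Newton method that motivates this abstract can be shown on a toy scalar problem (an illustration only, far from the paper's composite setting): solve the nonsmooth equation F(x) = x^3 + |x| - 2 = 0, whose root is x = 1, using an element of the Clarke generalized Jacobian in place of the derivative:

```python
import numpy as np

# Toy semismooth Newton iteration. A valid element of the generalized
# Jacobian of F(x) = x**3 + |x| - 2 is 3x**2 + sign(x) (at x = 0, any
# value in [-1, 1] may be selected; np.sign picks 0). Invertibility of
# this generalized derivative near the root is what BD/CD-regularity
# requires for local superlinear convergence.
def F(x):
    return x**3 + abs(x) - 2.0

def gen_jac(x):
    return 3.0 * x**2 + np.sign(x)

x = 2.0
for _ in range(20):
    d = gen_jac(x)
    if abs(d) < 1e-14:   # generalized Jacobian not invertible; stop
        break
    x -= F(x) / d        # semismooth Newton step
```

In the paper's setting the scalar derivative is replaced by a generalized Jacobian of the normal map or natural residual, and conditions like the SSOSC guarantee its uniform invertibility.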
Low rank representations of matrices using nuclear norm heuristics
2014 Summer.
The pursuit of low-dimensional structure from high-dimensional data leads in many instances to finding the lowest-rank matrix among a parameterized family of matrices. In its most general setting, this problem is NP-hard. Different heuristics have been introduced for approaching the problem; among them is the nuclear norm heuristic for rank minimization. One aspect of this thesis is the application of the nuclear norm heuristic to the Euclidean distance matrix completion problem. As a special case, the approach is applied to the graph embedding problem. More generally, semidefinite programming, convex optimization, and the nuclear norm heuristic are applied to the graph embedding problem in order to extract invariants such as the chromatic number, R^n embeddability, and Borsuk embeddability. In addition, we apply related techniques to decompose a matrix into components which simultaneously minimize a linear combination of the nuclear norm and the spectral norm. In the case when the Euclidean distance matrix is the distance matrix of a complete k-partite graph, it is shown that the nuclear norm of the associated positive semidefinite matrix can be evaluated in terms of the second elementary symmetric polynomial evaluated at the partition. We prove that for k-partite graphs the maximum value of the nuclear norm of the associated positive semidefinite matrix is attained when there is an equal number of vertices in each set of the partition. We use this result to determine a lower bound on the chromatic number of the graph. Finally, we describe a convex optimization approach to the decomposition of a matrix into two components using the nuclear norm and spectral norm.
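One fact underlying the tractability of nuclear-norm evaluation for the positive semidefinite matrices associated with Euclidean distance matrices can be checked directly: for a PSD matrix the singular values coincide with the eigenvalues, so the nuclear norm equals the trace. A quick numerical check on random data (illustrative only):

```python
import numpy as np

# For P positive semidefinite, nuclear norm = sum of singular values
# = sum of eigenvalues = trace(P). Random 6x6 instance for the demo.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
P = A @ A.T                                       # PSD by construction
nuc = np.linalg.svd(P, compute_uv=False).sum()    # nuclear norm
tr = np.trace(P)
```

This identity is what reduces the nuclear norm of the PSD matrix associated with a complete k-partite graph to an expression in the partition sizes.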