776 research outputs found
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix formed as the sum of an unknown diagonal matrix
and an unknown low-rank positive semidefinite matrix, decompose it into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given points
$v_1, v_2, \ldots, v_n \in \mathbb{R}^k$ (where $n > k$) determine whether there is a centered
ellipsoid passing \emph{exactly} through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace $\mathcal{U}$ that
ensures any positive semidefinite matrix $L$ with column space $\mathcal{U}$ can be
recovered from $D + L$ for any diagonal matrix $D$ using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them.
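The ellipsoid-fitting question above is a linear feasibility problem in disguise: a centered ellipsoid $\{x : x^\top A x = 1\}$ passes exactly through given points iff the linear constraints on the symmetric matrix $A$ admit a positive definite solution. A minimal plain-Python sketch for the two-dimensional case with three points (our own toy instance; the function name is hypothetical):

```python
# Toy 2-D instance of the ellipsoid-fitting question (our own example):
# a centered ellipse {(x, y) : a*x^2 + 2*b*x*y + c*y^2 = 1} passes
# exactly through three points iff the 3x3 linear system below has a
# solution (a, b, c) with [[a, b], [b, c]] positive definite.

def fit_centered_ellipse(points):
    """points: three (x, y) pairs; returns (a, b, c) or None."""
    # One linear equation per point, unknowns (a, b, c).
    m = [[x * x, 2 * x * y, y * y, 1.0] for (x, y) in points]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        if abs(m[piv][col]) < 1e-12:
            return None  # singular or inconsistent system
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][i] - f * m[col][i] for i in range(4)]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    # Positive definiteness makes it a genuine ellipse.
    return (a, b, c) if a > 0 and a * c - b * b > 0 else None

# Three points on the ellipse x^2/4 + y^2 = 1:
sol = fit_centered_ellipse([(2.0, 0.0), (0.0, 1.0), (2 ** 0.5, 2 ** -0.5)])
```

In higher dimensions the same coefficient-matching idea yields a semidefinite feasibility problem rather than a plain linear solve.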
A Constant-Factor Approximation for Multi-Covering with Disks
We consider variants of the following multi-covering problem with disks. We
are given two point sets $Y$ (servers) and $Z$ (clients) in the plane, a
coverage function $\kappa : Z \to \mathbb{N}$, and a constant $\alpha \geq 1$. Centered at each server is a single disk whose radius we are free to
set. The requirement is that each client $z \in Z$ be covered by at least
$\kappa(z)$ of the server disks. The objective function we wish to minimize is
the sum of the $\alpha$-th powers of the disk radii. We present a polynomial
time algorithm for this problem achieving an $O(1)$-approximation.
Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations
It is well-known that any sum of squares (SOS) program can be cast as a
semidefinite program (SDP) of a particular structure and that therein lies the
computational bottleneck for SOS programs, as the SDPs generated by this
procedure are large and costly to solve when the polynomials involved in the
SOS programs have a large number of variables and degree. In this paper, we
review SOS optimization techniques and present two new methods for improving
their computational efficiency. The first method leverages the sparsity of the
underlying SDP to obtain computational speed-ups. Further improvements can be
obtained if the coefficients of the polynomials that describe the problem have
a particular sparsity pattern, called chordal sparsity. The second method
bypasses semidefinite programming altogether and relies instead on solving a
sequence of more tractable convex programs, namely linear and second order cone
programs. This opens up the question as to how well one can approximate the
cone of SOS polynomials by second order representable cones. In the last part
of the paper, we present some recent negative results related to this question.
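The SOS-to-SDP casting described above can be made concrete on a tiny univariate example (our own, not from the tutorial): with monomial basis $z = (1, x, x^2)$, the polynomial $p(x) = 1 + 2x^2 + x^4$ is SOS iff some positive semidefinite Gram matrix $Q$ satisfies $p(x) = z^\top Q z$. Coefficient matching leaves a one-parameter affine family $Q(t)$, so the SDP collapses to a one-dimensional feasibility search:

```python
import itertools

# SOS-to-SDP on a toy univariate instance (our own example): with
# basis z = (1, x, x^2), every Q(t) below satisfies
#   z^T Q(t) z = 1 + 2x^2 + x^4,
# so p is SOS iff some Q(t) is positive semidefinite.

def gram(t):
    return [[1.0, 0.0, t],
            [0.0, 2.0 - 2.0 * t, 0.0],
            [t, 0.0, 1.0]]

def is_psd_3x3(q):
    # A symmetric matrix is PSD iff all principal minors are >= 0.
    def minor(idx):
        s = [[q[i][j] for j in idx] for i in idx]
        if len(s) == 1:
            return s[0][0]
        if len(s) == 2:
            return s[0][0] * s[1][1] - s[0][1] * s[1][0]
        return (s[0][0] * (s[1][1] * s[2][2] - s[1][2] * s[2][1])
                - s[0][1] * (s[1][0] * s[2][2] - s[1][2] * s[2][0])
                + s[0][2] * (s[1][0] * s[2][1] - s[1][1] * s[2][0]))
    return all(minor(idx) >= -1e-9
               for k in (1, 2, 3)
               for idx in itertools.combinations(range(3), k))

# Q(t) turns out to be PSD exactly for t in [-1, 1].
witnesses = [t / 10 for t in range(-20, 21) if is_psd_3x3(gram(t / 10))]
```

In a real SOS program the search over Gram matrices is high-dimensional, which is precisely why the generated SDPs become the computational bottleneck.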
Game Efficiency Through Linear Programming Duality
The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst-case ratio between the value of an equilibrium (a solution of the game) and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms, and the similarity of the price of anarchy to measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games.
In this paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe for analyzing the efficiency of games and for deriving concepts that lead to improvements. The approach is particularly appropriate for bounding the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach indeed captures the smoothness framework and also some current non-smooth techniques and concepts. We show its applicability to a wide variety of games and environments, from congestion games to Bayesian welfare, and from full-information settings to incomplete-information ones.
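As a concrete anchor for the PoA definition, the classic Pigou network (a textbook illustration, not taken from this paper) already exhibits a nontrivial price of anarchy:

```python
# Pigou's example (textbook illustration, not from the paper): one unit
# of selfish traffic over two parallel links with latencies l1(x) = x
# and l2(x) = 1.  At equilibrium everyone uses link 1 (its latency
# never exceeds 1), while the social optimum splits the traffic.

def social_cost(x):
    """Total latency when a fraction x of traffic uses link 1."""
    return x * x + (1.0 - x) * 1.0

opt = min(social_cost(i / 10000) for i in range(10001))  # x = 1/2 -> 3/4
equilibrium_cost = social_cost(1.0)                      # everyone on link 1
poa = equilibrium_cost / opt                             # = 4/3
```

Duality-based PoA analysis certifies such ratios by exhibiting a feasible dual solution whose value bounds the optimum from below.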
Inverse Optimization: Closed-form Solutions, Geometry and Goodness of fit
In classical inverse linear optimization, one assumes a given solution is a
candidate to be optimal. Real data is imperfect and noisy, so there is no
guarantee this assumption is satisfied. Inspired by regression, this paper
presents a unified framework for cost function estimation in linear
optimization comprising a general inverse optimization model and a
corresponding goodness-of-fit metric. Although our inverse optimization model
is nonconvex, we derive a closed-form solution and present the geometric
intuition. Our goodness-of-fit metric, $\rho$, the coefficient of
complementarity, has similar properties to $R^2$ from regression and is
quasiconvex in the input data, leading to an intuitive geometric
interpretation. While $\rho$ is computable in polynomial time, we derive a
lower bound that possesses the same properties, is tight for several important
model variations, and is even easier to compute. We demonstrate the application
of our framework for model estimation and evaluation in production planning and
cancer therapy.
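The geometric intuition behind inverse linear optimization can be sketched on a toy instance (our own, with hypothetical helper names): a candidate solution that is tight on some constraint of $\{x : a_i^\top x \geq b_i\}$ is made optimal for $\min c^\top x$ by taking $c$ to be that constraint's normal, while a strictly interior point is optimal for no nonzero cost.

```python
# Toy inverse LP (our own illustrative instance): the forward problem
# is  min c.x  subject to  a_i . x >= b_i.  If the candidate x0 is
# tight on constraint i, then c = a_i certifies x0 as optimal, because
# c.x = a_i . x >= b_i = c.x0 for every feasible x.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def inverse_lp(constraints, x0, tol=1e-9):
    """constraints: list of (a, b) meaning a.x >= b.  Returns a cost
    vector making x0 optimal, or None if x0 is strictly interior."""
    for a, b in constraints:
        if abs(dot(a, x0) - b) <= tol:
            return a
    return None

cons = [((1.0, 0.0), 0.0),   # x >= 0
        ((0.0, 1.0), 0.0),   # y >= 0
        ((1.0, 1.0), 1.0)]   # x + y >= 1
cost = inverse_lp(cons, (0.3, 0.7))        # tight on x + y >= 1
interior = inverse_lp(cons, (2.0, 2.0))    # tight on nothing
```

The paper's model goes further, minimally perturbing a noisy candidate so that it becomes optimal; this sketch only shows the noiseless boundary case.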
The Metric Nearness Problem
Metric nearness refers to the problem of optimally restoring metric properties to
distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric
data can be important in various settings, for example, in clustering, classification, metric-based
indexing, query processing, and graph theoretic approximation algorithms. This paper formulates
and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a "nearest" set
of distances that satisfy the properties of a metric, principally the triangle inequality. For solving
this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative
projection method. An intriguing aspect of the metric nearness problem is that a special case turns
out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and
develops a new algorithm for the latter problem using a primal-dual method. Applications to graph
clustering are provided as an illustration. We include experiments that demonstrate the computational
superiority of triangle fixing over general purpose convex programming software. Finally, we
conclude by suggesting various useful extensions and generalizations to metric nearness.
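The all-pairs shortest paths connection mentioned above is the decrease-only special case: replacing each dissimilarity by the shortest-path distance it induces yields the largest values that sit pointwise below the input and satisfy every triangle inequality. A minimal Floyd-Warshall sketch (the example data is our own):

```python
# Decrease-only metric nearness via all-pairs shortest paths
# (Floyd-Warshall); toy data is our own.  Shortest-path distances are
# the largest values pointwise below the input dissimilarities that
# satisfy every triangle inequality.

def decrease_only_metric_nearness(d):
    n = len(d)
    m = [row[:] for row in d]                 # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m

# d violates the triangle inequality: d[0][2] = 10 > d[0][1] + d[1][2].
d = [[0, 1, 10],
     [1, 0, 2],
     [10, 2, 0]]
fixed = decrease_only_metric_nearness(d)      # fixed[0][2] becomes 3
```

The general problem, where distances may move in either direction, is what the paper's triangle-fixing projection algorithms address.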
Approximate Clustering via Metric Partitioning
In this paper we consider two metric covering/clustering problems:
\textit{Minimum Cost Covering Problem} (MCC) and $k$-clustering. In the MCC
problem, we are given two point sets $X$ (clients) and $Y$ (servers), and a
metric on $X \cup Y$. We would like to cover the clients by balls centered at
the servers. The objective function to minimize is the sum of the $\alpha$-th
power of the radii of the balls. Here $\alpha \geq 1$ is a parameter of the
problem (but not of a problem instance). MCC is closely related to the
$k$-clustering problem. The main difference between $k$-clustering and MCC is
that in $k$-clustering one needs to select $k$ balls to cover the clients.
For any $\epsilon > 0$, we describe quasi-polynomial time $(1 + \epsilon)$
approximation algorithms for both of the problems. However, in the case of
$k$-clustering the algorithm uses $(1 + \epsilon)k$ balls. Prior to our work, a
$3^{\alpha}$ and a $c^{\alpha}$ approximation were achieved by
polynomial-time algorithms for MCC and $k$-clustering, respectively, where $c > 1$ is an absolute constant. These two problems are thus interesting examples of
metric covering/clustering problems that admit $(1 + \epsilon)$-approximation
(using $(1+\epsilon)k$ balls in the case of $k$-clustering), if one is willing to
settle for quasi-polynomial time. In contrast, for the variant of MCC where
$\alpha$ is part of the input, we show under standard assumptions that no
polynomial time algorithm can achieve an approximation factor better than
$O(\log |X|)$ for $\alpha \geq \log |X|$.
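To make the MCC objective concrete, here is a brute-force solver for a tiny one-dimensional instance (our own toy data; exponential enumeration, not the quasi-polynomial algorithm of the paper):

```python
import itertools

# Brute-force MCC on a toy 1-D instance (our own data; exponential
# enumeration, not the paper's algorithm).  Each client is assigned to
# a server, each server's radius must reach its farthest client, and
# the cost is the sum of the radii raised to the power alpha.

def mcc_brute_force(clients, servers, alpha):
    best = float("inf")
    for assign in itertools.product(range(len(servers)),
                                    repeat=len(clients)):
        radii = [0.0] * len(servers)
        for c, s in zip(clients, assign):
            radii[s] = max(radii[s], abs(c - servers[s]))  # 1-D metric
        best = min(best, sum(r ** alpha for r in radii))
    return best

# Two servers at 0 and 10; the optimum serves each cluster locally:
# radii (1, 2), cost 1^2 + 2^2 = 5.
cost = mcc_brute_force(clients=[-1.0, 1.0, 9.0, 12.0],
                       servers=[0.0, 10.0], alpha=2.0)
```

Note how larger $\alpha$ penalizes a single big ball more heavily, which is why the hardness of the problem depends on whether $\alpha$ is fixed or part of the input.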