Relax, no need to round: integrality of clustering formulations
We study exact recovery conditions for convex relaxations of point cloud
clustering problems, focusing on two of the most common optimization problems
for unsupervised clustering: k-means and k-median clustering. Motivations
for focusing on convex relaxations are: (a) they come with a certificate of
optimality, and (b) they are generic tools which are relatively parameter-free,
not tailored to specific assumptions over the input. More precisely, we
consider the distributional setting where there are k clusters in R^m,
and the data from each cluster consists of n points sampled from a
symmetric distribution within a ball of unit radius. We ask: what is the
minimal separation distance between cluster centers needed for convex
relaxations to exactly recover these clusters as the optimal integral
solution? For the k-median linear programming relaxation we show a tight
bound: exact recovery is obtained given arbitrarily small pairwise separation
epsilon > 0 between the balls. In other words, the pairwise center
separation is Delta > 2 + epsilon. Under the same distributional model, the
k-means LP relaxation fails to recover such clusters at separation as large
as Delta = 4. Yet, if we enforce PSD constraints on the k-means LP, we get
exact cluster recovery at center separation Delta > 2*sqrt(2)*(1 + 1/sqrt(m)).
In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the k-means
algorithm) can fail to recover clusters in this setting; even with arbitrarily
large cluster separation, k-means++ with overseeding by any constant factor
fails with high probability at exact cluster recovery. To complement the
theoretical analysis, we provide an experimental study of the recovery
guarantees for these various methods, and discuss several open problems which
these experiments suggest.
Comment: 30 pages, ITCS 201
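The heuristic contrasted above, Lloyd's algorithm, can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's experimental code; the data, initialization, and parameters below are all hypothetical, and the initialization is deliberately placed near the true centers so that recovery succeeds in this toy run (the paper's point is precisely that bad initializations, even with k-means++ overseeding, can fail).

```python
import numpy as np

def lloyd_kmeans(X, k, init=None, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid steps.

    Generic sketch for illustration only; the paper's experiments also
    consider k-means++ seeding and overseeding, which differ from the
    naive uniform initialization used as the fallback here."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, dtype=float) if init is not None
               else X[rng.choice(len(X), size=k, replace=False)])
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest current center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each center as its cluster mean
        # (keep the old center if a cluster becomes empty).
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two unit balls in R^2 with centers 10 apart (far above any recovery threshold).
rng = np.random.default_rng(1)
ball_a = rng.uniform(-1, 1, (50, 2))
ball_b = rng.uniform(-1, 1, (50, 2)) + np.array([10.0, 0.0])
X = np.vstack([ball_a, ball_b])
# Initialization near the true centers, so this particular run recovers exactly.
centers, labels = lloyd_kmeans(X, k=2, init=[[1.0, 0.0], [9.0, 0.0]])
```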
Recovery under Side Constraints
This paper addresses sparse signal reconstruction under various types of
structural side constraints with applications in multi-antenna systems. Side
constraints may result from prior information on the measurement system and the
sparse signal structure. They may involve the structure of the sensing matrix,
the structure of the non-zero support values, the temporal structure of the
sparse representation vector, and the nonlinear measurement structure. First, we
demonstrate how a priori information in the form of structural side constraints
influences recovery guarantees (null space properties) using L1-minimization.
Furthermore, for constant modulus signals, signals with row-, block- and
rank-sparsity, as well as non-circular signals, we illustrate how structural
prior information can be used to devise efficient algorithms with improved
recovery performance and reduced computational complexity. Finally, we address
the measurement system design for linear and nonlinear measurements of sparse
signals. Moreover, we discuss the linear mixing matrix design based on
coherence minimization. Then we extend our focus to nonlinear measurement
systems where we design parallel optimization algorithms to efficiently compute
stationary points in the sparse phase retrieval problem with and without
dictionary learning.
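As background for the L1-minimization recovery guarantees mentioned above, a generic iterative soft-thresholding (ISTA) sketch is given below. This is not one of the paper's side-constraint-aware algorithms, and the Gaussian measurement setup is entirely hypothetical; it only illustrates the unconstrained L1 baseline that the side constraints improve upon.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A generic L1-minimization solver for sparse recovery; purely a baseline
    sketch, with no structural side constraints."""
    if step is None:
        # Step size 1/L, with L the Lipschitz constant of the smooth gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of 0.5*||Ax - y||^2
        z = x - step * grad                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

# Recover a 3-sparse vector in R^100 from 40 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=2000)
```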
The power of sum-of-squares for detecting hidden structures
We study planted problems---finding hidden structures in random noisy
inputs---through the lens of the sum-of-squares semidefinite programming
hierarchy (SoS). This family of powerful semidefinite programs has recently
yielded many new algorithms for planted problems, often achieving the best
known polynomial-time guarantees in terms of accuracy of recovered solutions
and robustness to noise. One theme in recent work is the design of spectral
algorithms which match the guarantees of SoS algorithms for planted problems.
Classical spectral algorithms are often unable to accomplish this: the twist in
these new spectral algorithms is the use of spectral structure of matrices
whose entries are low-degree polynomials of the input variables. We prove that
for a wide class of planted problems, including refuting random constraint
satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community
detection in stochastic block models, planted clique, and others, eigenvalues
of degree-d matrix polynomials are as powerful as SoS semidefinite programs of
roughly degree d. For such problems it is therefore always possible to match
the guarantees of SoS without solving a large semidefinite program. Using
related ideas on SoS algorithms and low-degree matrix polynomials (and inspired
by recent work on SoS and the planted clique problem by Barak et al.), we prove
new nearly-tight SoS lower bounds for the tensor and sparse principal component
analysis problems. Our lower bounds for sparse principal component analysis are
the first to suggest that going beyond existing algorithms for this problem may
require sub-exponential time.
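The classical spectral baseline that the abstract's low-degree matrix-polynomial algorithms extend can be illustrated on a spiked Wigner model: with signal strength above the spectral threshold, the top eigenvalue of the observed matrix separates from the bulk. The model and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def top_eigenvalue(M):
    """Largest eigenvalue of a symmetric matrix: the basic spectral statistic."""
    return np.linalg.eigvalsh(M)[-1]  # eigvalsh returns eigenvalues in ascending order

def spiked_wigner(n, lam, rng):
    """Planted model: rank-one spike lam * v v^T plus Wigner noise W / sqrt(n)."""
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)        # symmetric, off-diagonal variance 1
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    return W / np.sqrt(n) + lam * np.outer(v, v)

rng = np.random.default_rng(0)
n = 400
# Pure noise: top eigenvalue concentrates near the semicircle edge, 2.
null = top_eigenvalue(spiked_wigner(n, 0.0, rng))
# Above-threshold spike (lam > 1): top eigenvalue pops out, near lam + 1/lam.
planted = top_eigenvalue(spiked_wigner(n, 3.0, rng))
```

Distinguishing null from planted by thresholding the top eigenvalue is the simplest spectral test; the abstract's point is that for harder planted problems one must take the top eigenvalue of matrices whose entries are low-degree polynomials of the input.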
Hypergraphic LP Relaxations for Steiner Trees
We investigate hypergraphic LP relaxations for the Steiner tree problem,
primarily the partition LP relaxation introduced by Koenemann et al. [Math.
Programming, 2009]. Specifically, we are interested in proving upper bounds on
the integrality gap of this LP, and studying its relation to other linear
relaxations. Our results are the following. Structural results: We extend the
technique of uncrossing, usually applied to families of sets, to families of
partitions. As a consequence we show that any basic feasible solution to the
partition LP formulation has sparse support. Although the number of variables
could be exponential, the number of positive variables is at most the number of
terminals. Relations with other relaxations: We show the equivalence of the
partition LP relaxation with other known hypergraphic relaxations. We also show
that these hypergraphic relaxations are equivalent to the well studied
bidirected cut relaxation, if the instance is quasibipartite. Integrality gap
upper bounds: We show an upper bound of sqrt(3) ~ 1.732 on the integrality gap
of these hypergraph relaxations in general graphs. In the special case of
uniformly quasibipartite instances, we show an improved upper bound of 73/60 ~
1.217. By our equivalence theorem, the latter result implies an improved upper
bound for the bidirected cut relaxation as well.
Comment: Revised full version; a shorter version will appear at IPCO 2010
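For orientation alongside these LP-based gap bounds, the classic combinatorial baseline for Steiner tree is the minimum spanning tree of the terminals in the metric closure, whose cost is at most twice optimal. The sketch below is a generic illustration of that baseline, unrelated to the paper's hypergraphic relaxations; the example graph is made up.

```python
def metric_closure(n, edges):
    """All-pairs shortest paths (Floyd-Warshall) on an undirected weighted graph."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def steiner_2apx_cost(n, edges, terminals):
    """2-approximation bound: cost of an MST of the terminals in the metric closure."""
    d = metric_closure(n, edges)
    # Prim's algorithm restricted to the terminal vertices.
    todo = set(terminals)
    start = todo.pop()
    best = {t: d[start][t] for t in todo}
    cost = 0.0
    while todo:
        t = min(todo, key=best.get)
        todo.remove(t)
        cost += best.pop(t)
        for s in todo:
            best[s] = min(best[s], d[t][s])
    return cost

# Star example: terminals 0, 1, 2, each at distance 1 from Steiner vertex 3.
# Terminal MST in the closure costs 2 + 2 = 4, while OPT (using vertex 3) is 3.
edges = [(0, 3, 1.0), (1, 3, 1.0), (2, 3, 1.0)]
cost = steiner_2apx_cost(4, edges, [0, 1, 2])
```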
Sum-of-squares proofs and the quest toward optimal algorithms
In order to obtain the best-known guarantees, algorithms are traditionally
tailored to the particular problem we want to solve. Two recent developments,
the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method,
surprisingly suggest that this tailoring is not necessary and that a single
efficient algorithm could achieve best possible guarantees for a wide range of
different problems.
The Unique Games Conjecture (UGC) is a tantalizing conjecture in
computational complexity, which, if true, will shed light on the complexity of
a great many problems. In particular this conjecture predicts that a single
concrete algorithm provides optimal guarantees among all efficient algorithms
for a large class of computational problems.
The Sum-of-Squares (SOS) method is a general approach for solving systems of
polynomial constraints. This approach is studied in several scientific
disciplines, including real algebraic geometry, proof complexity, control
theory, and mathematical programming, and has found applications in fields as
diverse as quantum information theory, formal verification, game theory and
many others.
We survey some connections that were recently uncovered between the Unique
Games Conjecture and the Sum-of-Squares method. In particular, we discuss new
tools to rigorously bound the running time of the SOS method for obtaining
approximate solutions to hard optimization problems, and how these tools
suggest that the sum-of-squares method may provide new guarantees for many
problems of interest, and possibly even refute the UGC.
Comment: Survey. To appear in proceedings of ICM 201
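The SOS method's core primitive, certifying nonnegativity of a polynomial by exhibiting a positive semidefinite Gram matrix, can be illustrated on a toy univariate example. The certificate below is written down by hand purely for illustration; a real SOS solver would search for such a Gram matrix with a semidefinite program.

```python
import numpy as np

# Toy sum-of-squares certificate for p(x) = x^4 - 2x^2 + 1, in the monomial
# basis b(x) = (1, x, x^2). Writing p(x) = b(x)^T Q b(x) with Q symmetric
# PSD proves p is a sum of squares: factor Q = sum_i lam_i u_i u_i^T, so
# p(x) = sum_i ( sqrt(lam_i) * u_i . b(x) )^2.
Q = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 0.0,  0.0],
              [-1.0, 0.0,  1.0]])

eigvals = np.linalg.eigvalsh(Q)
assert eigvals.min() >= -1e-12   # Q is PSD, so p is SOS (here p = (x^2 - 1)^2)

def p(x):
    return x**4 - 2 * x**2 + 1

def p_from_gram(x):
    # Evaluate b(x)^T Q b(x); must agree with p(x) for Q to be a valid Gram matrix.
    b = np.array([1.0, x, x**2])
    return b @ Q @ b
```

Degree-d SOS proofs generalize this: the SDP searches over Gram matrices indexed by all monomials up to degree d/2, subject to the problem's polynomial constraints.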
Revisiting minimum profit conditions in uniform price day-ahead electricity auctions
We examine the problem of clearing day-ahead electricity market auctions
where each bidder, whether a producer or consumer, can specify a minimum profit
or maximum payment condition constraining the acceptance of a set of bid curves
spanning multiple time periods in locations connected through a transmission
network with linear constraints. Conditions of this type are considered, for
example, in the Spanish and Portuguese day-ahead markets. They help describe
the recovery of the start-up costs of a power plant or, analogously for a
large consumer, a utility reduced by a constant term. A new market model is
proposed with a corresponding MILP formulation for uniform locational price
day-ahead auctions, handling bids with a minimum profit or maximum payment
condition in a uniform and computationally-efficient way. An exact
decomposition procedure with sparse strengthened Benders cuts derived from the
MILP formulation is also proposed. The MILP formulation and the decomposition
procedure are similar to computationally-efficient approaches previously
proposed to handle so-called block bids according to European market rules,
though the clearing conditions may appear different at first sight. Both
solution approaches can also handle both kinds of bids simultaneously, since
block bids with a minimum acceptance ratio, which generalize fully
indivisible block bids, are a special case of the minimum profit (MP) bids
introduced here. We argue in favour of the MP bids by comparing them to
previous models
for minimum profit conditions proposed in the academic literature, and to the
model for minimum income conditions used by the Spanish power exchange OMIE
- …
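To fix ideas before the MILP extensions discussed above, uniform-price clearing in its simplest form is a single-period merit-order match of supply offers against demand bids. The sketch below is a toy: it ignores the transmission network, multi-period bid curves, block bids, and the minimum profit / maximum payment conditions that motivate the paper's MILP, and the bid data are hypothetical.

```python
def clear_uniform_price(supply, demand):
    """Toy single-period uniform-price clearing by merit order.

    supply: list of (price, quantity) offers; demand: list of (price, quantity)
    bids. By convention here the uniform price is set by the marginal accepted
    offer. No network, block bids, or minimum profit conditions are modeled.
    """
    supply = sorted(supply)                   # cheapest offers first
    demand = sorted(demand, reverse=True)     # highest-value bids first
    traded, price = 0.0, None
    i = j = 0
    while i < len(supply) and j < len(demand) and supply[i][0] <= demand[j][0]:
        q = min(supply[i][1], demand[j][1])   # match marginal offer and bid
        traded += q
        price = supply[i][0]                  # marginal accepted offer sets the price
        supply[i] = (supply[i][0], supply[i][1] - q)
        demand[j] = (demand[j][0], demand[j][1] - q)
        if supply[i][1] == 0:
            i += 1
        if demand[j][1] == 0:
            j += 1
    return traded, price

qty, price = clear_uniform_price(
    supply=[(10.0, 50.0), (30.0, 50.0), (70.0, 50.0)],
    demand=[(100.0, 60.0), (40.0, 30.0), (20.0, 40.0)],
)
```

A minimum profit condition would make this non-trivial: an accepted offer whose revenue at the uniform price fails to cover its fixed cost must be rejected, which couples acceptance decisions to the price itself and is what forces the MILP formulation in the paper.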