Efficient Rounding for the Noncommutative Grothendieck Inequality
The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a constant-factor polynomial-time approximation algorithm for an optimization problem that generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.
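For context on the last application: the classical (single-matrix) orthogonal Procrustes problem asks for an orthogonal $Q$ minimizing $\|A - BQ\|_F$, and has a well-known closed-form solution via the SVD of $B^\top A$. The sketch below shows that classical baseline (the matrix names and dimensions are illustrative, not taken from the paper); the generalized problem treated in the paper is what requires the new rounding procedure.

```python
import numpy as np

def procrustes(A, B):
    """Classical orthogonal Procrustes: find orthogonal Q minimizing ||A - B Q||_F.

    The well-known closed-form solution is Q = U V^T, where B^T A = U S V^T
    is a singular value decomposition.
    """
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

# Tiny self-check: recover a random orthogonal matrix from noiseless data.
rng = np.random.default_rng(0)
Q_true, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B = rng.standard_normal((10, 4))
A = B @ Q_true
Q_est = procrustes(A, B)
assert np.allclose(A, B @ Q_est)
```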
Failure of the trilinear operator space Grothendieck theorem
We give a counterexample to a trilinear version of the operator space Grothendieck theorem. In particular, we show that for trilinear forms on $\ell_\infty$, the ratio of the symmetrized completely bounded norm and the jointly completely bounded norm is in general unbounded, answering a question of Pisier. The proof is based on a non-commutative version of the generalized von Neumann inequality from additive combinatorics.
Disentangling Orthogonal Matrices
Motivated by a certain molecular reconstruction methodology in cryo-electron
microscopy, we consider the problem of solving a linear system with two unknown
orthogonal matrices, which is a generalization of the well-known orthogonal
Procrustes problem. We propose an algorithm based on a semi-definite
programming (SDP) relaxation, and give a theoretical guarantee for its
performance. Both theoretically and empirically, the proposed algorithm
performs better than the naïve approach of solving the linear system
directly without the orthogonal constraints. We also consider the
generalization to linear systems with more than two unknown orthogonal
matrices.
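The naïve baseline mentioned above can be made concrete. The following sketch is a generic illustration with a single unknown orthogonal matrix, which simplifies the paper's two-matrix setting (problem sizes and names are assumptions): solve the linear system by unconstrained least squares, then project the result onto the orthogonal group via SVD, the standard nearest-orthogonal-matrix projection.

```python
import numpy as np

def nearest_orthogonal(X):
    """Project X onto the orthogonal group (nearest in Frobenius norm),
    i.e. the orthogonal factor U V^T of the polar decomposition of X."""
    U, _, Vt = np.linalg.svd(X)
    return U @ Vt

def naive_solve(A, B):
    """Naive baseline: unconstrained least squares for A X = B,
    followed by projection onto the orthogonal group."""
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return nearest_orthogonal(X)

# Noisy demo with a single unknown orthogonal matrix.
rng = np.random.default_rng(1)
Q_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = rng.standard_normal((50, 5))
B = A @ Q_true + 0.01 * rng.standard_normal((50, 5))
Q_hat = naive_solve(A, B)
print(np.linalg.norm(Q_hat - Q_true))  # small: projection repairs the LS estimate
```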
Algorithms and Hardness for Robust Subspace Recovery
We consider a fundamental problem in unsupervised learning called
\emph{subspace recovery}: given a collection of points in $\mathbb{R}^n$, if many but not necessarily all of these points are contained in a $d$-dimensional subspace $T$, can we find it? The points contained in $T$ are called {\em inliers} and the remaining points are {\em outliers}. This problem
has received considerable attention in computer science and in statistics. Yet
efficient algorithms from computer science are not robust to {\em adversarial}
outliers, and the estimators from robust statistics are hard to compute in high
dimensions.
Are there algorithms for subspace recovery that are both robust to outliers
and efficient? We give an algorithm that finds $T$ when it contains more than a $d/n$ fraction of the points. Hence, for say $d = n/2$ this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find $T$ when the fraction of errors is any larger, thus giving evidence that our estimator is an {\em optimal} compromise between efficiency and robustness.
As it turns out, this basic problem has a surprising number of connections to
other areas including small set expansion, matroid theory and functional
analysis that we make use of here.
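To make the inlier/outlier setup concrete, here is a generic RANSAC-style baseline. This is emphatically not the paper's algorithm, and in particular it is not robust to adversarial outliers in the paper's sense; dimensions, trial count, and tolerance below are illustrative assumptions. It repeatedly fits a candidate $d$-dimensional subspace to a random sample of $d$ points via SVD and keeps the candidate with the most inliers.

```python
import numpy as np

def ransac_subspace(X, d, trials=200, tol=1e-6, rng=None):
    """Generic RANSAC-style baseline for subspace recovery (NOT the paper's
    algorithm): sample d points, span a candidate subspace, count inliers."""
    rng = np.random.default_rng(rng)
    m, n = X.shape
    best_basis, best_inliers = None, -1
    for _ in range(trials):
        sample = X[rng.choice(m, size=d, replace=False)]
        # Orthonormal basis for the span of the d sampled points.
        U, _, _ = np.linalg.svd(sample.T, full_matrices=False)
        basis = U[:, :d]
        # Residual of each point after projecting onto the candidate subspace.
        resid = X - (X @ basis) @ basis.T
        inliers = int(np.sum(np.linalg.norm(resid, axis=1) < tol))
        if inliers > best_inliers:
            best_basis, best_inliers = basis, inliers
    return best_basis, best_inliers

# Demo: 60% inliers lying on a random 3-dimensional subspace of R^10.
rng = np.random.default_rng(2)
B = np.linalg.qr(rng.standard_normal((10, 3)))[0]
inl = rng.standard_normal((60, 3)) @ B.T
out = rng.standard_normal((40, 10))
basis, count = ransac_subspace(np.vstack([inl, out]), d=3)
print(count)  # close to 60 once some trial samples d inliers
```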
Semidefinite descriptions of the convex hull of rotation matrices
We study the convex hull of $\mathrm{SO}(n)$, thought of as the set of $n \times n$ orthogonal matrices with unit determinant, from the point of view of semidefinite programming. We show that the convex hull of $\mathrm{SO}(n)$ is doubly
spectrahedral, i.e. both it and its polar have a description as the
intersection of a cone of positive semidefinite matrices with an affine
subspace. Our spectrahedral representations are explicit, and are of minimum
size, in the sense that there are no smaller spectrahedral representations of
these convex bodies.
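For intuition (this example is standard and not specific to the paper's representations), the smallest case $n = 2$ already illustrates spectrahedrality: rotation matrices in the plane are parametrized by a point on the unit circle, so the convex hull of $\mathrm{SO}(2)$ is a disk, which admits a $2 \times 2$ semidefinite description.

```latex
\[
\mathrm{conv}\,\mathrm{SO}(2)
  = \left\{ \begin{pmatrix} a & -b \\ b & a \end{pmatrix}
      : a^2 + b^2 \le 1 \right\}
  = \left\{ \begin{pmatrix} a & -b \\ b & a \end{pmatrix}
      : \begin{pmatrix} 1+a & b \\ b & 1-a \end{pmatrix} \succeq 0 \right\},
\]
since the $2 \times 2$ matrix on the right has trace $2$ and is therefore
positive semidefinite exactly when its determinant
$(1+a)(1-a) - b^2 = 1 - a^2 - b^2$ is nonnegative, i.e. when $a^2 + b^2 \le 1$.
```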