Optimal Column-Based Low-Rank Matrix Reconstruction
We prove that for any real-valued matrix $A \in \mathbb{R}^{m \times n}$ and
positive integers $r \ge k$, there is a subset of $r$ columns of $A$ such that
projecting $A$ onto their span gives a $\sqrt{(r+1)/(r-k+1)}$-approximation
to the best rank-$k$ approximation of $A$ in Frobenius norm. We show that the
trade-off we achieve between the number of columns and the approximation ratio
is optimal up to lower-order terms. Furthermore, there is a deterministic
algorithm to find such a subset of columns that runs in $O(r n m^{\omega} \log m)$ arithmetic operations, where $\omega$ is the exponent of matrix
multiplication. We also give a faster randomized algorithm that runs in $O(r n m^{2})$ arithmetic operations.
Comment: 8 pages
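As a quick numerical illustration of the guarantee (a hypothetical sketch using naive random column selection, not the paper's deterministic algorithm), one can compare the Frobenius error of projecting onto a column subset against the best rank-$k$ error from the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, r = 40, 60, 3, 8

# A low-rank-plus-noise test matrix.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((m, n))

# Best rank-k approximation error via the SVD (the baseline in the theorem).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
best_rank_k_err = np.linalg.norm(A - U[:, :k] * s[:k] @ Vt[:k], ord="fro")

# Project A onto the span of r randomly chosen columns (a naive stand-in
# for the paper's column selection).
cols = rng.choice(n, size=r, replace=False)
Q, _ = np.linalg.qr(A[:, cols])
proj_err = np.linalg.norm(A - Q @ (Q.T @ A), ord="fro")

ratio = proj_err / best_rank_k_err
bound = np.sqrt((r + 1) / (r - k + 1))  # the paper's optimal trade-off
print(ratio, bound)
```

Note that the theorem only asserts that *some* $r$-column subset achieves the ratio $\sqrt{(r+1)/(r-k+1)}$; an unlucky random choice may exceed it.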
How to Round Subspaces: A New Spectral Clustering Algorithm
A basic problem in spectral clustering is the following: if a solution
obtained from the spectral relaxation is close to an integral solution, is it
possible to find this integral solution even though they might be in completely
different bases? In this paper, we propose a new spectral clustering algorithm.
It can recover a $k$-partition such that the subspace corresponding to the span
of its indicator vectors is $O(\sqrt{\mathrm{OPT}})$ close to the original subspace in
spectral norm, with $\mathrm{OPT}$ being the minimum possible distance ($\mathrm{OPT} \le 1$ always).
Moreover, our algorithm does not impose any restriction on the cluster sizes.
Previously, no algorithm was known that could find a $k$-partition closer than
$O(k \cdot \mathrm{OPT})$.
We present two applications of our algorithm. The first finds a disjoint
union of bounded-degree expanders which approximate a given graph in spectral
norm. The second approximates the sparsest $k$-partition in a graph
where each cluster has expansion at most $\phi_k$, provided $\lambda_{k+1} \ge \Omega(k\,\phi_k)$, where $\lambda_{k+1}$ is the $(k+1)$-st smallest eigenvalue of the
Laplacian matrix. This significantly improves upon previous algorithms,
which required $\lambda_{k+1} \ge \mathrm{poly}(k)\,\phi_k$.
Comment: Appeared in SODA 201
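As a minimal illustration of spectral clustering on an easy instance (classical sign-based rounding of the Fiedler vector for $k = 2$, not the subspace-rounding algorithm proposed in the paper):

```python
import numpy as np

# Two 5-cliques joined by a single bridge edge; the planted 2-partition
# is {0..4} versus {5..9}.
n = 10
A = np.zeros((n, n))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[4, 5] = A[5, 4] = 1  # bridge edge

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
_, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]               # eigenvector of the 2nd smallest eigenvalue
labels = (fiedler > 0).astype(int)    # sign rounding -> indicator of a 2-partition
print(labels)
```

On this instance the sign pattern of the Fiedler vector recovers the planted partition exactly; the point of the paper is to get guarantees for general $k$ and arbitrary cluster sizes.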
Improved Inapproximability Results for Maximum k-Colorable Subgraph
We study the maximization version of the fundamental graph coloring problem.
Here the goal is to color the vertices of a k-colorable graph with k colors so
that a maximum fraction of edges are properly colored (i.e. their endpoints
receive different colors). A random k-coloring properly colors an expected
fraction 1-1/k of edges. We prove that given a graph promised to be
k-colorable, it is NP-hard to find a k-coloring that properly colors more than
a fraction 1-O(1/k) of edges. Previously, only a hardness factor of 1-O(1/k^2)
was known. Our result pins down the correct asymptotic dependence of the
approximation factor on k. Along the way, we prove that approximating the
Maximum 3-colorable subgraph problem within a factor greater than 32/33 is
NP-hard. Using semidefinite programming, it is known that one can do better
than a random coloring and properly color a fraction 1 - 1/k + 2 ln k / k^2 of edges
in polynomial time. We show that, assuming the 2-to-1 conjecture, it is hard to
properly color (using k colors) more than a fraction 1-1/k + O(ln k/ k^2) of
edges of a k-colorable graph.
Comment: 16 pages, 2 figures
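The baseline above, that a uniformly random k-coloring properly colors an expected 1 - 1/k fraction of edges, is easy to check empirically (a small Monte Carlo sketch on an arbitrary random graph):

```python
import random

random.seed(1)
k, n = 4, 60
# A random graph; the graph need not be k-colorable for the
# expectation 1 - 1/k to hold.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.2]

trials = 2000
total = 0.0
for _ in range(trials):
    color = [random.randrange(k) for _ in range(n)]
    good = sum(1 for i, j in edges if color[i] != color[j])
    total += good / len(edges)

print(total / trials)  # close to 1 - 1/k = 0.75
```

Each edge is properly colored with probability exactly $1 - 1/k$, so the average over trials concentrates around 0.75 here.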
Approximating Non-Uniform Sparsest Cut via Generalized Spectra
We give an approximation algorithm for non-uniform sparsest cut with the
following guarantee: for any $\epsilon, \delta \in (0,1)$, given cost and demand
graphs with edge weights $C$ and $D$ respectively, we can find a set
$T \subseteq V$ with $\frac{C(T, V \setminus T)}{D(T, V \setminus T)}$ at most
$\frac{1+\epsilon}{\delta}$ times the optimal non-uniform sparsest cut value,
in time $2^{r/(\delta\epsilon)}\,\mathrm{poly}(n)$ provided $\lambda_r \ge \Phi^*/(1-\delta)$. Here $\lambda_r$ is the $r$-th smallest generalized
eigenvalue of the Laplacian matrices of the cost and demand graphs; $C(T, V \setminus T)$ (resp. $D(T, V \setminus T)$) is the weight of edges crossing the
$(T, V \setminus T)$ cut in the cost (resp. demand) graph; and $\Phi^*$ is the
sparsity of the optimal cut. In words, we show that the non-uniform sparsest
cut problem is easy when the generalized spectrum grows moderately fast. To the
best of our knowledge, there were no results based on higher-order spectra for
non-uniform sparsest cut prior to this work.
Even for uniform sparsest cut, the quantitative aspects of our result are
somewhat stronger than those of previous methods. Similar results hold for other
expansion measures like edge expansion, normalized cut, and conductance, with
the $r$-th smallest eigenvalue of the normalized Laplacian playing the role of
$\lambda_r$ in the latter two cases.
Our proof is based on an $\ell_1$-embedding of vectors from a semi-definite program
from the Lasserre hierarchy. The embedded vectors are then rounded to a cut
using standard threshold rounding. We hope that the ideas connecting
$\ell_1$-embeddings to Lasserre SDPs will find other applications. Another
aspect of the analysis is the adaptation of the column selection paradigm from
our earlier work on rounding Lasserre SDPs [GS11] to pick a set of edges rather
than vertices. This feature is important in order to extend the algorithms to
non-uniform sparsest cut.
Comment: 16 pages
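The generalized eigenvalues $\lambda_r$ and the cut quantities above can be computed directly on a toy instance. The sketch below (an illustration of the definitions, not the paper's algorithm) uses a cycle as the cost graph and a complete graph as the demand graph, deflating the shared all-ones kernel before whitening with a Cholesky factor:

```python
import numpy as np
from itertools import combinations

def laplacian(n, edges):
    """Weighted graph Laplacian from (i, j, w) triples."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

n = 6
Lc = laplacian(n, [(i, (i + 1) % n, 1.0) for i in range(n)])                 # cost: 6-cycle
Ld = laplacian(n, [(i, j, 1.0) for i in range(n) for j in range(i + 1, n)])  # demand: K6

# Both Laplacians annihilate the all-ones vector; restrict to its complement.
ones = np.ones((n, 1)) / np.sqrt(n)
V = np.linalg.svd(np.eye(n) - ones @ ones.T)[0][:, : n - 1]
A, B = V.T @ Lc @ V, V.T @ Ld @ V

# Generalized eigenvalues of the pencil (Lc, Ld) via Cholesky whitening.
W = np.linalg.inv(np.linalg.cholesky(B))
gen_eigs = np.sort(np.linalg.eigvalsh(W @ A @ W.T))

# Brute-force the optimal non-uniform sparsity C(T, V\T) / D(T, V\T).
def cut(L, T):
    x = np.zeros(n); x[list(T)] = 1.0
    return x @ L @ x   # quadratic form = weight of edges crossing (T, V\T)

phi_star = min(cut(Lc, T) / cut(Ld, T)
               for size in range(1, n) for T in combinations(range(n), size))
print(gen_eigs[0], phi_star)
```

On this instance the smallest generalized eigenvalue ($1/6$) lower-bounds the optimal sparsity ($2/9$), the direction in which the spectrum always constrains $\Phi^*$.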
Faster SDP hierarchy solvers for local rounding algorithms
Convex relaxations based on different hierarchies of linear/semi-definite
programs have been used recently to devise approximation algorithms for various
optimization problems. The approximation guarantee of these algorithms improves
with the number of {\em rounds} in the hierarchy, though the complexity of
solving (or even writing down the solution for) the $r$-th level program grows
as $n^{\Omega(r)}$, where $n$ is the input size.
In this work, we observe that many of these algorithms are based on {\em
local} rounding procedures that only use a small part of the SDP solution (of
size $n^{O(1)} 2^{O(r)}$ instead of $n^{O(r)}$). We give an algorithm to
find the requisite portion in time polynomial in its size. The challenge in
achieving this is that the required portion of the solution is not fixed a
priori but depends on other parts of the solution, sometimes in a complicated
iterative manner.
Our solver leads to $2^{O(r)}\,\mathrm{poly}(n)$ time algorithms to obtain the same
guarantees in many cases as the earlier $n^{O(r)}$ time algorithms based on $r$
rounds of the Lasserre hierarchy. In particular, guarantees based on $O(\log n)$ rounds can be realized in polynomial time.
We develop and describe our algorithm in a fairly general abstract framework.
The main technical tool in our work, which might be of independent interest in
convex optimization, is an efficient ellipsoid algorithm based separation
oracle for convex programs that can output a {\em certificate of infeasibility
with restricted support}. This is used in a recursive manner to find a sequence
of consistent points in nested convex bodies that "fools" local rounding
algorithms.
Comment: 30 pages, 8 figures
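A back-of-envelope comparison of the two sizes (assuming, purely as an illustration, one solution vector per vertex subset of size at most $r$ for the full level-$r$ program, versus a locally used portion of roughly $n \cdot 2^r$ vectors):

```python
from math import comb

n = 1000
for r in (2, 5, 10):
    full = sum(comb(n, i) for i in range(r + 1))  # n^{O(r)}: subsets of size <= r
    local = n * 2 ** r                            # n^{O(1)} 2^{O(r)} scale
    print(f"r={r}: full size ~{full:.3e}, local portion ~{local:.3e}")
```

Already at $r = 10$ the full solution is astronomically larger than the local portion, which is what makes a solver that finds only the requisite portion worthwhile.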
Towards a better approximation for sparsest cut?
We give a new $(1+\epsilon)$-approximation for the sparsest cut problem on graphs
where small sets expand significantly more than the sparsest cut (sets of size
$n/r$ expand by a factor $\sqrt{\log n \log r}$ bigger, for some small $r$; this
condition holds for many natural graph families). We give two different
algorithms. One involves Guruswami-Sinop rounding on the level-$r$ Lasserre
relaxation. The other is combinatorial and involves a new notion called {\em
Small Set Expander Flows} (inspired by the {\em expander flows} of ARV), which
we show exist in the input graph. Both algorithms run in time $2^{O(r)}\,\mathrm{poly}(n)$. We also show similar approximation algorithms in graphs with
genus $g$ with an analogous local expansion condition. This is the first
algorithm we know of that achieves a $(1+\epsilon)$-approximation on such a general
family of graphs.
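The local expansion condition can be checked by brute force on a small example. In the toy sketch below, two cliques joined by a bridge give a graph whose sparsest cut is tiny while every small set expands an order of magnitude more:

```python
import numpy as np
from itertools import combinations

# Two 6-cliques joined by one bridge edge.
n = 12
A = np.zeros((n, n))
for block in (range(6), range(6, 12)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[5, 6] = A[6, 5] = 1

def sparsity(T):
    """Uniform sparsity: crossing weight / (|T| * |V \\ T|)."""
    T = set(T)
    cross = sum(A[i, j] for i in T for j in range(n) if j not in T)
    return cross / (len(T) * (n - len(T)))

cuts = [T for size in range(1, n // 2 + 1)
        for T in combinations(range(n), size)]
phi_star = min(sparsity(T) for T in cuts)                 # the bridge cut
small = min(sparsity(T) for T in cuts if len(T) <= 3)     # sets of size <= n/r, r = 4
print(phi_star, small / phi_star)
```

Here the sparsest cut (the bridge, sparsity $1/36$) is twelve times sparser than the best set of size at most $n/4$, the kind of gap the condition above asks for.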
The Hardness of Approximation of Euclidean k-means
The Euclidean $k$-means problem is a classical problem that has been
extensively studied in the theoretical computer science, machine learning and
computational geometry communities. In this problem, we are given a set of $n$
points in Euclidean space $\mathbb{R}^d$, and the goal is to choose $k$ centers in $\mathbb{R}^d$
so that the sum of squared distances of each point to its nearest center
is minimized. The best approximation algorithms for this problem include a
polynomial time constant factor approximation for general $k$ and a
$(1+\epsilon)$-approximation which runs in time $\mathrm{poly}(n)\,2^{\mathrm{poly}(k/\epsilon)}$. At
the other extreme, the only known computational complexity result for this
problem is NP-hardness [ADHP'09]. The main difficulty in obtaining hardness
results stems from the Euclidean nature of the problem, and the fact that any
point in $\mathbb{R}^d$ can be a potential center. This gap in understanding left open
the intriguing possibility that the problem might admit a PTAS for all $k$ and $d$.
In this paper we provide the first hardness of approximation result for the
Euclidean $k$-means problem. Concretely, we show that there exists a constant
$\epsilon > 0$ such that it is NP-hard to approximate the $k$-means objective
to within a factor of $(1+\epsilon)$. We show this via an efficient reduction
from the vertex cover problem on triangle-free graphs: given a triangle-free
graph, the goal is to choose the fewest number of vertices which are incident
on all the edges. Additionally, we give a proof that the current best hardness
results for vertex cover can be carried over to triangle-free graphs. To show
this we transform $G$, a known hard vertex cover instance, by taking a graph
product with a suitably chosen graph $H$, and show that the size of the
(normalized) maximum independent set is almost exactly preserved in the product
graph using a spectral analysis, which might be of independent interest.
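The objective being shown hard can be stated in a few lines (a minimal sketch of the $k$-means cost on a hypothetical toy instance):

```python
import numpy as np

def kmeans_cost(points, centers):
    """k-means objective: sum of squared distances to the nearest center."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

# Four points in R^2 forming two well-separated pairs (k = 2).
points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
good = np.array([[0.0, 0.5], [10.0, 0.5]])   # one center per pair
bad = np.array([[5.0, 0.0], [5.0, 1.0]])     # both centers between the pairs
print(kmeans_cost(points, good), kmeans_cost(points, bad))
```

The hardness result says that even distinguishing near-optimal center sets from ones whose cost is a $(1+\epsilon)$ factor larger is NP-hard in general dimension.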
Spectrally Robust Graph Isomorphism
We initiate the study of spectral generalizations of the graph isomorphism problem.
b) The Spectral Graph Dominance (SGD) problem: On input of two graphs $G$ and $H$, does there exist a permutation $\pi$ such that $G \preceq \pi(H)$?
c) The Spectrally Robust Graph Isomorphism ($\kappa$-SRGI) problem: On input of two graphs $G$ and $H$, find the smallest number $\kappa$ over all permutations $\pi$ such that $\pi(H) \preceq G \preceq \kappa c\, \pi(H)$ for some $c$. SRGI is a natural formulation of the network alignment problem that has various applications, most notably in computational biology.
Here $G \preceq c H$ means that for all vectors $x$ we have $x^T L_G x \le c\, x^T L_H x$, where $L_G$ is the Laplacian of $G$.
We prove NP-hardness for SGD. We also present a $\kappa^3$-approximation algorithm for SRGI for the case when both $G$ and $H$ are bounded-degree trees. The algorithm runs in polynomial time when $\kappa$ is a constant.
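Since $G \preceq c H$ is a quadratic-form inequality, the smallest admissible $c$ is a largest generalized eigenvalue. A small sketch (an illustration of the definition, not the paper's tree algorithm), comparing a 5-cycle against $K_5$ under the identity permutation:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# G preceq c*H iff x^T L_G x <= c x^T L_H x for all x, i.e. the largest
# generalized eigenvalue of the pencil (L_G, L_H) is at most c (computed
# on the complement of the all-ones kernel, assuming H is connected).
n = 5
L_H = laplacian(n, [(i, j) for i in range(n) for j in range(i + 1, n)])  # K5
L_G = laplacian(n, [(i, (i + 1) % n) for i in range(n)])                 # C5

ones = np.ones((n, 1)) / np.sqrt(n)
V = np.linalg.svd(np.eye(n) - ones @ ones.T)[0][:, : n - 1]
W = np.linalg.inv(np.linalg.cholesky(V.T @ L_H @ V))
c = np.linalg.eigvalsh(W @ (V.T @ L_G @ V) @ W.T).max()
print(c)  # smallest c with C5 preceq c * K5

# Sanity check of the quadratic-form definition on random vectors.
x = np.random.default_rng(0).standard_normal((n, 100))
lhs = np.einsum('ij,jk,ki->i', x.T, L_G, x)
rhs = np.einsum('ij,jk,ki->i', x.T, L_H, x)
assert (lhs <= c * rhs + 1e-9).all()
```

For trees, the paper's point is that this pencil comparison can be approximated within $\kappa^3$ even though the permutation $\pi$ must be searched over as well.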