
    Clustering, Hamming Embedding, Generalized LSH and the Max Norm

    We study the convex relaxation of clustering and Hamming embedding, focusing on the asymmetric case (co-clustering and asymmetric Hamming embedding). We examine their relationship to LSH as studied by Charikar (2002) and to the max-norm ball, and the differences between the symmetric and asymmetric versions. Comment: 17 pages
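
    Charikar's (2002) LSH for angular similarity is the random-hyperplane scheme: a vector is hashed to the sign pattern of a few Gaussian projections, which is exactly a Hamming embedding whose bit-disagreement rate encodes the angle. A minimal Python sketch for orientation (the function name and parameters are illustrative, not from the paper):

    import numpy as np

    def simhash_bits(X, n_bits, seed=0):
        # Random-hyperplane LSH (Charikar 2002): each output bit is the sign
        # of a projection onto a random Gaussian direction.
        rng = np.random.default_rng(seed)
        H = rng.standard_normal((X.shape[1], n_bits))
        return (X @ H >= 0).astype(np.uint8)

    # Two embeddings agree on a bit with probability 1 - angle(x, y)/pi, so
    # the normalized Hamming distance estimates the angle between x and y.
    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(50), rng.standard_normal(50)
    bits = simhash_bits(np.vstack([x, y]), n_bits=4096)
    est_angle = np.pi * np.mean(bits[0] != bits[1])
    true_angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))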

    Simulating Quantum Correlations with Finite Communication

    Assume Alice and Bob share some bipartite d-dimensional quantum state. A well-known result in quantum mechanics says that by performing two-outcome measurements, Alice and Bob can produce correlations that cannot be obtained locally, i.e., with shared randomness alone. We show that by using only two bits of communication, Alice and Bob can classically simulate any such correlations. All previous protocols for exact simulation required the communication to grow to infinity with the dimension d. Our protocol and analysis are based on a power series method, resembling Krivine's bound on Grothendieck's constant, and on the computation of volumes of spherical tetrahedra. Comment: 19 pages, 3 figures, preliminary version in IEEE FOCS 2007; to appear in SICOMP
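
    For context, a standard fact behind such simulation results (Tsirelson's theorem, not specific to this paper's protocol): the correlations obtainable from two-outcome \pm 1 measurements on a shared bipartite state are exactly inner products of unit vectors,

    E(x,y) = \mathrm{Tr}\big(\rho\,(A_x \otimes B_y)\big) = \langle u_x, v_y \rangle, \qquad \|u_x\| = \|v_y\| = 1,

    so a simulation protocol must output signs a, b \in \{-1,+1\} satisfying \mathbb{E}[ab] = \langle u_x, v_y \rangle using shared randomness plus, here, two bits of communication.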

    Grothendieck inequalities for semidefinite programs with rank constraint

    Grothendieck inequalities are fundamental inequalities that are frequently used in many areas of mathematics and computer science. They can be interpreted as upper bounds for the integrality gap between two optimization problems: a difficult semidefinite program with a rank-1 constraint and its easy semidefinite relaxation where the rank constraint is dropped. For instance, the integrality gap of the Goemans-Williamson approximation algorithm for MAX CUT can be seen as a Grothendieck inequality. In this paper we consider Grothendieck inequalities for ranks greater than 1 and give two applications: approximating ground states in the n-vector model in statistical mechanics, and XOR games in quantum information theory. Comment: 22 pages
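
    To make the "rank-1 program vs. relaxation" gap concrete, here is a minimal Python sketch of the Goemans-Williamson pipeline for MAX CUT mentioned above: solve the SDP with the rank constraint dropped, then round back to a rank-1 (\pm 1) solution with a random hyperplane. It assumes cvxpy with its default SDP-capable solver; the function name and the toy weight matrix are illustrative.

    import cvxpy as cp
    import numpy as np

    def goemans_williamson_cut(W, seed=0):
        # SDP relaxation of MAX CUT: drop the rank-1 constraint X = x x^T,
        # keeping only positive semidefiniteness and a unit diagonal.
        n = W.shape[0]
        X = cp.Variable((n, n), PSD=True)
        cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                   [cp.diag(X) == 1]).solve()
        # Factor X = V V^T to recover unit vectors, then round to +-1 values
        # (a rank-1 solution) via a random hyperplane.
        V = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
        r = np.random.default_rng(seed).standard_normal(n)
        return np.sign(V @ r)

    W = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    print(goemans_williamson_cut(W))  # a +-1 vertex labeling, i.e. a cut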

    Computing the partition function of the Sherrington-Kirkpatrick model is hard on average

    We establish the average-case hardness of exactly computing the partition function of the Sherrington-Kirkpatrick model of spin glasses with Gaussian couplings and a random external field. In particular, we establish that unless P = #P, there is no polynomial-time algorithm that exactly computes the partition function on average. This is done by showing that if a polynomial-time algorithm exactly computes the partition function on an inverse-polynomial fraction (1/n^{O(1)}) of all inputs, then there is a polynomial-time algorithm that exactly computes the partition function on all inputs with high probability, yielding P = #P. The computational model we adopt is finite-precision arithmetic, where the algorithmic inputs are first truncated to a certain level N of digital precision. The ingredients of our proof include: the random and downward self-reducibility of the partition function with random external field; an argument of Cai et al. (1999) for establishing the average-case hardness of computing the permanent of a matrix; a list-decoding algorithm of Sudan (1996) for reconstructing polynomials that agree with a given list of numbers at sufficiently many points; and the near-uniformity of the log-normal distribution modulo a large prime p. To the best of our knowledge, our result is the first to establish provable hardness for a model arising in the field of spin glasses. Furthermore, we extend our result to the same problem under a different, real-valued computational model, e.g., using a Blum-Shub-Smale machine (Blum et al., 1988) operating over real-valued inputs. Comment: 31 pages
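
    For concreteness, the object whose exact computation is shown hard is the partition function below; a brute-force Python evaluation (exponential in n, shown only to fix notation, with illustrative variable names) is:

    import itertools
    import numpy as np

    def sk_partition_function(J, h, beta=1.0):
        # Z = sum over sigma in {-1,+1}^n of
        #     exp(beta * (sum_{i<j} J_ij s_i s_j + sum_i h_i s_i)).
        n = len(h)
        Z = 0.0
        for sigma in itertools.product((-1, 1), repeat=n):
            s = np.array(sigma)
            Z += np.exp(beta * (s @ np.triu(J, 1) @ s + h @ s))
        return Z

    rng = np.random.default_rng(0)
    n = 10
    J = rng.standard_normal((n, n)) / np.sqrt(n)  # Gaussian couplings
    h = rng.standard_normal(n)                    # random external field
    print(sk_partition_function(J, h))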

    On Quadratic Programming with a Ratio Objective

    Quadratic Programming (QP) is the well-studied problem of maximizing, over {-1,1} values, the quadratic form \sum_{i \ne j} a_{ij} x_i x_j. QP captures many known combinatorial optimization problems, and, assuming the Unique Games Conjecture, semidefinite programming techniques give optimal approximation algorithms. We extend this body of work by initiating the study of Quadratic Programming problems where the variables take values in the domain {-1,0,1}. The specific problems we study are QP-Ratio: \max_{\{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i x_i^2}, and Normalized QP-Ratio: \max_{\{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i d_i x_i^2}, where d_i = \sum_j |a_{ij}|. We consider an SDP relaxation obtained by adding constraints to the natural eigenvalue (or SDP) relaxation for this problem. Using this, we obtain a \tilde{O}(n^{1/3})-approximation algorithm for QP-Ratio. We also obtain a \tilde{O}(n^{1/4})-approximation for bipartite graphs, and better algorithms for special cases. As with other problems with ratio objectives (e.g., uniform sparsest cut), it seems difficult to obtain inapproximability results based on P != NP. We give two results indicating that QP-Ratio is hard to approximate to within any constant factor. We also give a natural distribution on instances of QP-Ratio for which an n^\epsilon approximation (for \epsilon roughly 1/10) seems out of reach of current techniques.
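
    To pin down the objective, here is a brute-force Python evaluation of QP-Ratio over {-1,0,1}^n (feasible only for tiny n; the names are illustrative, and the paper's algorithms of course avoid this enumeration):

    import itertools
    import numpy as np

    def qp_ratio_brute_force(A):
        # max over nonzero x in {-1,0,1}^n of
        #     (sum_{i != j} a_ij x_i x_j) / (sum_i x_i^2).
        n = A.shape[0]
        off = A - np.diag(np.diag(A))  # zero the diagonal: only i != j terms
        best = -np.inf
        for x in itertools.product((-1, 0, 1), repeat=n):
            x = np.array(x)
            support = x @ x            # number of nonzero coordinates
            if support > 0:
                best = max(best, (x @ off @ x) / support)
        return best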

    Tight bounds for parameterized complexity of Cluster Editing

    In the Correlation Clustering problem, also known as Cluster Editing, we are given an undirected graph G and a positive integer k; the task is to decide whether G can be transformed into a cluster graph, i.e., a disjoint union of cliques, by changing at most k adjacencies, that is, by adding or deleting at most k edges. The motivation for the problem stems from various tasks in computational biology (Ben-Dor et al., Journal of Computational Biology 1999) and machine learning (Bansal et al., Machine Learning 2004). Although in general Correlation Clustering is APX-hard (Charikar et al., FOCS 2003), the version of the problem where the number of cliques may not exceed a prescribed constant p admits a PTAS (Giotis and Guruswami, SODA 2006). We study the parameterized complexity of Correlation Clustering with this restriction on the number of cliques to be created. We give an algorithm that, in time O(2^{O(sqrt{pk})} + n + m), decides whether a graph G on n vertices and m edges can be transformed into a cluster graph with exactly p cliques by changing at most k adjacencies. We complement these algorithmic findings with the following, surprisingly tight, lower bound on the asymptotic behavior of our algorithm. We show that, unless the Exponential Time Hypothesis (ETH) fails, for any constant 0 <= sigma <= 1 there is p = Theta(k^sigma) such that no algorithm decides in time 2^{o(sqrt{pk})} n^{O(1)} whether an n-vertex graph G can be transformed into a cluster graph with at most p cliques by changing at most k adjacencies. Thus, our upper and lower bounds provide an asymptotically tight analysis of the multivariate parameterized complexity of the problem for the whole range of values of p, from constant to a linear function of k.
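
    A structural fact that underlies Cluster Editing: a graph is a cluster graph iff no vertex has two non-adjacent neighbors, i.e., there is no induced path on three vertices. A minimal Python check of the target property (illustrative; this is the feasibility test, not the paper's subexponential algorithm):

    import itertools
    import networkx as nx

    def is_cluster_graph(G):
        # A graph is a disjoint union of cliques iff it has no induced P3:
        # u - v - w with uv and vw edges but uw a non-edge.
        for v in G:
            for u, w in itertools.combinations(G[v], 2):
                if not G.has_edge(u, w):
                    return False
        return True

    # Cluster Editing asks whether <= k edge additions/deletions make this true.
    G = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5)])
    print(is_cluster_graph(G))  # True: a triangle plus a disjoint edge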