
    On the Complexity and Approximation of Binary Evidence in Lifted Inference

    Lifted inference algorithms exploit symmetries in probabilistic models to speed up inference. They show impressive performance when calculating unconditional probabilities in relational models, but often resort to non-lifted inference when computing conditional probabilities. The reason is that conditioning on evidence breaks many of the model's symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case. In this paper, we balance this negative result by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show that conditioning on binary evidence with bounded Boolean rank is efficient. This opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization, which we investigate both theoretically and empirically.
    Comment: To appear in Advances in Neural Information Processing Systems 26 (NIPS), Lake Tahoe, USA, December 2013
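
    The Boolean rank mentioned above has a concrete reading: a 0/1 evidence matrix E has Boolean rank r if it equals the Boolean product of an n-by-r and an r-by-m binary matrix, where the inner sum is replaced by logical OR. The NumPy sketch below is our illustration of that notion, not the paper's algorithm; it builds a rank-2 example.

        import numpy as np

        def boolean_product(B, C):
            # Boolean matrix product over {0,1}: the sum over the inner
            # dimension is replaced by logical OR, so an entry is 1 iff
            # some rank-1 component B[:, k] C[k, :] covers it.
            return (B @ C > 0).astype(int)

        # A 4x4 binary evidence matrix with Boolean rank 2.
        B = np.array([[1, 0],
                      [1, 0],
                      [0, 1],
                      [1, 1]])
        C = np.array([[1, 1, 0, 0],
                      [0, 0, 1, 1]])
        E = boolean_product(B, C)
        print(E)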

    On the sum-of-squares degree of symmetric quadratic functions

    We study how well functions over the Boolean hypercube of the form f_k(x) = (|x|-k)(|x|-k-1) can be approximated by sums of squares of low-degree polynomials, obtaining good bounds for the case of approximation in the ℓ_∞-norm as well as in the ℓ_1-norm. We describe three complexity-theoretic applications: (1) a proof that the recent breakthrough lower bound of Lee, Raghavendra, and Steurer on the positive semidefinite extension complexity of the correlation and TSP polytopes cannot be improved further by showing better sum-of-squares degree lower bounds on ℓ_1-approximation of f_k; (2) a proof that Grigoriev's lower bound on the degree of Positivstellensatz refutations for the knapsack problem is optimal, answering an open question from his work; (3) bounds on the query complexity of quantum algorithms whose expected output approximates such functions.
    Comment: 33 pages. Second version fixes some typos and adds references
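
    One step the abstract leaves implicit (our note, not the paper's): f_k is nonnegative on the hypercube, which is what makes its sum-of-squares degree a meaningful quantity. For x ∈ {0,1}^n the Hamming weight |x| = x_1 + … + x_n is an integer, so the two factors of f_k are consecutive integers, and a product of consecutive integers is never negative:

        f_k(x) = (|x|-k)(|x|-k-1) \ge 0, \qquad f_k(x) = 0 \iff |x| \in \{k, k+1\}.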

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity of chip fabrication to its physical limits and the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts, and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
    Comment: v3 33 pages; typos corrected and references added
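
    To make the data-uploading question concrete, here is a minimal sketch (our example, using standard amplitude encoding rather than any scheme specific to this review): pad the classical vector to a power-of-two length and L2-normalize it, so its entries can serve as the amplitudes of a log2(d)-qubit state. Preparing such a state on hardware generally costs on the order of d gates, which is one reason data loading can erase a quantum speed-up.

        import numpy as np

        def amplitude_encode(x):
            # Pad to the next power of two and L2-normalize; the result is the
            # amplitude vector of an n-qubit state with n = log2(len(padded)).
            x = np.asarray(x, dtype=float)
            d = 1 << max(1, (len(x) - 1).bit_length())
            padded = np.zeros(d)
            padded[:len(x)] = x
            norm = np.linalg.norm(padded)
            if norm == 0.0:
                raise ValueError("cannot encode the zero vector")
            return padded / norm

        state = amplitude_encode([3.0, 4.0])    # amplitudes [0.6, 0.8]
        print(state, np.sum(state ** 2))        # squared amplitudes sum to 1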

    Algorithms for Approximate Subtropical Matrix Factorization

    Matrix factorization methods are important tools in data mining and analysis. They can be used for many tasks, ranging from dimensionality reduction to visualization. In this paper we concentrate on the use of matrix factorizations for finding patterns from the data. Rather than using the standard algebra, where the rank-1 components are summed to build the approximation of the original matrix, we use the subtropical algebra, which is an algebra over the nonnegative real values with the summation replaced by the maximum operator. Subtropical matrix factorizations allow "winner-takes-it-all" interpretations of the rank-1 components, revealing different structure than the normal (nonnegative) factorizations. We study the complexity and sparsity of the factorizations, and present a framework for finding low-rank subtropical factorizations. We present two specific algorithms, called Capricorn and Cancer, that are part of our framework. They can be used with data that has been corrupted with different types of noise, and with different error metrics, including the sum of absolute differences, the Frobenius norm, and the Jensen-Shannon divergence. Our experiments show that the algorithms perform well on data that has subtropical structure, and that they can find factorizations that are both sparse and easy to interpret.
    Comment: 40 pages, 9 figures. For the associated source code, see http://people.mpi-inf.mpg.de/~pmiettin/tropical
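
    A minimal sketch (ours; the authors' Capricorn and Cancer implementations are at the URL above) of the subtropical reconstruction the abstract describes: rank-1 components are combined with an elementwise maximum instead of a sum, so each entry of the approximation is claimed by a single "winning" component.

        import numpy as np

        def subtropical_product(B, C):
            # Max-times matrix product: result[i, j] = max_k B[i, k] * C[k, j].
            # The sum over k in the usual product is replaced by a maximum,
            # giving the winner-takes-it-all reading of the rank-1 parts.
            return np.max(B[:, :, None] * C[None, :, :], axis=1)

        B = np.array([[2.0, 0.0],
                      [1.0, 3.0]])
        C = np.array([[1.0, 2.0],
                      [2.0, 1.0]])
        print(subtropical_product(B, C))   # entry (1, 0) is max(1*1, 3*2) = 6
        print(B @ C)                       # standard sum-of-products, for contrast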