
    Exponential Time Paradigms Through the Polynomial Time Lens

    We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial-time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as branching and dynamic programming, and to shed light on the true complexity of various problems. As one instantiation, we model branching using the notion of witness compression, i.e., reducibility to the circuit satisfiability problem parameterized by the number of variables of the circuit. We show that this is equivalent to the previously studied notion of 'OPP-algorithms', and provide a technique for proving conditional lower bounds for witness compressions via a constructive variant of AND-composition, a notion previously studied in the theory of preprocessing. In the context of parameterized complexity we use this to show that problems such as Pathwidth, Treewidth, and Independent Set parameterized by pathwidth do not have witness compression unless NP ⊆ coNP/poly. Since these problems admit fast fixed-parameter tractable algorithms via dynamic programming, this shows that dynamic programming can be stronger than branching, under a standard complexity hypothesis. Our approach has applications outside parameterized complexity as well: for example, we show that if a polynomial-time algorithm outputs a maximum independent set of a given planar graph on n vertices with probability exp(-n^{1-epsilon}) for some epsilon > 0, then NP ⊆ coNP/poly. This negative result dims the prospects for one very natural approach to sub-exponential-time algorithms for problems on planar graphs. As two further, more exploratory illustrations of our approach, we model algorithms based on inclusion-exclusion or group algebras via the notion of "parity compression", and we model a subclass of dynamic programming algorithms with the notion of "disjunctive dynamic programming". These models give us a way to naturally classify various parameterized problems with FPT algorithms. In the case of the dynamic programming model, we show that Independent Set parameterized by pathwidth is complete for this model.
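    The witness-compression view of branching can be made concrete with the textbook randomized branching algorithm for Vertex Cover. The sketch below is a minimal illustration under stated assumptions, not code from the paper; the problem choice, function names and repetition bound are ours. One random branch runs in polynomial time and succeeds with probability at least 2^-k, so the at most k random bits act as a compressed witness.

```python
import math
import random

def random_branch_vertex_cover(edges, k):
    """One polynomial-time random branch of the classic 2^k Vertex Cover algorithm.

    Repeatedly picks a still-uncovered edge and guesses one of its endpoints.
    If a vertex cover of size <= k exists, this returns one with probability
    at least 2^-k over the at most k random choices; those k bits are the
    compressed witness in the OPP / witness-compression view of branching.
    """
    cover = set()
    uncovered = list(edges)
    while uncovered and len(cover) < k:
        u, v = uncovered[0]
        cover.add(random.choice((u, v)))  # the branching decision, taken at random
        uncovered = [(a, b) for (a, b) in uncovered
                     if a not in cover and b not in cover]
    return cover if not uncovered else None

def opp_vertex_cover(edges, k, failure_prob=1e-3):
    """Boost the 2^-k single-branch success probability by independent repetition."""
    trials = int(math.ceil((2 ** k) * math.log(1 / failure_prob)))
    for _ in range(trials):
        cover = random_branch_vertex_cover(edges, k)
        if cover is not None:
            return cover
    return None  # with high probability, no vertex cover of size <= k exists

# Example: a 4-cycle has a vertex cover of size 2, e.g. {0, 2}.
print(opp_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)], k=2))
```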

    Entanglement, intractability and no-signaling

    We consider the problem of deriving the no-signaling condition from the assumption that, as seen from a complexity-theoretic perspective, the universe is not an exponential place. A fact that disallows such a derivation is the existence of polynomial superluminal gates, hypothetical primitive operations that enable superluminal signaling but not the efficient solution of intractable problems. It therefore follows that, if this assumption is a basic principle of physics, either it must be supplemented with additional assumptions to prohibit such gates or, improbably, no-signaling is not a universal condition. Yet a gate of this kind is possibly implicit, though not recognized as such, in a decade-old quantum optical experiment involving position-momentum entangled photons. Here we describe a feasible modified version of the experiment that appears to explicitly demonstrate the action of this gate. Some obvious counter-claims are shown to be invalid. We believe that the unexpected possibility of polynomial superluminal operations arises because some practically measured quantum optical quantities are not describable as standard quantum mechanical observables. Comment: 17 pages, 2 figures (REVTeX 4).

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature on quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed. Comment: v3, 33 pages; typos corrected and references added.
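    As a hedged illustration of the data-uploading question mentioned at the end of the abstract, the sketch below computes the amplitude encoding of a classical vector, one standard proposal for loading data into quantum form. It uses only numpy, does not touch any quantum SDK, and the function name and padding convention are assumptions made for exposition.

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector x of length N <= 2^n to the amplitudes of an
    n-qubit state |psi> = sum_i (x_i / ||x||) |i>.

    Returns the zero-padded, L2-normalized amplitude vector.  Preparing the
    corresponding state on hardware generally needs a circuit of size O(2^n)
    in the worst case, which is one reason data loading is often cited as a
    practical bottleneck for quantum machine learning.
    """
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the all-zero vector")
    return padded / norm

print(amplitude_encode([3.0, 4.0]))  # -> [0.6, 0.8], a valid single-qubit state
```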

    Quasi-polynomial Hitting-set for Set-depth-Delta Formulas

    We call a depth-4 formula C set-depth-4 if there exists an (unknown) partition (X_1, ..., X_d) of the variable indices [n] that the top product layer respects, i.e. C(x) = \sum_{i=1}^k \prod_{j=1}^{d} f_{i,j}(x_{X_j}), where f_{i,j} is a sparse polynomial in F[x_{X_j}]. Extending this definition to any depth, we call a depth-Delta formula C (consisting of alternating layers of Sigma and Pi gates, with a Sigma gate on top) a set-depth-Delta formula if every Pi-layer in C respects an (unknown) partition of the variables; if Delta is even then the product gates of the bottom-most Pi-layer are allowed to compute arbitrary monomials. In this work, we give a hitting-set generator for set-depth-Delta formulas (over any field) with running time polynomial in exp((Delta^2 log s)^{Delta - 1}), where s is the size bound on the input set-depth-Delta formula. In other words, we give a quasi-polynomial-time blackbox polynomial identity test for such constant-depth formulas. Previously, even the very special case of Delta = 3 (also known as set-multilinear depth-3 circuits) had no known sub-exponential-time hitting-set generator. This was posed as an open problem by Shpilka & Yehudayoff (FnT-TCS 2010); the model was first studied by Nisan & Wigderson (FOCS 1995). Our work settles this question, not only for depth 3 but for all depths up to epsilon log s / log log s, for a fixed constant epsilon < 1. The technique is to investigate depth-Delta formulas via depth-(Delta-1) formulas over a Hadamard algebra, after applying a 'shift' on the variables. We propose a new algebraic conjecture about low-support rank concentration in the latter formulas, and manage to prove it in the case of set-depth-Delta formulas. Comment: 22 pages.
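    A hitting-set generator yields a blackbox identity test in the obvious way: evaluate the unknown formula at every point of the hitting set and declare it nonzero iff some evaluation is nonzero. The sketch below shows only that usage pattern, with a random (Schwartz-Zippel) point set standing in for the structured, deterministic quasi-polynomial hitting set constructed in the paper; all names and bounds here are illustrative assumptions.

```python
import random

def blackbox_pit(poly, n_vars, field_size=2 ** 61 - 1, trials=30):
    """Blackbox polynomial identity test given only evaluation access to `poly`.

    A hitting set H for a class of formulas is a point set on which every
    nonzero formula in the class has a nonzero evaluation; a blackbox test then
    just evaluates the formula on all of H.  For illustration, H here consists
    of independent uniform points: by the Schwartz-Zippel lemma, a nonzero
    polynomial of degree d is nonzero at each such point with probability
    >= 1 - d / field_size.  The paper's contribution is a deterministic,
    quasi-polynomial-size hitting set for set-depth-Delta formulas; this sketch
    only shows how such a set is used once it exists.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if poly(point) % field_size != 0:
            return "nonzero"
    return "probably zero"

# (x0 + x1)^2 - x0^2 - 2*x0*x1 - x1^2 is identically zero; x0*x1 + 1 is not.
zero_poly = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2 - 2 * p[0] * p[1] - p[1] ** 2
nonzero_poly = lambda p: p[0] * p[1] + 1
print(blackbox_pit(zero_poly, n_vars=2))     # probably zero
print(blackbox_pit(nonzero_poly, n_vars=2))  # nonzero
```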

    Commonality in the LME aluminium and copper volatility processes through a FIGARCH lens

    We consider dynamic representations of spot and three-month aluminium and copper volatilities. These are the two most important metals traded on the London Metal Exchange (LME). They share common business cycle factors and are traded under identical contract specifications. We apply the bivariate FIGARCH model, which allows a parsimonious representation of long-memory volatility processes. Our results show that spot and three-month aluminium and copper volatilities follow long-memory processes, that they exhibit a common degree of fractional integration and that the processes are symmetric. However, there is no evidence that the processes are fractionally cointegrated. This high degree of commonality may result from the common LME trading process.
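    To convey the flavour of the long-memory finding, the sketch below is a minimal Geweke-Porter-Hudak (GPH) log-periodogram estimate of the fractional-integration parameter d of a volatility proxy. It is not the bivariate FIGARCH estimation used in the study; the function name, bandwidth choice and white-noise example are assumptions for illustration only.

```python
import numpy as np

def gph_long_memory(series, bandwidth_power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimate of the long-memory
    parameter d of a series (e.g. squared or absolute returns as a volatility proxy).

    Regresses the log periodogram on log(4 sin^2(lambda_j / 2)) over the first
    m = floor(T^bandwidth_power) Fourier frequencies; the slope estimates -d.
    A value of d well inside (0, 1) is consistent with the fractionally
    integrated (FIGARCH-type) volatility dynamics discussed above.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    T = len(x)
    m = int(np.floor(T ** bandwidth_power))
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / T            # Fourier frequencies lambda_j
    periodogram = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2.0 * np.pi * T)
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope  # estimate of d

# Example with simulated squared white noise (d should come out near 0):
rng = np.random.default_rng(0)
print(round(gph_long_memory(rng.standard_normal(5000) ** 2), 2))
```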