
    The Equivalence of Sampling and Searching

In a sampling problem, we are given an input x, and asked to sample approximately from a probability distribution D_x. In a search problem, we are given an input x, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity and algorithmic information theory to show that sampling and search problems are essentially equivalent. More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any "reasonable" complexity class, then R_S is in the search version of C if and only if S is in the sampling version. As one application, we show that SampP=SampBQP if and only if FBPP=FBQP: in other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve. A second application is that, assuming a plausible conjecture, there exists a search problem R that can be solved using a simple linear-optics experiment, but that cannot be solved efficiently by a classical computer unless the polynomial hierarchy collapses. That application will be described in a forthcoming paper with Alex Arkhipov on the computational complexity of linear optics.

    Comment: 16 pages
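    A minimal sketch of the two problem types, using hypothetical toy instances rather than the paper's Kolmogorov-complexity construction: sample_D plays the role of a sampler for a distribution D_x, in_A is a membership test for a nonempty set A_x, and the reduction shown (solving the search problem given sampling access) is the easy direction.

```python
import random

def sample_D(x: str) -> str:
    """Sampling problem: given input x, draw from the distribution D_x.
    Toy choice: D_x is uniform over bit strings of length len(x)."""
    return "".join(random.choice("01") for _ in range(len(x)))

def in_A(x: str, y: str) -> bool:
    """Search problem: A_x is a nonempty set of acceptable outputs.
    Toy choice: A_x = strings of length len(x) with a majority of 1s."""
    return len(y) == len(x) and 2 * y.count("1") > len(y)

def search_via_sampling(x: str, tries: int = 100) -> str | None:
    """Solve the search problem given only sampling access: draw
    repeatedly and return the first sample that lands in A_x.
    Succeeds with high probability for this toy pair."""
    for _ in range(tries):
        y = sample_D(x)
        if in_A(x, y):
            return y
    return None

print(search_via_sampling("10110"))
```

    The paper's contribution is the nontrivial direction: building, from any sampling problem S, a search problem R_S whose difficulty tracks that of S in every "reasonable" complexity class.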

    Why Philosophers Should Care About Computational Complexity

One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.

    Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added

    Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers

A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the SAT problem and related problems within the polynomial-time hierarchy. For example, for the SAT problem, the state of the art is that the problem cannot be solved by random-access machines in n^c time and n^{o(1)} space simultaneously for c < 2cos(pi/7) ≈ 1.801. We extend this lower-bound approach to the quantum and randomized domains. Combining Grover's algorithm with components from SAT time-space lower bounds, we show that there are problems verifiable in O(n) time with quantum Merlin-Arthur protocols that cannot be solved in n^c time and n^{o(1)} space simultaneously for c < (3+sqrt(3))/2 ≈ 2.366, a super-quadratic time lower bound. This result and the prior work on SAT can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full. We also show lower bounds against randomized algorithms: there are problems verifiable in O(n) time with (classical) Merlin-Arthur protocols that cannot be solved in n^c randomized time and n^{o(1)} space simultaneously for c < 1.465, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved to c < 1.5.

    Comment: 38 pages, 5 figures. To appear in ITCS 2021
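    As a quick numeric check of the exponents quoted above (this only evaluates the stated constants, it does not derive the bounds):

```python
import math

# The SAT time-space lower-bound exponent from the Fortnow line of work.
sat_exponent = 2 * math.cos(math.pi / 7)

# The new super-quadratic exponent for QMA-verifiable problems.
qma_exponent = (3 + math.sqrt(3)) / 2

print(f"2cos(pi/7)    = {sat_exponent:.4f}")   # ~1.8019
print(f"(3+sqrt(3))/2 = {qma_exponent:.4f}")   # ~2.3660
```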

    The Computational Complexity of Linear Optics

We give new evidence that quantum computers -- moreover, rudimentary quantum computers built entirely out of linear-optical elements -- cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P=BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the "Permanent Anti-Concentration Conjecture", which says that |Per(A)|>=sqrt(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.

    Comment: 94 pages, 4 figures
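    The permanent connection can be made concrete with a standard classical routine: Ryser's inclusion-exclusion formula, an O(2^n * n^2) algorithm that is textbook material rather than code from the paper. The sketch below evaluates it once on a matrix of i.i.d. N(0,1) entries (real Gaussians here, for simplicity) and compares |Per(A)| against the sqrt(n!) scale appearing in the Permanent Anti-Concentration Conjecture.

```python
import itertools
import math
import random

def permanent(A):
    """Ryser's formula: Per(A) = (-1)^n * sum over nonempty subsets S
    of columns of (-1)^|S| * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):
        if not any(subset):
            continue
        sign = (-1) ** (n - sum(subset))
        prod = 1.0
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if subset[j])
        total += sign * prod
    return total

# One draw of an n x n matrix with i.i.d. N(0,1) entries, compared
# against sqrt(n!); the conjecture asserts |Per(A)| >= sqrt(n!)/poly(n)
# with high probability over A.
n = 6
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
print(abs(permanent(A)), math.sqrt(math.factorial(n)))
```

    The 2^n cost of this routine is no accident: the paper's hardness results rest on the permanent being #P-hard, so no efficient classical shortcut is expected.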

    Three Puzzles on Mathematics, Computation, and Games

In this lecture I will talk about three mathematical puzzles involving mathematics and computation that have preoccupied me over the years. The first puzzle is to understand the amazing success of the simplex algorithm for linear programming. The second puzzle is about errors made when votes are counted during elections. The third puzzle is: are quantum computers possible?

    Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Complexity Theory

Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Causal loops: logically consistent correlations, time travel, and computation

Causal loops are loops in cause-effect chains: an effect can be the cause of that effect's cause. We show that causal loops can be unproblematic, and explore them from different points of view. This thesis is motivated by quantum theory, general relativity, and quantum gravity. By accepting all of quantum theory, one can ask whether the possibility to take superpositions extends to causal structures. Then again, quantum theory comes with conceptual problems: can we overcome these problems by dropping causality? General relativity is consistent with space-time geometries that allow for time travel: what happens to systems traveling along closed time-like curves, and are there reasons to rule out the existence of closed time-like curves in nature? Finally, a candidate for a theory of quantum gravity is quantum theory with a different, relaxed space-time geometry. Motivated by these questions, we explore the classical world of the non-causal. This world is non-empty, and what can happen in such a world is sometimes weird, but not too crazy. What is weird is that in these worlds, a party (or event) can be in the future and in the past of some other party (time travel). What is not too crazy is that this theoretical possibility does not lead to any contradiction. Moreover, one can identify logical consistency with the existence of a unique fixed point in a cause-effect chain. This can be understood as follows: no fixed point is the same as having a contradiction (too stiff); multiple fixed points, in turn, are the same as having an unspecified system (too loose). This leads to a series of results in the field: a characterization of classical non-causal correlations, closed time-like curves that do not restrict the actions of experimenters, and a self-referential model of computation. We study the computational power of this model and use it to upper bound the computational power of closed time-like curves. Time travel has long been termed weird; what we show here, however, is that it is not too crazy: it is not possible to solve hard problems by traveling through time. Finally, we apply our results on causal loops to other fields: an analysis with Kolmogorov complexity, local and classical simulation of PR-box correlations with closed time-like curves, and a short note on self-referentiality in language.
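    The fixed-point criterion lends itself to a small illustration. The sketch below (toy maps of my own choosing, not from the thesis) models one trip around a causal loop as a function on a finite state space and classifies consistency by counting fixed points.

```python
def fixed_points(f, states):
    """States unchanged by one trip around the loop."""
    return [s for s in states if f(s) == s]

def classify(f, states):
    """Zero fixed points: contradiction (too stiff). Exactly one:
    logically consistent. Several: unspecified system (too loose)."""
    k = len(fixed_points(f, states))
    if k == 0:
        return "inconsistent (grandfather-style paradox)"
    if k == 1:
        return "consistent (unique fixed point)"
    return "underdetermined (multiple fixed points)"

bits = [0, 1]
print(classify(lambda b: 1 - b, bits))  # b -> NOT b: no fixed point
print(classify(lambda b: 1, bits))      # constant 1: unique fixed point
print(classify(lambda b: b, bits))      # identity: both states fixed
```

    The first map is the classic paradox (the loop negates its own input), while the unique-fixed-point case is the kind of loop the thesis shows can exist without contradiction.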