
    Efficient solvability of Hamiltonians and limits on the power of some quantum computational models

    We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently ("exactly") solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
    Comment: 6 pages; no figure
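
    To illustrate the flavor of such a simulation (this is only a minimal sketch, not the paper's general algorithm), the snippet below tracks a single qubit entirely in the adjoint representation of su(2): the "state" is just the vector of expectation values of the algebra basis (the Bloch vector), and each Lie-algebraic gate is a rotation of that vector, so the cost scales with dim su(2) = 3 rather than with the Hilbert-space dimension.

```python
# Minimal sketch: simulating a Lie-algebraic computation in the adjoint
# representation, here for su(2).  Only the expectation values of the
# algebra basis are tracked, never a Hilbert-space state vector.
import numpy as np
from scipy.linalg import expm

def adjoint_rotation(axis, theta):
    """3x3 rotation acting on the Bloch (expectation-value) vector for the
    gate exp(-i * theta * axis.sigma / 2)."""
    nx, ny, nz = axis / np.linalg.norm(axis)
    generator = np.array([[0.0, -nz,  ny],
                          [ nz, 0.0, -nx],
                          [-ny,  nx, 0.0]])
    return expm(theta * generator)

# Initial state |0><0| has Bloch vector (0, 0, 1): <X> = <Y> = 0, <Z> = 1.
bloch = np.array([0.0, 0.0, 1.0])

# A gate sequence specified by (rotation axis, angle) pairs.
gates = [((0, 1, 0), np.pi / 2),   # rotate about Y
         ((0, 0, 1), np.pi / 4)]   # then about Z

for axis, theta in gates:
    bloch = adjoint_rotation(np.asarray(axis, dtype=float), theta) @ bloch

print("final <X>, <Y>, <Z> =", bloch)   # the measured expectation values
```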

    Discovering the roots: Uniform closure results for algebraic classes under factoring

    Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields a better circuit complexity in the case when the number of roots $r$ is small but the multiplicities are exponentially large. Our method sets up a linear system in $r$ unknowns and iteratively builds the roots as formal power series. For an algebraic circuit $f(x_1,\ldots,x_n)$ of size $s$ we prove that each factor has size at most a polynomial in $s$ and the degree of the squarefree part of $f$. Consequently, if $f_1$ is a $2^{\Omega(n)}$-hard polynomial then any nonzero multiple $\prod_{i} f_i^{e_i}$ is equally hard for arbitrary positive $e_i$'s, assuming that $\sum_i \deg(f_i)$ is at most $2^{O(n)}$. It is an old open question whether the class of poly($n$)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial $f$ of degree $n^{O(1)}$ and formula (resp. ABP) size $n^{O(\log n)}$, we can find a similar-size formula (resp. ABP) factor in randomized poly($n^{\log n}$)-time. Consequently, if the determinant requires formulas of size $n^{\Omega(\log n)}$, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization. We show that under a random linear transformation $\tau$, $f(\tau\overline{x})$ completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis. This property, together with allRootsNI, provides the techniques that help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g. Zassenhaus, J. NT 1969; Kaltofen, STOC 1985-7; Bürgisser, FOCS 2001).
    Comment: 33 pages, no figures
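
    As a point of reference for the classical step the abstract starts from (this is not the allRootsNI matrix recurrence), the sketch below uses Newton iteration to lift a single power-series root of a bivariate polynomial, doubling the precision in x at every step; sympy is assumed to be available.

```python
# Minimal sketch: Newton iteration lifting one formal power-series root.
# Here f(x, y) = y^2 - (1 + x), whose power-series root is y(x) = sqrt(1 + x).
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - (1 + x)

root = sp.Integer(1)        # f(0, 1) = 0 and df/dy(0, 1) != 0: a simple root at x = 0
precision = 8
k = 1
while k < precision:
    k *= 2
    # One Newton step y <- y - f(y)/f'(y), truncated to k terms in x:
    # each step doubles the number of correct power-series coefficients.
    step = root - f.subs(y, root) / sp.diff(f, y).subs(y, root)
    root = sp.expand(sp.series(step, x, 0, k).removeO())

# Matches the Taylor expansion of sqrt(1 + x) up to the requested precision.
print(sp.series(root, x, 0, precision))
```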

    Classical simulation of noninteracting-fermion quantum circuits

    We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons, where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits (quant-ph/0006088).
    Comment: 26 pages, 1 figure, uses wick.sty; references added to recent results by E. Knill
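
    A rough illustration of why such circuits are classically tractable follows; it is restricted, for brevity, to particle-number-conserving gates with no intermediate measurements, which is narrower than the paper's result. The point is only that the state can be tracked through an n x n correlation matrix instead of a 2^n-dimensional amplitude vector.

```python
# Minimal sketch (assumption: number-conserving free-fermion gates only).
import numpy as np
from scipy.linalg import expm

n_modes = 4
# Correlation matrix C[j, k] = <a_k^dagger a_j>; modes 0 and 1 start occupied.
C = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)

def hopping_gate(i, j, theta, n):
    """Single-particle unitary exp(-i*theta*h) for the hopping term
    h = |i><j| + |j><i| on the n-dimensional mode space."""
    h = np.zeros((n, n), dtype=complex)
    h[i, j] = h[j, i] = 1.0
    return expm(-1j * theta * h)

# A circuit of noninteracting-fermion gates, each mixing a pair of modes.
for (i, j, theta) in [(0, 2, np.pi / 4), (1, 3, np.pi / 3), (2, 3, np.pi / 7)]:
    U = hopping_gate(i, j, theta, n_modes)
    C = U @ C @ U.conj().T   # update n x n correlations, never 2^n amplitudes

print("mode occupations:", np.real(np.diag(C)))        # <a_j^dagger a_j>
print("total particle number:", np.real(np.trace(C)))  # conserved, stays 2
```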

    Set Similarity Search for Skewed Data

    Set similarity join, as well as the corresponding indexing problem, set similarity search, are fundamental primitives for managing noisy or uncertain data. For example, these primitives can be used in data cleaning to identify different representations of the same object. In many cases one can represent an object as a sparse 0-1 vector, or equivalently as the set of nonzero entries in such a vector. A set similarity join can then be used to identify those pairs that have an exceptionally large dot product (or intersection, when viewed as sets). We choose to focus on identifying vectors with large Pearson correlation, but results extend to other similarity measures. In particular, we consider the indexing problem of identifying correlated vectors in a set S of vectors sampled from {0,1}^d. Given a query vector y and a parameter alpha in (0,1), we need to search for an alpha-correlated vector x in a data structure representing the vectors of S. This kind of similarity search has been intensely studied in worst-case (non-random data) settings. Existing theoretically well-founded methods for set similarity search are often inferior to heuristics that take advantage of skew in the data distribution, i.e., widely differing frequencies of 1s across the d dimensions. The main contribution of this paper is to analyze the set similarity problem under a random data model that reflects the kind of skewed data distributions seen in practice, allowing theoretical results much stronger than what is possible in worst-case settings. Our indexing data structure is a recursive, data-dependent partitioning of vectors inspired by recent advances in set similarity search. Previous data-dependent methods do not seem to allow us to exploit skew in item frequencies, so we believe that our work sheds further light on the power of data dependence.
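
    The sketch below only illustrates the similarity measure and the brute-force baseline that any index must beat; it is not the paper's recursive data-dependent partitioning. Vectors in {0,1}^d are stored as sets of their nonzero coordinates, and the Pearson correlation is computed from the set sizes and the intersection size.

```python
# Minimal sketch: Pearson correlation of sparse 0-1 vectors given as sets,
# plus the linear-scan search baseline.
import math

def pearson(x: set, y: set, d: int) -> float:
    """Pearson correlation of two 0-1 vectors of dimension d, given as sets."""
    p, q = len(x) / d, len(y) / d
    t = len(x & y) / d                       # fraction of shared 1s
    denom = math.sqrt(p * (1 - p) * q * (1 - q))
    return (t - p * q) / denom if denom > 0 else 0.0

def search(S, query, alpha, d):
    """Return some x in S with correlation >= alpha to the query, if any."""
    for x in S:
        if pearson(x, query, d) >= alpha:
            return x
    return None

d = 10
S = [{0, 1, 2, 5}, {3, 4, 7}, {0, 1, 2, 6, 9}]
print(search(S, {0, 1, 2, 5, 6}, alpha=0.5, d=d))
```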

    Improved Simulation of Stabilizer Circuits

    The Gottesman-Knill theorem says that a stabilizer circuit -- that is, a quantum circuit consisting solely of CNOT, Hadamard, and phase gates -- can be simulated efficiently on a classical computer. This paper improves that theorem in several directions. First, by removing the need for Gaussian elimination, we make the simulation algorithm much faster at the cost of a factor-2 increase in the number of bits needed to represent a state. We have implemented the improved algorithm in a freely-available program called CHP (CNOT-Hadamard-Phase), which can handle thousands of qubits easily. Second, we show that the problem of simulating stabilizer circuits is complete for the classical complexity class ParityL, which means that stabilizer circuits are probably not even universal for classical computation. Third, we give efficient algorithms for computing the inner product between two stabilizer states, putting any n-qubit stabilizer circuit into a "canonical form" that requires at most O(n^2/log n) gates, and other useful tasks. Fourth, we extend our simulation algorithm to circuits acting on mixed states, circuits containing a limited number of non-stabilizer gates, and circuits acting on general tensor-product initial states but containing only a limited number of measurements.
    Comment: 15 pages. Final version with some minor updates and corrections. Software at http://www.scottaaronson.com/ch
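
    A cut-down sketch of the tableau idea is shown below: n stabilizer generators stored as X/Z bit vectors plus sign bits, with each Clifford gate a constant-time bit update per generator. It omits the destabilizer rows and the measurement routine that the full CHP algorithm adds, so it should be read as an illustration of the representation rather than the paper's complete simulator.

```python
# Minimal stabilizer-tableau sketch: generators only, no measurements.
import numpy as np

n = 3
# Start in |000>: stabilizer generators Z1, Z2, Z3.
x = np.zeros((n, n), dtype=np.uint8)   # x[i, j]: X part of generator i on qubit j
z = np.eye(n, dtype=np.uint8)          # z[i, j]: Z part
r = np.zeros(n, dtype=np.uint8)        # sign bit of each generator

def hadamard(q):
    global r
    r ^= x[:, q] & z[:, q]
    x[:, q], z[:, q] = z[:, q].copy(), x[:, q].copy()

def phase(q):                          # the S gate
    global r
    r ^= x[:, q] & z[:, q]
    z[:, q] ^= x[:, q]

def cnot(a, b):                        # control a, target b
    global r
    r ^= x[:, a] & z[:, b] & (x[:, b] ^ z[:, a] ^ 1)
    x[:, b] ^= x[:, a]
    z[:, a] ^= z[:, b]

# Prepare a GHZ state: H on qubit 0, then two CNOTs.
hadamard(0); cnot(0, 1); cnot(0, 2)
# Generators are now XXX, ZZI, ZIZ (a generating set of the GHZ stabilizer group).
print(x, z, r, sep="\n")
```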

    Adiabatic quantum algorithm for search engine ranking

    We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of webpages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked $\log(n)$ entries of the quantum PageRank state can then be estimated with a polynomial quantum speedup. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.
    Comment: 7 pages, 5 figures; closer to published version
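
    For context, the classical object being encoded is the PageRank vector, the stationary distribution of the Google matrix. The sketch below is a standard power-iteration computation of that vector, not the adiabatic quantum algorithm itself.

```python
# Minimal classical PageRank by power iteration on the Google matrix.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """adj[i, j] = 1 if page j links to page i."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=0)
    # Column-stochastic link matrix; dangling pages link uniformly to everyone.
    M = np.where(out_degree > 0, adj / np.maximum(out_degree, 1), 1.0 / n)
    p = np.full(n, 1.0 / n)
    while True:
        p_next = damping * M @ p + (1 - damping) / n
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))   # importance scores, summing to 1
```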

    Classical simulation of measurement-based quantum computation on higher-genus surface-code states

    We consider the efficiency of classically simulating measurement-based quantum computation on surface-code states. We devise a method for calculating the elements of the probability distribution for the classical output of the quantum computation. The operational cost of this method is polynomial in the size of the surface-code state, but in the worst case scales as $2^{2g}$ in the genus $g$ of the surface embedding the code. However, there are states in the code space for which the simulation becomes efficient. In general, the simulation cost is exponential in the entanglement contained in a certain effective state, capturing the encoded state, the encoding and the local post-measurement states. The same efficiencies hold, with additional assumptions on the temporal order of measurements and on the tessellations of the code surfaces, for the harder task of sampling from the distribution of the computational output.
    Comment: 21 pages, 13 figures

    A Distributed Multilevel Force-directed Algorithm

    The wide availability of powerful and inexpensive cloud computing services naturally motivates the study of distributed graph layout algorithms, able to scale to very large graphs. Nowadays, to process Big Data, companies are increasingly relying on PaaS infrastructures rather than buying and maintaining complex and expensive hardware. So far, only a few examples of basic force-directed algorithms that work in a distributed environment have been described. By contrast, the design of a distributed multilevel force-directed algorithm is a much more challenging task, not yet addressed. We present the first multilevel force-directed algorithm based on a distributed vertex-centric paradigm, and its implementation on Giraph, a popular platform for distributed graph algorithms. Experiments show the effectiveness and the scalability of the approach. Using an inexpensive cloud computing service of Amazon, we draw graphs with ten million edges in about 60 minutes.
    Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016)
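
    The sketch below is a plain, single-machine, Fruchterman-Reingold-style layout loop, shown only to illustrate the force-directed primitive being scaled up; the paper's distributed, multilevel, vertex-centric algorithm on Giraph is not reproduced here.

```python
# Minimal single-machine force-directed layout (repulsion between all pairs,
# attraction along edges, with a cooling schedule).
import numpy as np

def force_directed_layout(edges, n, iterations=200, area=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    k = np.sqrt(area / n)                       # ideal edge length
    for step in range(iterations):
        disp = np.zeros((n, 2))
        # Repulsive forces between every pair of vertices.
        for v in range(n):
            delta = pos[v] - pos                # vectors from every u to v
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[v] += ((k * k / dist**2)[:, None] * delta).sum(axis=0)
        # Attractive forces along edges.
        for u, v in edges:
            delta = pos[v] - pos[u]
            dist = np.linalg.norm(delta) + 1e-9
            pull = (dist / k) * delta
            disp[v] -= pull
            disp[u] += pull
        # Limit displacements by a decreasing temperature.
        temp = 0.1 * (1 - step / iterations)
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, temp)
    return pos

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(force_directed_layout(edges, n=4))
```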

    On Measuring Non-Recursive Trade-Offs

    We investigate the phenomenon of non-recursive trade-offs between descriptional systems in an abstract fashion. We aim at categorizing non-recursive trade-offs by bounds on their growth rate, and show how to deduce such bounds in general. We also identify criteria which, in the spirit of abstract language theory, allow us to deduce non-recursive trade-offs from effective closure properties of language families on the one hand, and differences in the decidability status of basic decision problems on the other. We develop a qualitative classification of non-recursive trade-offs in order to obtain a better understanding of this very fundamental behaviour of descriptional systems.