    A survey on algorithmic aspects of modular decomposition

    Modular decomposition is a technique that applies to, but is not restricted to, graphs. The notion of a module appears naturally in the proofs of many graph-theoretic theorems. Computing the modular decomposition tree is an important preprocessing step for solving a large number of combinatorial optimization problems. Since the first polynomial-time algorithm in the early 1970s, the algorithmic theory of modular decomposition has developed considerably. This paper surveys the ideas and techniques that arose from this line of research.
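    The notion of a module is easy to make concrete. A minimal Python sketch (my illustration, not taken from the survey): a set M of vertices is a module of a graph G if every vertex outside M is adjacent either to all of M or to none of M. Checking this definition directly is the brute force that the surveyed algorithms improve upon.

```python
def is_module(adj, candidate):
    """adj: dict mapping vertex -> set of neighbours; candidate: set of vertices."""
    m = set(candidate)
    for v in set(adj) - m:
        inside = adj[v] & m
        if inside and inside != m:  # v "distinguishes" members of M
            return False
    return True

# Example: in the path a-b-c-d, {b, c} is not a module (a sees b but not c),
# while the whole vertex set trivially is.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_module(adj, {"b", "c"}))             # False
print(is_module(adj, {"a", "b", "c", "d"}))   # True
```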

    Quantum Computing: Pro and Con

    I assess the potential of quantum computation. Broad and important applications must be found to justify construction of a quantum computer; I review some of the known quantum algorithms and consider the prospects for finding new ones. Quantum computers are notoriously susceptible to making errors; I discuss recently developed fault-tolerant procedures that enable a quantum computer with noisy gates to perform reliably. Quantum computing hardware is still in its infancy; I comment on the specifications that should be met by future hardware. Over the past few years, work on quantum computation has erected a new classification of computational complexity, has generated profound insights into the nature of decoherence, and has stimulated the formulation of new techniques in high-precision experimental physics. A broad interdisciplinary effort will be needed if quantum computers are to fulfill their destiny as the world's fastest computing devices. (This paper is an expanded version of remarks that were prepared for a panel discussion at the ITP Conference on Quantum Coherence and Decoherence, 17 December 1996.)

    Comment: 17 pages, LaTeX, submitted to Proc. Roy. Soc. Lond. A, minor corrections

    Efficient implementation of the Hardy-Ramanujan-Rademacher formula

    We describe how the Hardy-Ramanujan-Rademacher formula can be implemented to allow the partition function p(n) to be computed with softly optimal complexity O(n^{1/2+o(1)}) and very little overhead. A new implementation based on these techniques achieves speedups in excess of a factor 500 over previously published software and has been used by the author to calculate p(10^19), an exponent twice as large as in previously reported computations. We also investigate performance for multi-evaluation of p(n), where our implementation of the Hardy-Ramanujan-Rademacher formula becomes superior to power series methods on far denser sets of indices than previous implementations. As an application, we determine over 22 billion new congruences for the partition function, extending Weaver's tabulation of 76,065 congruences.

    Comment: updated version containing an unconditional complexity proof; accepted for publication in LMS Journal of Computation and Mathematics
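    The Hardy-Ramanujan-Rademacher formula itself, with its Kloosterman-type sums A_k(n), is intricate to implement; as a point of contrast, here is a sketch of the baseline the paper benchmarks against. Euler's pentagonal number theorem gives the recurrence p(n) = sum_{k>=1} (-1)^{k+1} [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)], computing all of p(0..n) with O(n^{3/2}) arithmetic operations (this is a standard textbook method, not the paper's algorithm).

```python
def partitions_upto(n):
    """Return the list [p(0), p(1), ..., p(n)] via Euler's pentagonal recurrence."""
    p = [0] * (n + 1)
    p[0] = 1
    for m in range(1, n + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[m - g1]
            if g2 <= m:
                total += sign * p[m - g2]
            k += 1
        p[m] = total
    return p

print(partitions_upto(100)[100])  # 190569292
```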

    Reliable Quantum Computers

    The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)

    Comment: 24 pages, LaTeX, submitted to Proc. Roy. Soc. Lond. A, minor corrections
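    A back-of-the-envelope sketch of why a threshold exists, in the standard textbook form for concatenated distance-3 codes (a common exposition of the threshold idea, not numbers taken from this paper): after L levels of concatenation the logical error rate per gate behaves roughly as p_L ~ p_th * (p/p_th)^(2^L), so once the physical rate p is below the threshold p_th, each level squares the ratio and the error falls doubly exponentially. The values p = 1e-4 and p_th = 1e-3 below are purely illustrative.

```python
def logical_error(p, p_th, levels):
    """Rough logical error rate after `levels` rounds of concatenation."""
    return p_th * (p / p_th) ** (2 ** levels)

p, p_th = 1e-4, 1e-3  # illustrative physical error rate and threshold
for L in range(5):
    print(f"levels={L}  logical error ~ {logical_error(p, p_th, L):.3e}")
```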

    Harmonic and Refined Harmonic Shift-Invert Residual Arnoldi and Jacobi--Davidson Methods for Interior Eigenvalue Problems

    This paper concerns the harmonic shift-invert residual Arnoldi (HSIRA) and Jacobi--Davidson (HJD) methods, as well as their refined variants RHSIRA and RHJD, for the interior eigenvalue problem. Each method needs to solve an inner linear system to expand the subspace successively. When the linear systems are solved only approximately, we are led to the inexact methods. We prove that the inexact HSIRA, RHSIRA, HJD and RHJD methods mimic their exact counterparts well when the inner linear systems are solved with only low or modest accuracy. We show that (i) the exact HSIRA and HJD expand subspaces better than the exact SIRA and JD, and (ii) the exact RHSIRA and RHJD expand subspaces better than the exact HSIRA and HJD. Based on the theory, we design stopping criteria for the inner solves. To be practical, we present restarted HSIRA, HJD, RHSIRA and RHJD algorithms. Numerical results demonstrate that these algorithms are much more efficient than the restarted standard SIRA and JD algorithms, and furthermore the refined harmonic algorithms substantially outperform the harmonic ones.

    Comment: 15 pages, 4 figures
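    For orientation, here is a minimal sketch of the standard shift-invert idea these methods build on (not the paper's HSIRA/HJD algorithms), using SciPy's ARPACK wrapper: eigenvalues of A closest to a shift sigma become the extreme eigenvalues of (A - sigma*I)^{-1}, which Arnoldi/Lanczos finds easily. In exact shift-invert each inner step solves (A - sigma*I)x = b; the paper's inexact variants analyze how accurately such solves really need to be done.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Symmetric test matrix with known spectrum 1, 2, ..., n.
A = sp.diags([np.arange(1, n + 1, dtype=float)], [0], format="csc")
sigma = 1000.3  # target interior eigenvalues near this shift

# Shift-invert mode: finds the k eigenvalues of A nearest sigma.
vals, vecs = spla.eigsh(A, k=4, sigma=sigma, which="LM")
print(np.sort(vals))
```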

    Introduction to Quantum Information Processing

    As a result of the capabilities of quantum information, the science of quantum information processing is now a prospering, interdisciplinary field focused on better understanding the possibilities and limitations of the underlying theory, on developing new applications of quantum information, and on physically realizing controllable quantum devices. The purpose of this primer is to provide an elementary introduction to quantum information processing, and then to briefly explain how we hope to exploit the advantages of quantum information. These two sections can be read independently. For reference, we have included a glossary of the main terms of quantum information.

    Comment: 48 pages, to appear in LA Science. Hyperlinked PDF at http://www.c3.lanl.gov/~knill/qip/prhtml/prpdf.pdf, HTML at http://www.c3.lanl.gov/~knill/qip/prhtm

    Modeling the growth of fingerprints improves matching for adolescents

    We study the effect of growth on the fingerprints of adolescents and, based on this, suggest a simple method to adjust for growth when trying to recover a juvenile's fingerprint from a database years later. Based on longitudinal data sets in juveniles' criminal records, we show that growth essentially leads to an isotropic rescaling, so that we can use the strong correlation between growth in stature and limbs to model the growth of fingerprints as proportional to stature growth as documented in growth charts. The proposed rescaling leads to a 72% reduction of the distances between corresponding minutiae for the data set analyzed. These findings were corroborated by several verification tests. In an identification test on a database containing 3.25 million right index fingers at the Federal Criminal Police Office of Germany, the identification error rate of 20.8% was reduced to 2.1% by rescaling. The presented method is of striking simplicity and can easily be integrated into existing automated fingerprint identification systems.
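    A minimal sketch of the core idea as described in the abstract (the function name, scaling center, and numbers below are my assumptions for illustration, not the paper's implementation): treat growth as an isotropic rescaling and scale a juvenile print's minutiae by the ratio of statures read off a growth chart before matching.

```python
import numpy as np

def rescale_minutiae(minutiae_xy, stature_then_cm, stature_now_cm, center=None):
    """Scale minutiae positions isotropically, proportional to stature growth.

    minutiae_xy: (N, 2) array of minutiae positions in the juvenile print.
    center: point to scale about (defaults to the centroid of the minutiae).
    """
    pts = np.asarray(minutiae_xy, dtype=float)
    s = stature_now_cm / stature_then_cm  # isotropic scale factor
    c = pts.mean(axis=0) if center is None else np.asarray(center, dtype=float)
    return c + s * (pts - c)

# Illustrative (hypothetical) numbers: a print enrolled at stature 150 cm,
# searched against the database at stature 172 cm.
juvenile = np.array([[100.0, 120.0], [140.0, 90.0], [160.0, 150.0]])
print(rescale_minutiae(juvenile, 150.0, 172.0))
```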