
    Beta Reduction is Invariant, Indeed (Long Version)

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the λ-calculus a reasonable machine? Is there a way to measure the computational complexity of a λ-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the λ-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the λ-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic and proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the λ-calculus. The size explosion problem seems to imply that this is not possible: having the same notion of normal form, evaluation in the LSC is exponentially longer than in the λ-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to β-redexes, i.e., the steps that cause the blow-up in size. Comment: 29 pages
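    To make the size explosion problem concrete, here is a minimal Python sketch of a standard exploding family (an illustration chosen for this listing, not necessarily the exact terms used in the paper): with t_0 = x and t_{n+1} = (λz. z z) t_n, the term t_n has size O(n) yet normalizes, innermost-first, in n β-steps to a term of size 2^(n+1) − 1.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Var:
            name: str

        @dataclass(frozen=True)
        class Lam:
            var: str
            body: object

        @dataclass(frozen=True)
        class App:
            fun: object
            arg: object

        def size(t):
            """Number of syntax-tree nodes of a term."""
            if isinstance(t, Var):
                return 1
            if isinstance(t, Lam):
                return 1 + size(t.body)
            return 1 + size(t.fun) + size(t.arg)

        DELTA = Lam("z", App(Var("z"), Var("z")))  # the duplicator λz. z z

        def explode(n):
            """t_n: n nested applications of DELTA to a free variable x."""
            t = Var("x")
            for _ in range(n):
                t = App(DELTA, t)
            return t

        def normalize_innermost(t):
            """Innermost reduction, specialized to the redex (λz. z z) s → s s."""
            if isinstance(t, App) and t.fun == DELTA:
                arg, steps = normalize_innermost(t.arg)  # normalize the argument first
                return App(arg, arg), steps + 1          # one β-step duplicates it
            return t, 0

        for n in range(1, 8):
            nf, steps = normalize_innermost(explode(n))
            print(n, steps, size(nf))  # n steps, output of size 2^(n+1) - 1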

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notion of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e., the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties. Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
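    The sharing that makes this work can be glimpsed in a small sketch (hypothetical let-notation, not the LSC's actual syntax): with explicit substitutions, the exponentially large normal form of the family t_n above admits a linear-size shared representation, a chain of let-bindings in which each definition is used twice.

        def shared_normal_form(n):
            """Linear-size shared form of the size-2^(n+1)-1 normal form above:
            a1 = x x; a2 = a1 a1; ...; an = a(n-1) a(n-1)."""
            lets = ["a1 = x x"] + [f"a{i} = a{i-1} a{i-1}" for i in range(2, n + 1)]
            return "; ".join(lets) + f"; result = a{n}"

        print(shared_normal_form(4))
        # a1 = x x; a2 = a1 a1; a3 = a2 a2; a4 = a3 a3; result = a4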

    12th International Workshop on Termination (WST 2012): February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example, in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted; each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Generalized Unitary Coupled Cluster Wavefunctions for Quantum Computation

    We introduce a unitary coupled-cluster (UCC) ansatz termed k-UpCCGSD that is based on a family of sparse generalized doubles (D) operators, which provides an affordable and systematically improvable unitary coupled-cluster wavefunction suitable for implementation on a near-term quantum computer. k-UpCCGSD employs k products of the exponential of pair coupled-cluster double excitation operators (pCCD), together with generalized single (S) excitation operators. We compare its performance, in both efficiency of implementation and accuracy, with that of the generalized UCC ansatz employing the full generalized SD excitation operators (UCCGSD), as well as with the standard ansatz employing only SD excitations (UCCSD). k-UpCCGSD is found to show the best scaling for quantum computing applications, requiring a circuit depth of O(kN), compared with O(N^3) for UCCGSD and O((N−η)^2 η) for UCCSD, where N is the number of spin orbitals and η is the number of electrons. We analyzed the accuracy of these three ansätze by making classical benchmark calculations on the ground state and the first excited state of H₄ (STO-3G, 6-31G), H₂O (STO-3G), and N₂ (STO-3G), making additional comparisons to conventional coupled-cluster methods. The results for ground states show that k-UpCCGSD offers a good tradeoff between accuracy and cost, achieving chemical accuracy at lower cost of implementation on quantum computers than both UCCGSD and UCCSD. Excited states are calculated with an orthogonally constrained variational quantum eigensolver approach. This is seen to generally yield less accurate energies than for the corresponding ground states. We demonstrate that using a specialized multi-determinantal reference state constructed from classical linear response calculations allows these excited-state energetics to be improved.
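    For orientation, the quoted circuit-depth scalings can be compared numerically (a sketch with constant factors omitted; only the asymptotic counts from the abstract are used):

        # Asymptotic circuit depths quoted above, up to constant factors.
        # N = number of spin orbitals, eta = number of electrons,
        # k = number of pCCD-style products in k-UpCCGSD.
        def depth_k_upccgsd(N, k):
            return k * N

        def depth_uccgsd(N):
            return N ** 3

        def depth_uccsd(N, eta):
            return (N - eta) ** 2 * eta

        # Example: a 20-spin-orbital, 10-electron system with k = 3.
        N, eta, k = 20, 10, 3
        print(depth_k_upccgsd(N, k), depth_uccgsd(N), depth_uccsd(N, eta))
        # -> 60 8000 1000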

    On Sharing, Memoization, and Polynomial Time (Long Version)

    We study how the adoption of an evaluation mechanism with sharing and memoization impacts the class of functions which can be computed in polynomial time. We first show that a natural cost model, in which looking up an already computed value has no cost, is indeed invariant. As a corollary, we then prove that the most general notion of ramified recurrence is sound for polynomial time, thereby settling an open problem in implicit computational complexity.
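    The flavor of such a cost model can be seen in a toy example (an illustration for this listing, not the paper's formal construction): with memoization, only fresh computations are charged, while lookups of already computed values are free.

        # Naive recursive Fibonacci performs exponentially many calls; with a
        # memo table, only the O(n) distinct subproblems are ever computed.
        # Under a cost model like the one above, only the charged steps count;
        # the lookups of shared results are free.
        charged_steps = 0
        memo = {}

        def fib(n):
            global charged_steps
            if n in memo:
                return memo[n]     # lookup of an already computed value: cost 0
            charged_steps += 1     # a fresh computation: charged
            memo[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
            return memo[n]

        fib(30)
        print(charged_steps)  # 31 charged steps, versus ~2.7 million naive calls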

    Projected Density Matrix Embedding Theory with Applications to the Two-Dimensional Hubbard Model

    Density matrix embedding theory (DMET) is a quantum embedding theory for strongly correlated systems. From a computational perspective, one bottleneck in DMET is the optimization of the correlation potential to achieve self-consistency, especially for heterogeneous systems of large size. We propose a new method, called projected density matrix embedding theory (p-DMET), which achieves self-consistency without needing to optimize a correlation potential. We demonstrate the performance of p-DMET on the two-dimensional Hubbard model. Comment: 25 pages, 8 figures
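    At a very high level, the contrast is between fitting a correlation potential at each iteration and replacing that fit with a projection. The sketch below is only schematic (the solver, the projection, and all names are assumptions for illustration; the paper's actual working equations differ):

        import numpy as np

        def pdmet_like_loop(rdm0, solve_embedding, project, tol=1e-8, max_iter=100):
            """Schematic self-consistency loop in the spirit of p-DMET: the
            correlated 1-RDM from the fragment solver is projected back to a
            mean-field-representable density matrix, so there is no correlation
            potential left to optimize."""
            rdm = rdm0
            for _ in range(max_iter):
                corr_rdm = solve_embedding(rdm)          # high-level fragment solve
                new_rdm = project(corr_rdm)              # projection replaces the potential fit
                if np.max(np.abs(new_rdm - rdm)) < tol:  # fixed point reached
                    return new_rdm
                rdm = new_rdm
            return rdm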