
    Statistical Assertions for Validating Patterns and Finding Bugs in Quantum Programs

    In support of the growing interest in quantum computing experimentation, programmers need new tools to write quantum algorithms as program code. Compared to debugging classical programs, debugging quantum programs is difficult because programmers have limited ability to probe the internal states of quantum programs; those states are difficult to interpret even when observations exist; and programmers do not yet have guidelines for what to check for when building quantum programs. In this work, we present quantum program assertions based on statistical tests on classical observations. These allow programmers to decide whether a quantum program state matches its expected value as one of three types of state: classical, superposition, or entangled. We extend an existing quantum programming language with the ability to specify quantum assertions, which our tool then checks in a quantum program simulator. We use these assertions to debug three benchmark quantum programs in factoring, search, and chemistry. We share what types of bugs are possible, and lay out a strategy for using quantum programming patterns to place assertions and prevent bugs.
    Comment: In The 46th Annual International Symposium on Computer Architecture (ISCA '19). arXiv admin note: text overlap with arXiv:1811.0544
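    The core idea, checking a hypothesis about a quantum state against classical measurement statistics, can be illustrated in a few lines. Below is a minimal, hypothetical sketch (not the paper's actual tool or API) that uses a chi-square test to assert that a qubit's measurement counts are consistent with a uniform superposition; the function name, significance threshold, and simulated shots are illustrative assumptions.

```python
# Hypothetical sketch of a statistical assertion on measurement outcomes,
# in the spirit of the paper: decide whether repeated measurements of a
# qubit are consistent with a uniform (~50/50) superposition.
import random
from collections import Counter
from scipy.stats import chisquare

def assert_uniform_superposition(samples, alpha=0.05):
    """Fail if measurement counts deviate significantly from 50/50."""
    counts = Counter(samples)
    observed = [counts.get(0, 0), counts.get(1, 0)]
    expected = [len(samples) / 2] * 2
    _, p = chisquare(observed, expected)
    assert p > alpha, f"state does not look uniformly superposed (p={p:.4f})"

# Example: 1000 shots from a (simulated) Hadamard-prepared qubit.
shots = [random.randint(0, 1) for _ in range(1000)]
assert_uniform_superposition(shots)
```

    An analogous test against a single expected outcome would flag classical states, which is the flavor of per-state-type check the abstract describes.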

    A methodology for producing reliable software, volume 1

    An investigation into the areas having an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of this investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

    Using Statistical Assertions to Guide Self-Adaptive Systems

    Self-adaptive systems need to monitor themselves to check their internal behaviour and design assumptions about runtime inputs and conditions. Such monitoring can include collecting statistics about the system itself, which can be computationally intensive (for detailed statistics) and hence time consuming, with a possible negative impact on self-adaptive response time. To mitigate this limitation, we extend the technique of in-circuit runtime assertions to cover statistical assertions in hardware. The presented designs implement several statistical operators that can be exploited by self-adaptive systems; a novel optimization is developed for reducing the number of pairwise operators from O(N) to O(log N). To illustrate the practicability and industrial relevance of our proposed approach, we evaluate our designs, chosen from a class of possible application scenarios, for their resource usage and the trade-offs between hardware and software implementations.
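    The O(N)-to-O(log N) optimization can be pictured as replacing a linear chain of pairwise operators with a balanced reduction tree. The sketch below is a software analogue of that idea (the paper's designs are in hardware); the operator and example values are illustrative.

```python
# Software analogue of combining N values with a balanced tree of pairwise
# operators: ceil(log2(N)) levels instead of a linear chain of depth O(N).
def tree_reduce(values, op):
    """Reduce `values` with binary `op`, level by level."""
    level = list(values)
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1])        # one pairwise operator
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:                       # odd element passes through
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Example: a running maximum, one statistical operator a monitor might use.
print(tree_reduce([3, 7, 1, 9, 4], max))  # -> 9
```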

    q-State Potts model metastability study using optimized GPU-based Monte Carlo algorithms

    We implemented a GPU-based parallel code to perform Monte Carlo simulations of the two-dimensional q-state Potts model. The algorithm is based on a checkerboard update scheme and assigns independent random number generators to each thread. The implementation allows simulating systems of up to ~10^9 spins with an average time per spin flip of 0.147 ns on the fastest GPU card tested, representing a speedup of up to 155x compared with an optimized serial code running on a high-end CPU. The possibility of performing high-speed simulations at sufficiently large system sizes allowed us to provide positive numerical evidence for the existence of metastability in very large systems based on Binder's criterion, namely, on the existence or not of specific heat singularities at spinodal temperatures different from the transition temperature.
    Comment: 30 pages, 7 figures. Accepted in Computer Physics Communications. Code available at: http://www.famaf.unc.edu.ar/grupos/GPGPU/Potts/CUDAPotts.htm
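    For readers unfamiliar with the update scheme: a checkerboard decomposition splits the lattice into two interleaved sublattices whose sites share no neighbors, so all sites of one color can be updated in parallel. The following is a minimal CPU sketch in NumPy of such Metropolis sweeps for the q-state Potts model; the paper's version maps each site to a GPU thread with its own random number generator, and the parameters (q, L, T) here are illustrative assumptions.

```python
# Minimal CPU sketch (NumPy) of checkerboard Metropolis sweeps for the
# 2D q-state Potts model. Illustrative parameters, not the paper's setup.
import numpy as np

q, L, T = 6, 64, 0.7
rng = np.random.default_rng(0)
spins = rng.integers(0, q, size=(L, L))
checker = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def site_energy(s):
    # Potts energy per site: -1 for each equal-spin nearest-neighbor bond.
    return -sum((s == np.roll(s, shift, axis)).astype(int)
                for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])

def sweep(spins, parity):
    mask = checker if parity == 0 else ~checker   # one sublattice at a time
    proposal = rng.integers(0, q, size=(L, L))
    trial = np.where(mask, proposal, spins)       # neighbors stay fixed
    dE = site_energy(trial) - site_energy(spins)
    accept = mask & (rng.random((L, L)) < np.exp(-dE / T))
    return np.where(accept, proposal, spins)

for _ in range(100):                              # 100 full lattice sweeps
    spins = sweep(spins, 0)
    spins = sweep(spins, 1)
```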

    A Content Analysis-Based Approach to Explore Simulation Verification and Identify Its Current Challenges

    Verification is a crucial process to facilitate the identification and removal of errors within simulations. This study explores semantic changes to the concept of simulation verification over the past six decades using a data-supported, automated content analysis approach. We collect and utilize a corpus of 4,047 peer-reviewed Modeling and Simulation (M&S) publications dealing with a wide range of studies of simulation verification from 1963 to 2015. We group the selected papers by decade of publication to provide insights and explore the corpus from four perspectives: (i) the positioning of prominent concepts across the corpus as a whole; (ii) a comparison of the prominence of verification, validation, and Verification and Validation (V&V) as separate concepts; (iii) the positioning of the concepts specifically associated with verification; and (iv) an evaluation of verification's defining characteristics within each decade. Our analysis reveals unique characterizations of verification in each decade. The insights gathered helped to identify and discuss three categories of verification challenges as avenues of future research, awareness, and understanding for researchers, students, and practitioners. These categories include conveying confidence and maintaining ease of use; techniques' coverage abilities for handling increasing simulation complexities; and new ways to provide error feedback to model users.
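    To make the decade-bucketed analysis concrete, here is a toy sketch of the general approach (not the study's actual pipeline or corpus): counting the prominence of selected concept terms per decade of publication. The mini-corpus and term list are invented for demonstration.

```python
# Toy sketch of decade-bucketed term prominence for a content analysis;
# the papers and term list below are illustrative, not the study's data.
import re
from collections import Counter

papers = [
    (1968, "program verification via proof of correctness"),
    (1994, "simulation validation and verification techniques"),
    (2012, "automated verification and error feedback for model users"),
]
terms = {"verification", "validation", "error"}

by_decade = {}
for year, abstract in papers:
    decade = (year // 10) * 10
    tokens = re.findall(r"[a-z]+", abstract.lower())
    by_decade.setdefault(decade, Counter()).update(
        t for t in tokens if t in terms)

for decade in sorted(by_decade):
    print(decade, dict(by_decade[decade]))
```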

    BugDoc: Algorithms to Debug Computational Processes

    Data analysis for scientific experiments and enterprises, large-scale simulations, and machine learning tasks all entail the use of complex computational pipelines to reach quantitative and qualitative conclusions. If some of the activities in a pipeline produce erroneous outputs, the pipeline may fail to execute or produce incorrect results. Inferring the root cause(s) of such failures is challenging, usually requiring time and much human thought, while still being error-prone. We propose a new approach that makes use of iteration and provenance to automatically infer the root causes and derive succinct explanations of failures. Through a detailed experimental evaluation, we assess the cost, precision, and recall of our approach compared to the state of the art. Our experimental data and processing software are available for use, reproducibility, and enhancement.
    Comment: To appear in SIGMOD 2020. arXiv admin note: text overlap with arXiv:2002.0464
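    As a rough intuition for iteration-based root-cause inference (a deliberately simplified sketch, not BugDoc's actual algorithms), one can rerun a pipeline over a grid of parameter configurations, record which runs fail, and report parameter-value pairs that occur in every failing run and in no succeeding one. All names and the toy pipeline below are hypothetical.

```python
# Simplified sketch of root-cause inference over pipeline configurations
# (illustrative only; BugDoc's actual algorithms are more sophisticated).
from itertools import product

def find_root_causes(pipeline, grid):
    """grid: dict mapping parameter name -> list of candidate values."""
    failing, passing = [], []
    for combo in product(*grid.values()):
        config = dict(zip(grid.keys(), combo))
        (passing if pipeline(config) else failing).append(config)
    # Candidate causes: parameter-value pairs common to all failing runs...
    causes = set(failing[0].items()) if failing else set()
    for config in failing[1:]:
        causes &= set(config.items())
    # ...that never appear in a succeeding run.
    for config in passing:
        causes -= set(config.items())
    return causes

# Toy pipeline that fails whenever solver="fast" is chosen.
def pipeline(cfg):
    return cfg["solver"] != "fast"

grid = {"solver": ["fast", "robust"], "tol": [1e-2, 1e-6]}
print(find_root_causes(pipeline, grid))  # -> {('solver', 'fast')}
```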

    A quantitative analysis of U.S. and Japanese software-engineering practice and performance

    "For presentation at TIMS XXIX, July 23-26, 1989, Osaka, Japan." "June 1989."Includes bibliographical references.Michael A. Cusumano and Chris F. Kemerer