
    Trusting Computations: a Mechanized Proof from Partial Differential Equations to Actual Program

    Computer programs may go wrong due to exceptional behaviors, out-of-bound array accesses, or simply coding errors. Thus, they cannot be blindly trusted. Scientific computing programs are no exception in this respect, and even bring specific accuracy issues due to their massive use of floating-point computations. Yet, it is uncommon to guarantee their correctness. Indeed, we had to extend existing methods and tools for proving the correct behavior of programs in order to verify an existing numerical analysis program. This C program implements the second-order centered finite difference explicit scheme for solving the 1D wave equation. In fact, we have gone much further, as we have mechanically verified the convergence of the numerical scheme in order to obtain a complete formal proof covering all aspects from partial differential equations to actual numerical results. To the best of our knowledge, this is the first time such a comprehensive proof has been achieved. Comment: N° RR-8197 (2012). arXiv admin note: text overlap with arXiv:1112.179
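    The abstract names the scheme the C program implements but not its update formula. As a point of reference, here is a minimal Python sketch of the second-order centered explicit scheme for the 1D wave equation; it is not the verified C program, and the grid sizes, initial data, and Dirichlet boundary treatment are assumptions made for illustration.

```python
# Illustrative sketch (not the verified C code): second-order centered
# finite difference explicit scheme for the 1D wave equation
#   u_tt = c^2 * u_xx  on [0, 1] with homogeneous Dirichlet boundaries.
# Grid sizes, initial data, and boundary handling are assumptions here.
import numpy as np

def wave_1d(c, dx, dt, nsteps, u0, u1):
    """Advance the scheme nsteps times from the first two time layers u0, u1."""
    cfl = (c * dt / dx) ** 2          # squared Courant number; stability needs c*dt/dx <= 1
    prev, curr = u0.copy(), u1.copy()
    for _ in range(nsteps):
        nxt = np.empty_like(curr)
        # centered second differences in space, leapfrog in time
        nxt[1:-1] = (2.0 * curr[1:-1] - prev[1:-1]
                     + cfl * (curr[2:] - 2.0 * curr[1:-1] + curr[:-2]))
        nxt[0] = nxt[-1] = 0.0        # assumed Dirichlet boundary conditions
        prev, curr = curr, nxt
    return curr

# Example: a plucked-string initial profile on 101 grid points
x = np.linspace(0.0, 1.0, 101)
u0 = np.sin(np.pi * x)                # initial displacement
u1 = u0.copy()                        # zero initial velocity (first-order start)
print(wave_1d(c=1.0, dx=x[1] - x[0], dt=0.005, nsteps=200, u0=u0, u1=u1)[:5])
```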

    The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics, the global nature of entanglement and its topological properties, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience. Comment: 45 pages, 4 figures; an enhanced version of the SIGACT guest column from Volume 44 Issue 2, June 201
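    The abstract poses the question without stating it formally. One commonly cited formulation of the conjecture, via the local Hamiltonian problem, is sketched below; the constant and normalization conventions vary across the literature, so this should be read as a reference statement rather than the survey's exact phrasing.

```latex
% One common formulation of the quantum PCP conjecture (conventions vary):
% approximating ground-state energies to constant relative precision is QMA-hard.
\noindent\textbf{Quantum PCP conjecture (local Hamiltonian form).}
There exists a constant $\gamma > 0$ such that the following promise problem is
\textsf{QMA}-hard: given a $k$-local Hamiltonian $H = \sum_{i=1}^{m} H_i$ on $n$ qubits
with $\lVert H_i \rVert \le 1$ for every $i$, decide whether the ground-state energy satisfies
\[
  \lambda_{\min}(H) \le a
  \qquad\text{or}\qquad
  \lambda_{\min}(H) \ge a + \gamma\, m ,
\]
promised that one of the two cases holds.
```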

    Quantum Proofs

    Quantum information and computation provide a fascinating twist on the notion of proofs in computational complexity theory. For instance, one may consider a quantum computational analogue of the complexity class NP, known as QMA, in which a quantum state plays the role of a proof (also called a certificate or witness) and is checked by a polynomial-time quantum computation. For some problems, the fact that a quantum proof state could be a superposition over exponentially many classical states appears to offer computational advantages over classical proof strings. In the interactive proof system setting, one may consider a verifier and one or more provers that exchange and process quantum information rather than classical information during an interaction for a given input string, giving rise to quantum complexity classes such as QIP, QSZK, and QMIP* that represent natural quantum analogues of IP, SZK, and MIP. While quantum interactive proof systems inherit some properties from their classical counterparts, they also possess distinct and uniquely quantum features that lead to an interesting landscape of complexity classes based on variants of this model. In this survey we provide an overview of many of the known results concerning quantum proofs, computational models based on this concept, and properties of the complexity classes they define. In particular, we discuss non-interactive proofs and the complexity class QMA, single-prover quantum interactive proof systems and the complexity class QIP, statistical zero-knowledge quantum interactive proof systems and the complexity class QSZK, and multiprover interactive proof systems and the complexity classes QMIP, QMIP*, and MIP*. Comment: Survey published by NOW Publishers
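    Since QMA is the central object here, a standard definition is sketched below for reference. The completeness and soundness bounds 2/3 and 1/3 are the conventional choice (they can be amplified), and the survey may state the class with different parameters.

```latex
% Standard textbook-style definition of QMA (error bounds 2/3 and 1/3 are
% conventional and can be amplified); witness length is polynomial in |x|.
\noindent\textbf{Definition (QMA).}
A promise problem $A = (A_{\mathrm{yes}}, A_{\mathrm{no}})$ is in \textsf{QMA} if there exist a
polynomial-time quantum verifier $V$ and a polynomial $p$ such that:
\begin{itemize}
  \item \emph{Completeness:} if $x \in A_{\mathrm{yes}}$, then there exists a witness state
        $|\psi\rangle$ on $p(|x|)$ qubits with $\Pr[V(x, |\psi\rangle) \text{ accepts}] \ge 2/3$;
  \item \emph{Soundness:} if $x \in A_{\mathrm{no}}$, then for every state $|\psi\rangle$ on
        $p(|x|)$ qubits, $\Pr[V(x, |\psi\rangle) \text{ accepts}] \le 1/3$.
\end{itemize}
```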

    Independent Configurable Architecture for Reliable Operation of Unmanned Systems with Distributed Onboard Services

    This paper presents the development of ICAROUS-2 (Independent Configurable Architecture for Reliable Operation of Unmanned Systems with Distributed Onboard Services), the second generation of a software architecture that integrates several algorithms as distributed onboard services to enable robust autonomous UAS applications. In particular, the ICAROUS architecture defines a framework to perform detect and avoid, geofencing, path monitoring, path planning, and autonomous decision making to ensure safety and mission progress. Most of the core algorithms implemented in ICAROUS are formally verified using an interactive theorem prover. These algorithms are composed together using a plan execution engine whose operational semantics is formally specified. A description of the integrated architecture, the services currently available, and flight test results highlighting the capabilities of ICAROUS is presented
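    To make the service-composition idea concrete, here is a hypothetical Python sketch of a decision loop polling independent onboard services (geofencing and detect and avoid) and acting on the most urgent advisory. The names, interfaces, and thresholds are invented for illustration; this is not the ICAROUS code base or its plan execution engine.

```python
# Hypothetical sketch of the service-composition idea described in the abstract:
# independent onboard services (geofencing, detect-and-avoid) are polled by a
# small decision loop. Names and interfaces are illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Advisory:
    service: str       # which service raised the advisory
    maneuver: str      # suggested resolution, e.g. "return inside geofence"
    priority: int      # higher value = more urgent

# A "service" here is just a callable returning an optional advisory.
Service = Callable[[dict], Optional[Advisory]]

def geofence_service(state: dict) -> Optional[Advisory]:
    if not state["inside_geofence"]:
        return Advisory("geofence", "return inside geofence", priority=2)
    return None

def detect_and_avoid_service(state: dict) -> Optional[Advisory]:
    if state["intruder_range_m"] < 500:                     # assumed threshold
        return Advisory("detect_and_avoid", "climb to avoid traffic", priority=3)
    return None

def decide(state: dict, services: List[Service]) -> str:
    """Collect advisories from all services; act on the most urgent one."""
    advisories = [a for s in services if (a := s(state)) is not None]
    if not advisories:
        return "continue mission plan"
    return max(advisories, key=lambda a: a.priority).maneuver

state = {"inside_geofence": False, "intruder_range_m": 350}
print(decide(state, [geofence_service, detect_and_avoid_service]))
# -> "climb to avoid traffic"
```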

    A System for Deduction-based Formal Verification of Workflow-oriented Software Models

    The work concerns formal verification of workflow-oriented software models using a deductive approach. The formal correctness of a model's behaviour is considered. Manually building logical specifications, here understood as sets of temporal logic formulas, is a significant obstacle for an inexperienced user applying the deductive approach. A system, and its architecture, for the deduction-based verification of workflow-oriented models is proposed. The inference process is based on the semantic tableaux method, which has some advantages over traditional deduction strategies. An algorithm for the automatic generation of logical specifications is proposed. The generation procedure is based on predefined workflow patterns for BPMN, a standard and dominant notation for the modeling of business processes. The main idea of the approach is to consider patterns, defined in terms of temporal logic, as a kind of logical primitive that enables the transformation of models into temporal logic formulas constituting a logical specification. Automation of the generation process is crucial for bridging the gap between the intuitiveness of deductive reasoning and the difficulty of its practical application when logical specifications are built manually. This approach goes some way towards supporting, and hopefully enhancing our understanding of, the deduction-based formal verification of workflow-oriented models. Comment: International Journal of Applied Mathematics and Computer Science
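    A minimal sketch of the pattern-as-logical-primitive idea, assuming a toy representation: each workflow pattern expands to LTL formulas (written as strings) over its activities, and a specification is the union of the generated formulas. The concrete formulas and pattern names below are illustrative placeholders, not the patterns defined in the paper.

```python
# Illustrative sketch: predefined workflow patterns as logical primitives that
# expand to LTL formulas over their activities. The formulas below are
# hypothetical placeholders, not the paper's pattern definitions.
from typing import List

def seq(a: str, b: str) -> List[str]:
    # "a is eventually followed by b, and b does not occur before a"
    return [f"G({a} -> F({b}))", f"(!{b}) U {a}"]

def xor_split(cond: str, a: str, b: str) -> List[str]:
    # exclusive choice: exactly one branch is taken, depending on cond
    return [f"G(({cond} -> F({a})) & (!{cond} -> F({b})))",
            f"G(!({a} & {b}))"]

def specification(patterns: List[List[str]]) -> List[str]:
    """A logical specification is just the union of all generated formulas."""
    return [formula for pattern in patterns for formula in pattern]

# A toy workflow: receive an order, check it, then either approve or reject it.
spec = specification([
    seq("receive_order", "check_order"),
    xor_split("order_valid", "approve_order", "reject_order"),
])
for formula in spec:
    print(formula)
```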

    Automated UML models merging for web services testing

    This paper presents a method for merging UML models which takes place in a quality evaluation framework for Web Services (WS). This framework, called iTac-QoS, is an extended UDDI server (a yellow-pages system dedicated to WS) that uses model-based testing to assess quality. WS vendors have to create a UML model of their product, and our framework extracts tests from it. Depending on the results of the test execution, a mark is given to the WS. This mark gives customers an idea of the quality of the WS they find on our UDDI server. Until now, our framework was limited to WS that did not use other WS, since it is impossible for vendors to create a good model of a foreign product. Our method for model merging solves this problem: each vendor produces models of its own product, and we automatically merge the different models. The model resulting from this merging represents the composition of the different WS. For each type of diagram present in the models (class, instance, or statechart diagram), a method is proposed to produce a unique model. In addition, a solution is proposed to merge all OCL code in the class modeling the WS under test. Unfortunately, this process introduces inconsistencies into the resulting model that falsify the results of the subsequent test generation phase. We thus propose to detect such inconsistencies in order to distinguish inconsistent from unreachable test targets
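    A minimal sketch of the merging step, assuming a toy representation of class diagrams as dictionaries mapping classes to attribute types: the merge takes the union of classes and attributes and records conflicting attribute types as inconsistencies for a later phase to act on. This is illustrative only and does not reflect the paper's actual UML and OCL machinery.

```python
# Hypothetical sketch of the merge step for class diagrams: each vendor model
# is a mapping class -> {attribute: type}. Merging takes the union of classes
# and attributes and records conflicting attribute types as inconsistencies,
# which a later phase could use to prune inconsistent test targets.
def merge_class_models(model_a, model_b):
    merged, inconsistencies = {}, []
    for cls in set(model_a) | set(model_b):
        attrs_a = model_a.get(cls, {})
        attrs_b = model_b.get(cls, {})
        merged[cls] = {}
        for attr in set(attrs_a) | set(attrs_b):
            ta, tb = attrs_a.get(attr), attrs_b.get(attr)
            if ta and tb and ta != tb:
                inconsistencies.append((cls, attr, ta, tb))
            merged[cls][attr] = ta or tb
    return merged, inconsistencies

vendor_a = {"Order": {"id": "int", "total": "float"}}
vendor_b = {"Order": {"id": "String"}, "Payment": {"amount": "float"}}
merged, conflicts = merge_class_models(vendor_a, vendor_b)
print(merged)     # union of classes and attributes
print(conflicts)  # [('Order', 'id', 'int', 'String')] flags an inconsistency
```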

    A framework for certification of large-scale component-based parallel computing systems in a cloud computing platform for HPC services

    This paper addresses the verification of software components in the context of their orchestration to build cloud-based scientific applications with high performance computing requirements. In such a scenario, components are often supplied by different sources, and their cooperation relies on assumptions of conformity with their published behavioral interfaces. Therefore, a faulty or ill-designed component, failing to obey the envisaged behavioral requirements, may have dramatic consequences in practice. Certifier components, introduced in this paper, implement a verification-as-a-service framework and are able to access the implementation of other components and verify their consistency with respect to a number of functional, safety, and liveness requirements relevant to a specific application or a class of them. It is shown how certifier components can be smoothly integrated into HPC Shelf, a cloud-based platform for high performance computing in which different sorts of users can design, deploy, and execute scientific applications. SmartEGOV: Harnessing EGOV for Smart Governance (Foundations, methods, Tools) / NORTE-01-0145-FEDER-000037, supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF)
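    A minimal sketch of the certifier-component idea, assuming toy interfaces: a certifier receives a component together with named requirement checks (e.g., safety and liveness conditions over its published contract) and returns a verdict per requirement. The component model and back-end verifiers of HPC Shelf are not represented here.

```python
# Hypothetical sketch of a certifier component: it inspects another component's
# published contract and checks a set of named requirements (functional, safety,
# liveness). Interfaces are illustrative; HPC Shelf's actual component model and
# verification back-ends are not shown.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Component:
    name: str
    contract: Dict[str, object]    # published behavioral interface (assumed shape)

# A requirement is a named predicate over the component's contract.
Requirement = Callable[[Component], bool]

def certify(component: Component, requirements: Dict[str, Requirement]) -> Dict[str, bool]:
    """Run every requirement check against the component; return a verdict map."""
    return {name: check(component) for name, check in requirements.items()}

solver = Component("linear_solver", contract={"deadlock_free": True, "max_iterations": 1000})
verdict = certify(solver, {
    "safety: never deadlocks": lambda c: bool(c.contract.get("deadlock_free")),
    "liveness: iteration bound declared": lambda c: "max_iterations" in c.contract,
})
print(verdict)
```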

    A component-based framework for certification of components in a cloud of HPC services

    HPC Shelf is a proposal of a cloud computing platform to provide component-oriented services for High Performance Computing (HPC) applications. This paper presents a Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf. Certification is aimed at providing higher confidence that components of parallel computing systems of HPC Shelf behave as expected according to one or more requirements expressed in their contracts. To this end, new abstractions are introduced, starting with certifier components. They are designed to inspect other components and verify them for different types of functional, non-functional and behavioral requirements. The certification framework is naturally based on parallel computing techniques to speed up verification tasks. NORTE-01-0145-FEDER-000037
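    To illustrate the remark that verification tasks are sped up with parallel computing techniques, here is a hedged Python sketch that dispatches independent requirement checks to a process pool; the task list and the check function are stand-ins for the verification back-ends a real certifier would invoke.

```python
# Illustrative sketch of the parallel-verification point in the abstract:
# independent requirement checks are dispatched concurrently, e.g. one worker
# per property or per component. The tasks below are stand-ins; actual
# certifiers would invoke real verification back-ends.
from concurrent.futures import ProcessPoolExecutor  # or ThreadPoolExecutor
import time

def check_property(task):
    component, prop = task
    time.sleep(0.1)                      # placeholder for an expensive verification run
    return component, prop, True         # pretend the property holds

tasks = [("solver", "deadlock freedom"),
         ("solver", "termination"),
         ("mesh_partitioner", "contract conformance")]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as pool:
        for component, prop, ok in pool.map(check_property, tasks):
            print(f"{component}: {prop} -> {'verified' if ok else 'violated'}")
```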