
    Soft Contract Verification

    Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts imposes significant overhead and delays the discovery of faulty components to run-time. To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values (including unknown behavioral values), and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove that the symbolic execution soundly approximates the dynamic semantics and that verified programs can't be blamed. The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, which are all idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools, including type systems, flow analyzers, and model checkers, on their own benchmarks. Comment: ICFP '14, September 1-6, 2014, Gothenburg, Sweden.
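
    For readers unfamiliar with behavioral contracts, the sketch below shows a run-time higher-order contract of the kind the paper verifies statically. It is written in Python purely for illustration; the names flat, arrow, and the blame labels are ours, not the paper's, which targets an untyped Racket-like language with first-class contracts.

        # Illustrative only: a run-time behavioral contract on a higher-order
        # function, of the kind soft contract verification aims to discharge
        # statically. All names here are ours, not the paper's.

        def flat(pred, label):
            """First-order contract: pass the value through or blame `label`."""
            def check(value):
                if not pred(value):
                    raise AssertionError(f"contract violation, blame: {label}")
                return value
            return check

        def arrow(dom, rng):
            """Function contract: monitor arguments with `dom`, results with `rng`."""
            def wrap(f):
                return lambda x: rng(f(dom(x)))
            return wrap

        positive = lambda label: flat(lambda n: isinstance(n, int) and n > 0, label)

        # a (positive -> positive) contract imposed on a component's exported function
        double = arrow(positive("caller"), positive("double"))(lambda n: 2 * n)

        print(double(3))    # 6
        # double(-1)        # raises, blaming "caller"

    A soft verifier would try to prove statically that double can never be blamed, leaving a residual run-time check only where the proof does not go through.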

    Method for Statically Checking an Object-oriented Computer Program Module

    A method for statically checking an object-oriented computer program module includes the step of identifying objects within a computer program module, at least one of the objects having a plurality of references thereto, possibly from multiple clients. A discipline of permissions is imposed on the objects identified within the computer program module. The permissions enable tracking, from among a discrete set of changeable states, the subset of states each object might be in. A determination is made regarding whether the imposed permissions are violated by a potential reference to any of the identified objects. The results of the determination are output to a user.
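
    To make the permission discipline concrete, the following is a small hypothetical sketch (ours, not the patented method) of the core bookkeeping: each object carries the subset of a discrete state set it might be in, and a reference that is not permitted in all of those states is reported as a potential violation.

        # Hypothetical sketch, not the patented method: track the subset of a
        # discrete state set each object might be in, and report references
        # that are not permitted in every one of those states.

        ALL_STATES = {"open", "closed"}

        class AbstractObject:
            def __init__(self, possible=ALL_STATES):
                self.possible = set(possible)   # states the object might be in

        def apply(obj, required, result, description):
            """Permit the operation only if every possible state is allowed."""
            disallowed = obj.possible - required
            if disallowed:
                print(f"potential violation at {description}: object may be in {disallowed}")
            obj.possible = {result}             # afterwards the state is known

        f = AbstractObject()                                  # shared by multiple clients
        apply(f, {"closed"}, "open", "client A: f.open()")    # may already be open: reported
        apply(f, {"open"}, "open", "client B: f.read()")      # fine: f is known to be open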

    Generic refinements for behavioral specifications

    This thesis investigates the properties of generic refinements of behavioral specifications. At the base of this investigation stands the view from algebraic specification that abstract data types can be modeled as algebras. A specification of a data type is formed from a syntactic part, i.e. a signature detailing the interface of the data type, and a semantic part, i.e. a class of algebras (called its models) that contains the valid implementations of that data type. Typically, the class of algebras that constitutes the semantics of a specification is defined as the class of algebras that satisfy some given set of axioms. The behavioral aspect of a specification comes from relaxing the requirements imposed by the axioms, i.e. by allowing in the semantics of a specification not only the algebras that literally satisfy the given axioms, but also those algebras that appear to behave according to those axioms. Several frameworks have been developed to express adequate notions of what it means to be a behavioral model of a set of axioms, and our choice as the setting for this thesis is Bidoit and Hennicker’s Constructor-based Observational Logic, abbreviated COL. Using specifications that rely on the behavioral aspects defined by COL, we study the properties of generic refinements between specifications. Refinement is a relation between specifications. The refinement of a target specification by a source specification is given by a function that constructs models of the target specification from the models of the source specification. These functions are called constructions, and the source and target specifications that they relate are called the context of the refinement. The theory of refinements between algebraic specifications, with or without the behavioral aspect, has been well studied in the literature. Our analysis starts from those studies and adapts them to COL, a relatively new framework for which refinement has only been studied briefly. The main part of this thesis is the analysis of generic refinements. Generic refinements are represented by constructions that can be used in various contexts, not just in the context of their definition. These constructions provide the basis for modular refinements, i.e. one can use a locally defined construction in a global context in order to refine just a part of a source specification. The ability to use a refinement outside its original context imposes additional requirements on the construction that represents it. An implementer writing such a construction must not use details of the source models that can be contradicted by potential global context requirements. This means, roughly speaking, that they must use only the information available in the source signature, together with any a priori assumptions made about the contexts of use. We look at the basic case of generic refinements that are reusable in every global context, and then we treat a couple of variations, i.e. generic refinements for which an a priori assumption is made about the nature of their usage contexts. In each of these cases we follow the same pattern of investigation. First we characterize the constructions that ensure reusability by means of preservation of relations, and then, in most cases, we show that such constructions must be definable in terms of their source signature. Throughout the thesis we use an informal analogy between generic (i.e. polymorphic) functions that appear in second-order lambda calculus and the generic refinements that we are studying. This connection enables us to describe properties of generic refinements that correspond to the properties of polymorphic functions inferred from their types, named “theorems for free” by Wadler. The definability results, the connection between the assumptions made about the usage contexts and the characterizing relations, and the “theorems for free” for behavioral specifications constitute the main contributions of this thesis.
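
    To fix intuitions, the display below sketches the shape of the notions involved; the notation is ours and elides COL's observational machinery.

        % Notation ours; COL's observational details are elided.
        % A refinement of a target specification SP' by a source SP is a
        % construction on model classes:
        \[ \kappa : \mathrm{Mod}(SP) \longrightarrow \mathrm{Mod}(SP') \]
        % Genericity (reusability in arbitrary contexts) is characterized by
        % preservation of relations, in the spirit of Wadler's free theorems:
        \[ M \;R\; N \;\Longrightarrow\; \kappa(M) \;\widehat{R}\; \kappa(N)
           \qquad \text{for every admissible relation } R \text{ on source models.} \]

    Definability in terms of the source signature then plays a role for generic refinements analogous to the one parametricity plays for polymorphic functions.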

    Refinement via interpretation

    Traditional notions of refinement of algebraic specifications, based on signature morphisms, are often too rigid to capture a number of relevant transformations in the context of software design, reuse and adaptation. This paper proposes an alternative notion of specification refinement, building on recent work on logic interpretation. The concept is discussed, its theory is partially developed, and its use is illustrated through a number of examples. This research was supported by FCT (the Portuguese Foundation for Science and Technology) under contracts PTDC/EIA/73252/2006, at Minho University, as well as PTDC/MAT/68723/2006 and the Unidade de Investigação Matemática e Aplicações of the University of Aveiro.
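
    As a rough sketch of the shape of the proposed relation (notation ours; the paper's technical development differs in detail), refinement along a logic interpretation replaces the usual reduct along a signature morphism.

        % Notation ours, as a sketch only. With \tau an interpretation of the
        % logic of SP into the logic of SP', SP is refined by SP' when every
        % model of SP' gives back, via \tau, a model of SP:
        \[ SP \;\rightsquigarrow_{\tau}\; SP'
           \quad\text{iff}\quad
           \forall M' \in \mathrm{Mod}(SP').\;\; M'|_{\tau} \in \mathrm{Mod}(SP). \]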

    A type-theoretic framework for certified model transformations

    "We present a framework based on the Calculus of Inductive Constructions (CIC) and its associated tool the Coq proof assistant to allow certification of model transformations in the context of Model-Driven Engineering (MDE). The approached is based on a semi-automatic translation process from metamodels, models and transformations of the MDE technical space into types, propositions and functions of the CIC technical space. We describe this translation and illustrate its use in a standard case study." [Abstract

    Type-driven Synthesis of Evolving Data Models

    Modern commercial software is often framed under the umbrella of data-centric applications. Data-centric applications define data as the main and permanent asset. These applications use a single data model for application functionality, data management, and analytical activities, which is built before the applications themselves. Moreover, since applications are temporary, in contrast to data, there is a need to continuously evolve and change the data schema to accommodate new functionality. In this sense, the continuously evolving (rich) feature set that is expected of state-of-the-art applications is intrinsically bound not only by the amount of available data but also by its structure, its internal dependencies, and the ability to transparently and uniformly grow and evolve data representations and their properties on the fly. The GOLEM project aims to produce new methods of program automation integrated into the development of data-centric applications in low-code frameworks. In this context, one of the key targets for automation is the data layer itself, encompassing the data layout and its integrity constraints, as well as validation and access control rules. The aim of this dissertation, which is integrated in GOLEM, is to develop a synthesis framework that, based on high-level specifications, correctly defines and evolves a rich data layer component by means of high-level operations. The construction of the framework was approached by defining a specification language to express richly-typed specifications, a target language that is the goal of synthesis, and a type-directed synthesis procedure based on proof-search concepts. The range of real database operations the framework is able to synthesize is demonstrated through a case study. In a component-based synthesis style, with an extensible library of base operations on database tables (specified using the target language) in context, the case study shows that the synthesis framework is capable of expressing and solving a wide variety of data schema creation and evolution problems.
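
    For a flavour of component-based synthesis over a library of schema operations, the following sketch is entirely hypothetical: the dissertation's procedure is type-directed and based on proof search, whereas this naive enumeration merely illustrates searching a component library for a sequence of operations that takes a source schema to one satisfying a goal.

        # Hypothetical sketch only: the dissertation's synthesis is type-directed
        # and based on proof search; this naive enumeration merely illustrates
        # searching a library of schema operations for a sequence reaching a goal.
        from collections import deque

        def add_column(table, col):
            """Library component: add a column to a table of the schema."""
            def op(schema):
                new = {t: set(cs) for t, cs in schema.items()}
                new.setdefault(table, set()).add(col)
                return new
            return (f"add_column({table}, {col})", op)

        LIBRARY = [add_column("users", "email"), add_column("users", "age")]

        def synthesize(source, goal, max_depth=3):
            """Breadth-first search for an operation sequence whose result satisfies `goal`."""
            queue = deque([(source, [])])
            while queue:
                schema, plan = queue.popleft()
                if goal(schema):
                    return plan
                if len(plan) < max_depth:
                    for name, op in LIBRARY:
                        queue.append((op(schema), plan + [name]))
            return None

        start = {"users": {"id"}}
        print(synthesize(start, lambda s: {"id", "email"} <= s.get("users", set())))
        # ['add_column(users, email)']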

    Proceedings of the First NASA Formal Methods Symposium

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000

    Integrating formal methods into medical software development : the ASM approach

    Medical devices are safety-critical systems since their malfunctions can seriously compromise human safety. Correct operation of a medical device depends upon the controlling software, whose development should adhere to certification standards. However, these standards provide general descriptions of common software engineering activities without any indication regarding particular methods and techniques to assure safety and reliability. This paper discusses how to integrate the use of a formal approach into the current standards for medical software development. The rigorous process is based on the Abstract State Machine (ASM) formal method, its refinement principle, and the model analysis approaches the method supports. The hemodialysis machine case study is used to show how the ASM-based design process covers most of the engineering activities required by the related standards, and provides rigorous approaches for medical software validation and verification.
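
    For readers unfamiliar with ASMs, the toy sketch below (ours, unrelated to the paper's hemodialysis model) shows the basic execution idea: a machine state is updated by firing guarded rules, and refinement adds detail to states and rules while preserving the abstract behaviour.

        # Toy illustration of an Abstract State Machine step (ours, not the
        # paper's hemodialysis model): fire every rule whose guard holds and
        # apply the resulting updates to the state together.

        def step(state, rules):
            updates = {}
            for guard, update in rules:
                if guard(state):
                    updates.update(update(state))
            state.update(updates)
            return state

        # Abstract safety rule: the pump may only start once the circuit is primed.
        rules = [
            (lambda s: s["primed"] and not s["pump_on"], lambda s: {"pump_on": True}),
        ]

        state = {"primed": True, "pump_on": False}
        print(step(state, rules))   # {'primed': True, 'pump_on': True}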

    Automated and foundational verification of low-level programs

    Formal verification is a promising technique to ensure the reliability of low-level programs like operating systems and hypervisors, since it can show the absence of whole classes of bugs and prevent critical vulnerabilities. However, to realize the full potential of formal verification for real-world low-level programs one has to overcome several challenges, including: (1) dealing with the complexities of realistic models of real-world programming languages; (2) ensuring the trustworthiness of the verification, ideally by providing foundational proofs (i.e., proofs that can be checked by a general-purpose proof assistant); and (3) minimizing the manual effort required for verification by providing a high degree of automation. This dissertation presents multiple projects that advance formal verification along these three axes: RefinedC provides the first approach for verifying C code that combines foundational proofs with a high degree of automation via a novel refinement and ownership type system. Islaris shows how to scale verification of assembly code to realistic models of modern instruction set architectures, in particular Armv8-A and RISC-V. DimSum develops a decentralized approach for reasoning about programs that consist of components written in multiple different languages (e.g., assembly and C), as is common for low-level programs. RefinedC and Islaris rest on Lithium, a novel proof engine for separation logic that combines automation with foundational proofs. This research was supported in part by a Google PhD Fellowship, in part by awards from Android Security's ASPIRE program and from Google Research, and in part by a European Research Council (ERC) Consolidator Grant for the project "RustBelt", funded under the European Union’s Horizon 2020 Framework Programme (grant agreement no. 683289).
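
    As a flavour of the specifications such tools establish, the following is a generic separation-logic triple for a C assignment through a pointer; the notation is ours, and RefinedC's actual annotation language and Lithium's goal format look different.

        % Generic separation-logic triple, notation ours (not RefinedC syntax):
        % ownership of the location p is required and its contents are updated.
        \[ \{\, p \mapsto v \,\}\;\; \texttt{*p = n;}\;\; \{\, p \mapsto n \,\} \]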