10 research outputs found

    The End of History? Using a Proof Assistant to Replace Language Design with Library Design

    Functionality of software systems has exploded in part because of advances in programming-language support for packaging reusable functionality as libraries. Developers benefit from the uniformity that comes of exposing many interfaces in the same language, as opposed to stringing together hodgepodges of command-line tools. Domain-specific languages may be viewed as an evolution of the power of reusable interfaces, when those interfaces become so flexible as to deserve to be called programming languages. However, common approaches to domain-specific languages give up many of the hard-won advantages of library-building in a rich common language, and even the traditional approach poses significant challenges in learning new APIs. We suggest that instead of continuing to develop new domain-specific languages, our community should embrace library-based ecosystems within very expressive languages that mix programming and theorem proving. Our prototype framework Fiat, a library for the Coq proof assistant, turns languages into easily comprehensible libraries via the key idea of modularizing functionality and performance away from each other, the former via macros that desugar into higher-order logic and the latter via optimization scripts that derive efficient code from logical programs.
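
    The separation Fiat advocates, a declarative statement of functionality kept apart from the derivation of efficient code, can be pictured outside Coq. The Python sketch below is only a loose analogue of that idea (the function names are invented and nothing here uses Fiat's actual API): a logical specification is stated as a predicate relating inputs to admissible outputs, and a separately written efficient implementation is checked against it.

        # Minimal Python analogue (not Fiat's Coq API) of keeping the logical
        # specification of a feature separate from the efficient code derived for it.

        def spec_dedup(xs, ys):
            # Specification: ys contains exactly the elements of xs, without duplicates.
            return set(ys) == set(xs) and len(ys) == len(set(ys))

        def impl_dedup(xs):
            # "Derived" implementation: a single linear pass with a seen-set.
            seen, out = set(), []
            for x in xs:
                if x not in seen:
                    seen.add(x)
                    out.append(x)
            return out

        # Lightweight refinement check: on sample inputs, the implementation's
        # output must be one of the behaviours the specification admits.
        for sample in ([], [1, 1, 2], [3, 2, 3, 1]):
            assert spec_dedup(sample, impl_dedup(sample))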

    Program synthesis from polymorphic refinement types

    We present a method for synthesizing recursive functions that provably satisfy a given specification in the form of a polymorphic refinement type. We observe that such specifications are particularly suitable for program synthesis for two reasons. First, they offer a unique combination of expressive power and decidability, which enables automatic verification (and hence synthesis) of nontrivial programs. Second, a type-based specification for a program can often be effectively decomposed into independent specifications for its components, causing the synthesizer to consider fewer component combinations and leading to a combinatorial reduction in the size of the search space. At the core of our synthesis procedure is a new algorithm for refinement type checking, which supports specification decomposition. We have evaluated our prototype implementation on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. The tool was able to synthesize more complex programs than those reported in prior work (several sorting algorithms and operations on balanced search trees), as well as most of the benchmarks tackled by existing synthesizers, often starting from a more concise and intuitive user input.
    National Science Foundation (U.S.) (Grant CCF-1438969); National Science Foundation (U.S.) (Grant CCF-1139056); United States. Defense Advanced Research Projects Agency (Grant FA8750-14-2-0242)
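
    The tool's specifications are refinement types over a Haskell-like language; the Python sketch below only approximates their flavour with executable contracts (the names and the type written in the comment are illustrative, not the tool's actual syntax). It also hints at why decomposition helps: insertion_sort depends only on insert's specification, not its body, so the two pieces can be searched for independently.

        # Python contracts approximating a refinement-type specification
        # (illustrative only; the real specs are Haskell-like types, not Python).

        def is_sorted(xs):
            return all(a <= b for a, b in zip(xs, xs[1:]))

        def insert(x, xs):
            # Roughly: insert :: x:Int -> xs:{List Int | sorted xs}
            #                 -> {ys:List Int | sorted ys && elems ys = elems xs + {x}}
            assert is_sorted(xs)                    # input refinement
            ys = xs[:]
            i = 0
            while i < len(ys) and ys[i] < x:
                i += 1
            ys.insert(i, x)
            assert is_sorted(ys)                    # output refinement
            assert sorted(ys) == sorted(xs + [x])   # element preservation
            return ys

        def insertion_sort(xs):
            # Needs only insert's specification, not its implementation, which is
            # what lets a synthesizer treat the two components independently.
            out = []
            for x in xs:
                out = insert(x, out)
            return out

        assert insertion_sort([3, 1, 2, 1]) == [1, 1, 2, 3]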

    Synthesizing Iterators from Abstraction Functions

    A technique for synthesizing iterators from declarative abstraction functions written in a relational logic specification language is described. The logic includes a transitive closure operator that makes it convenient for expressing reachability queries on linked data structures. Some optimizations, including tuple elimination, iterator flattening, and traversal state reduction, are used to improve performance of the generated iterators. A case study demonstrates that most of the iterators in the widely used JDK Collections classes can be replaced with code synthesized from declarative abstraction functions. These synthesized iterators perform competitively with the hand-written originals. In a user study, the synthesized iterators always passed more test cases than the hand-written ones, were almost always as efficient, usually took less programmer effort, and were the qualitative preference of all participants who provided free-form comments.
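
    As a concrete picture (a hand-written Python stand-in, not output of the synthesizer described above), consider a linked list whose abstraction function is "the set of values reachable from the head via next", exactly the kind of reachability query a transitive closure operator expresses directly. The iterator such a function determines simply walks that reachability chain.

        # Hand-written Python stand-in: a linked list, its declarative abstraction
        # function (reachability from the head via next), and the iterator it determines.

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        class LinkedList:
            def __init__(self, head=None):
                self.head = head

            def abstraction(self):
                # Declarative view: all values reachable from head, i.e. the image
                # of the transitive closure of the next field.
                seen, node = set(), self.head
                while node is not None:
                    seen.add(node.value)
                    node = node.next
                return seen

            def __iter__(self):
                # The synthesized code would play this role: traverse the
                # reachability chain, keeping only the needed traversal state.
                node = self.head
                while node is not None:
                    yield node.value
                    node = node.next

        lst = LinkedList(Node(1, Node(2, Node(3))))
        assert list(lst) == [1, 2, 3] and set(lst) == lst.abstraction()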

    Alloy*: A Higher-Order Relational Constraint Solver

    The last decade has seen a dramatic growth in the use of constraint solvers as a computational mechanism, not only for analysis and synthesis of software, but also at runtime. Solvers are available for a variety of logics but are generally restricted to first-order formulas. Some tasks, however, most notably those involving synthesis, are inherently higher order; these are typically handled by embedding a first-order solver (such as a SAT or SMT solver) in a domain-specific algorithm. Using strategies similar to those used in such algorithms, we show how to extend a first-order solver (in this case Kodkod, a model finder for relational logic used as the engine of the Alloy Analyzer) so that it can handle quantifications over higher-order structures. The resulting solver is sufficiently general that it can be applied to a range of problems; it is higher order, so that it can be applied directly, without embedding in another algorithm; and it performs well enough to be competitive with specialized tools on standard benchmarks. Although the approach is demonstrated for a particular relational logic, the principles behind it could be applied to other first-order solvers. Just as the identification of first-order solvers as reusable backends advanced the performance of specialized tools and simplified their architecture, factoring out higher-order solvers may bring similar benefits to a new class of tools.
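
    The strategy alluded to, wrapping a first-order search in a candidate/counterexample loop, can be shown on a toy exists-forall query: "some clique C such that no clique is strictly larger." The Python sketch below uses brute force where a relational model finder such as Kodkod would be invoked, and all names are invented; it illustrates the loop, not Alloy*'s implementation.

        # Toy candidate/counterexample loop for a higher-order query over a tiny
        # graph; brute force stands in for the first-order (Kodkod-style) solver.

        from itertools import combinations

        VERTICES = {0, 1, 2, 3, 4}
        EDGES = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}

        def is_clique(s):
            return all((a, b) in EDGES or (b, a) in EDGES
                       for a, b in combinations(sorted(s), 2))

        def some_clique(min_size):
            # Stand-in for the first-order solver: find *some* clique with at
            # least min_size vertices, or report that none exists.
            for k in range(min_size, len(VERTICES) + 1):
                for s in combinations(sorted(VERTICES), k):
                    if is_clique(s):
                        return set(s)
            return None

        def largest_clique():
            # Higher-order query: some clique C such that no clique is strictly larger.
            candidate = some_clique(0)
            while True:
                counterexample = some_clique(len(candidate) + 1)  # try to refute maximality
                if counterexample is None:
                    return candidate            # no larger clique exists: done
                candidate = counterexample      # refine the candidate and repeat

        assert largest_clique() == {0, 1, 2}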

    Model-based, event-driven programming paradigm for interactive web applications

    Applications are increasingly distributed and event-driven. Advances in web frameworks have made it easier to program standalone servers and their clients, but these applications remain hard to write. A model-based programming paradigm is proposed that allows a programmer to represent a distributed application as if it were a simple sequential program, with atomic actions updating a single, shared global state. A runtime environment executes the program on a collection of clients and servers, automatically handling (and hiding from the programmer) complications such as network communication (including server push), serialization, concurrency and races, persistent storage of data, and queuing and coordination of events.
    National Science Foundation (U.S.) (Grant CCF-1138967); National Science Foundation (U.S.) (Grant CCF-1012759); National Science Foundation (U.S.) (Grant CCF-0746856)
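
    The flavour of the paradigm, a sequential-looking program whose atomic actions update one shared global state while a runtime handles distribution, can be mocked in a single Python process. Everything below (the names and the queue-based loop) is invented for illustration; the runtime described above additionally handles networking, server push, serialization, persistence, and coordination across machines.

        # Single-process mock of the programming model; names are invented.
        # The programmer writes atomic actions over one shared global state and
        # raises events; a real runtime would distribute and persist all of this.

        import queue
        import threading

        STATE = {"messages": []}      # the single, shared global state
        EVENTS = queue.Queue()        # events raised by "clients"
        LOCK = threading.Lock()

        def atomic(action):
            # Each action appears to run atomically against the global state.
            def wrapper(*args):
                with LOCK:
                    return action(STATE, *args)
            return wrapper

        @atomic
        def post_message(state, user, text):
            state["messages"].append((user, text))

        def event_loop():
            # Stand-in for the runtime: apply queued events in order.
            while True:
                action, args = EVENTS.get()
                if action is None:
                    return
                action(*args)

        EVENTS.put((post_message, ("alice", "hello")))
        EVENTS.put((post_message, ("bob", "hi")))
        EVENTS.put((None, None))
        event_loop()
        assert STATE["messages"] == [("alice", "hello"), ("bob", "hi")]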

    Trident: Controlling Side Effects in Automated Program Repair

    The goal of program repair is to eliminate a bug in a given program by automatically modifying its source code. The majority of real-world software is written in imperative programming languages. Each function or expression in imperative code may have side effects, observable effects beyond returning a value. Existing program repair approaches have a limited ability to handle side effects. Previous test-driven semantic repair approaches only synthesize patches without side effects. Heuristic repair approaches generate patches with side effects only if suitable code fragments exist in the program or a database of repair patterns, or can be derived from training data. This work introduces Trident, the first test-driven program repair approach that synthesizes patches with side effects without relying on the plastic surgery hypothesis, a database of patterns, or training data. Trident relies on an interplay of several parts. First, it infers a specification for synthesizing patches with side effects using symbolic execution with a custom state-merging strategy that alleviates path explosion due to side effects. Second, it uses a novel component-based patch synthesis approach that supports lvalues, expressions that may appear on the left-hand side of an assignment. In an evaluation on open-source projects, Trident successfully repaired 6 out of 10 real bugs that require the insertion of new code with side effects, which previous techniques therefore cannot repair. Evaluated on the ManyBugs benchmark, Trident successfully repaired two new bugs that previous approaches could not. Adding patches with side effects to the search space can exacerbate test overfitting. We experimentally demonstrate that the simple heuristic of preferring patches with the fewest side effects alleviates the problem. An evaluation on a large number of smaller programs shows that this strategy reduces test overfitting caused by side effects, increasing the rate of correct patches from 33.3% to 58.3%.
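
    The anti-overfitting heuristic mentioned at the end, prefer plausible patches with the fewest side effects, is easy to picture. The Python toy below (invented names and data, not Trident's implementation) filters candidate patches by a test suite and then ranks the survivors by how many lvalue assignments they perform.

        # Toy ranking of candidate patches (not Trident's implementation): keep the
        # candidates that pass the tests, then prefer the one with the fewest side
        # effects, counted here as the number of lvalue assignments the patch adds.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Patch:
            code: str               # patch text, for display only
            lvalue_writes: int      # side effects: assignments the patch adds
            patched_fn: Callable    # the function with the candidate patch applied

        # Tests for a small absolute-value routine whose bug lets negatives pass through.
        TESTS = [(-3, 3), (2, 2), (0, 0)]

        CANDIDATES = [
            Patch("return x if x >= 0 else -x", 0,
                  lambda x, log: x if x >= 0 else -x),
            Patch("log.append(x); return x if x >= 0 else -x", 1,
                  lambda x, log: (log.append(x), x if x >= 0 else -x)[1]),
        ]

        plausible = [p for p in CANDIDATES
                     if all(p.patched_fn(inp, []) == out for inp, out in TESTS)]
        best = min(plausible, key=lambda p: p.lvalue_writes)  # fewest side effects wins
        assert best.lvalue_writes == 0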

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. A total of 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.

    Program Extrapolation with Jennisys

    No full text