
    The Usability of Pragmatic Communication in Regular Expression Synthesis

    Programming-by-example (PBE) systems aim to alleviate the burden of programming. However, user-specified examples are often ambiguous, leaving many programs that satisfy the specification. Consequently, in most prior work, users have had to provide additional examples, particularly negative ones, to further constrain the search over compatible programs. Recent work resolves this remaining ambiguity by modeling program synthesis tasks as pragmatic communication, showing promising results on a graphics domain in a rudimentary user study. We adapt pragmatic reasoning to a sub-domain of regular expressions and rigorously study its usability as a means of communication, both with and without the ability to provide negative examples. Our user study (N=30) demonstrates that, with a pragmatic synthesizer, end users can more successfully communicate a target regex using positive examples alone (95%) than with a non-pragmatic synthesizer (51%). Further, users communicate more efficiently (57% fewer examples) with the pragmatic synthesizer than with the non-pragmatic one.
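
    To make the "synthesis as pragmatic communication" idea concrete, the sketch below implements Rational Speech Acts (RSA) style reasoning over a toy regex domain. The hypothesis space, example strings, and function names are illustrative assumptions, not the paper's actual setup.

    import re

    REGEXES = [r"a+", r"a*b", r"[ab]+"]       # toy hypothesis space (assumed)
    EXAMPLES = ["a", "aa", "ab", "b", "aab"]  # candidate positive examples (assumed)

    def consistent(rx, ex):
        return re.fullmatch(rx, ex) is not None

    def literal_listener(ex):
        # L0: uniform over the regexes that accept the example
        support = [rx for rx in REGEXES if consistent(rx, ex)]
        return {rx: 1 / len(support) for rx in support}

    def pragmatic_speaker(rx):
        # S1: prefers examples that make L0 guess rx
        scores = {ex: literal_listener(ex).get(rx, 0.0)
                  for ex in EXAMPLES if consistent(rx, ex)}
        z = sum(scores.values())
        return {ex: s / z for ex, s in scores.items()}

    def pragmatic_listener(ex):
        # L1: infers which regex a pragmatic speaker meant by ex
        scores = {rx: pragmatic_speaker(rx).get(ex, 0.0) for rx in REGEXES}
        z = sum(scores.values())
        return {rx: s / z for rx, s in scores.items()}

    print(pragmatic_listener("aa"))  # mass shifts toward r"a+", unlike L0's 50/50 split

    A literal listener treats an example only as a consistency filter; the pragmatic listener additionally asks which regex a cooperative speaker would most likely have chosen that example for, which is what lets positive examples alone carry more information.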

    Wiring Circuits Is Easy as {0,1,ω}, or Is It...

    Quantitative Type-Systems support fine-grained reasoning about term usage in our programming languages. Hardware Design Languages are another style of language in which quantitative typing would be beneficial. When wiring components together, we must ensure that there are no unused ports, dangling wires, or accidental fan-ins and fan-outs. Although many wire usage checks are detectable using static analysis tools, such as Verilator, quantitative typing supports making these extrinsic checks an intrinsic aspect of the type-system. With quantitative typing of bound terms, we can provide design-time checks that all wires and ports have been used, and ensure that all wiring decisions are made explicitly, neither implicitly nor accidentally. We showcase the use of quantitative types in hardware design languages by detailing how we can retrofit quantitative types onto SystemVerilog netlists, and the impact that such a quantitative type-system has when creating designs. Netlists are gate-level descriptions of hardware that are produced as the result of synthesis, and it is from these netlists that hardware is generated (fabless or fabbed). First, we present a simple structural type-system for a featherweight version of SystemVerilog netlists that demonstrates how we can type netlists using standard structural techniques, and what it means for netlists to be type-safe but still lead to ill-wired designs. We then detail how to retrofit the language with quantitative types, making the type-system sub-structural, and show how our new type-safety result ensures that wires and ports are used exactly once. Our ideas have been proven both practically and formally by realising our work in Idris2, through which we can construct a verified language implementation that can type-check existing designs. From this work we can look to promote quantitative typing back up the synthesis chain to more comprehensive hardware description languages, and to help develop new and better hardware description languages with quantitative typing.
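
    As a rough illustration of the usage discipline such a quantitative type system makes intrinsic, the following sketch checks a toy netlist dynamically: every wire must be driven exactly once and read at least once, so dangling wires and accidental fan-ins are rejected. The netlist encoding and names are assumptions for exposition; the paper's contribution is making this check part of an Idris2-verified type system rather than an after-the-fact analysis.

    from collections import Counter

    # (gate, output_wire, input_wires) triples -- an assumed featherweight netlist
    NETLIST = [
        ("not", "w1", ["a"]),
        ("and", "w2", ["w1", "b"]),
        ("not", "w2", ["a"]),   # bug: accidental fan-in, w2 is driven twice
    ]
    OUTPUTS = {"w2"}
    INPUTS = {"a", "b"}

    def check(netlist):
        drives = Counter(out for _, out, _ in netlist)
        reads = Counter(w for _, _, ins in netlist for w in ins)
        errors = []
        for w, n in drives.items():
            if n > 1:
                errors.append(f"fan-in: {w} driven {n} times")
            if reads[w] == 0 and w not in OUTPUTS:
                errors.append(f"dangling: {w} is never read")
        for w in reads:
            if drives[w] == 0 and w not in INPUTS:
                errors.append(f"undriven: {w} read but never driven")
        return errors

    print(check(NETLIST))  # ['fan-in: w2 driven 2 times']

    In the typed setting, the same multiplicity bookkeeping happens at design time, so an ill-wired design fails to type-check instead of failing a lint pass.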

    Dag-calculus: a calculus for parallel computation

    Increasing availability of multicore systems has led to greater focus on the design and implementation of languages for writing parallel programs. Such languages support various abstractions for parallelism, such as fork-join, async-finish, and futures. While these abstractions may seem similar, they lead to different semantics, language design and implementation decisions, and can significantly impact the performance of end-user applications. In this paper, we consider the question of whether it is possible to unify the various paradigms of parallel computing. To this end, we propose a calculus, called dag calculus, that can encode fork-join, async-finish, futures, and possibly others. We describe dag calculus and its semantics, and establish translations from the aforementioned paradigms into it. These translations show that dag calculus is sufficiently powerful to encode programs written in the prevailing paradigms of parallelism. We present concurrent algorithms and data structures for realizing dag calculus on multicore hardware and prove that the proposed techniques are consistent with the semantics. Finally, we present an implementation of the calculus and evaluate it empirically by comparing its performance to highly optimized code from prior work. The results show that the calculus is expressive and that it competes well with, and sometimes outperforms, the state of the art.
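
    The core idea is small enough to sketch: a computation is a graph of tasks whose edges are dependencies, and higher-level constructs reduce to creating vertices and edges. The minimal scheduler below is a sequential, assumed stand-in for the paper's concurrent multicore algorithms, shown here only to make the fork-join encoding concrete.

    class Dag:
        def __init__(self):
            self.deps = {}      # task id -> set of tasks it waits on
            self.body = {}      # task id -> thunk to run

        def new_task(self, thunk, waits_on=()):
            t = len(self.body)
            self.body[t] = thunk
            self.deps[t] = set(waits_on)
            return t

        def run(self):
            done = set()
            while len(done) < len(self.body):
                ready = [t for t in self.body
                         if t not in done and self.deps[t] <= done]
                for t in ready:  # a parallel runtime would run these concurrently
                    self.body[t]()
                    done.add(t)

    # fork-join: two branches, then a join node that depends on both
    dag = Dag()
    left = dag.new_task(lambda: print("left branch"))
    right = dag.new_task(lambda: print("right branch"))
    dag.new_task(lambda: print("join"), waits_on=[left, right])
    dag.run()

    Async-finish and futures encode similarly in this picture: spawning adds a vertex, while a finish block or a future read adds an incoming edge to the waiting task.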

    Modular Collaborative Program Analysis

    With our world increasingly relying on computers, it is important to ensure the quality, correctness, security, and performance of software systems. Static analysis, which computes properties of computer programs without executing them, has been an important method to achieve this for decades. However, static analysis faces major challenges in increasingly complex programming languages and software systems, and increasing and sometimes conflicting demands for soundness, precision, and scalability. To cope with these challenges, it is necessary to build static analyses for complex problems from small, independent, yet collaborating modules that can be developed in isolation and combined in a plug-and-play manner. So far, no generic architecture exists to implement and combine a broad range of dissimilar static analyses. The goal of this thesis is thus to design such an architecture and implement it as a generic framework for developing modular, collaborative static analyses. We use several diverse case-study analyses from which we systematically derive requirements to guide the design of the framework. Based on this, we propose a blackboard-architecture style of collaboration among analyses, which we implement in the OPAL framework. We also develop a formal model of our architecture's core concepts and show how it enables freely composing analyses while retaining their soundness guarantees. We showcase and evaluate our architecture using the case-study analyses, each of which shows how important and complex problems of static analysis can be addressed in a modular, collaborative implementation style. In particular, we show how a modular architecture for the construction of call graphs ensures consistent soundness across different algorithms. We show how modular analyses for different aspects of immutability mutually benefit each other. Finally, we show how the analysis of method purity can benefit from collaborating with other complex analyses and from exchanging analysis implementations that exhibit different characteristics. Each of these case studies improves over the respective state of the art in terms of soundness, precision, and/or scalability, and shows how our architecture enables experimenting with and fine-tuning the trade-offs between these qualities.
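
    A blackboard-style fixed-point loop is easy to sketch, and makes the collaboration concrete: independent analyses post partial results to a shared store and are re-run until nothing changes, so later analyses can pick up properties contributed by earlier ones. All names below are illustrative assumptions; this is not the OPAL framework's actual API.

    class Blackboard:
        def __init__(self, analyses):
            self.store = {}        # (entity, property_kind) -> value
            self.analyses = analyses

        def solve(self):
            changed = True
            while changed:         # naive fixed-point iteration over all analyses
                changed = False
                for analysis in self.analyses:
                    for key, value in analysis(self.store):
                        if self.store.get(key) != value:
                            self.store[key] = value
                            changed = True
            return self.store

    # Two collaborating analyses: purity consults the immutability result.
    def immutability(store):
        yield ("FieldX", "immutability"), "immutable"

    def purity(store):
        if store.get(("FieldX", "immutability")) == "immutable":
            yield ("m", "purity"), "pure"  # m only reads an immutable field

    print(Blackboard([purity, immutability]).solve())

    Note that purity reports a result only once immutability has posted its contribution, regardless of registration order; a production solver would replace this naive loop with explicit dependency tracking and incremental re-computation.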

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, and security and software evolution.

    Type Checking and Whole-program Inference for Value Range Analysis

    Value range analysis is important in many software domains for ensuring the safety and reliability of a program, and is a crucial facet of software development. The resulting information can be used in optimizations such as redundancy elimination, dead code elimination, and instruction selection, and to improve the safety of programs. This thesis explores the use of static analysis with type systems for value range analysis. Properly formalized type systems can provide mathematical guarantees for the correctness of a program at compile time. This thesis presents (1) a novel type system, the Narrowing and Widening Checker, (2) a whole-program type inference, the Value Inference for Integral Values, (3) a units-of-measurement type system, PUnits, and (4) an improved algorithm for statically analyzing the data flow of programs. The Narrowing and Widening Checker is a type system that prevents loss of information during narrowing conversions of primitive integral data types and automatically distinguishes the signedness of variables to eliminate the ambiguity of widening conversions from types byte and short to type int. This additional type system ensures soundness in programs by restricting operations that violate the defined type rules. While type checking verifies whether the given type declarations are consistent with their use, type inference automatically finds the properties at each location in the program and reduces the annotation burden on the developer. The Value Inference for Integral Values is a constraint-based whole-program type inference for integral analysis. It supports the relevant type qualifiers used by the Narrowing and Widening type system and reduces the annotation burden when using the Narrowing and Widening Checker. Value Inference can infer types in two modes: (1) ensure that a valid integral typing exists, and (2) annotate a program with precise and relevant types. Annotation mode allows human inspection and is essential, since having a valid typing does not guarantee that the inferred specification expresses design intent. PUnits is a type system for expressive units-of-measurement types, together with a precise, whole-program inference approach for these types. This thesis presents a new type qualifier for this type system to handle cases where the method return and method parameter types are context-sensitive to the method receiver type. This thesis also discusses the related work and the benefits and trade-offs of using PUnits versus existing Java unit libraries, and demonstrates how PUnits can enable Java developers to reap the performance benefits of using primitive types instead of abstract data types for unit-wise consistent scientific computations in real-world projects. The Dataflow Framework is a data-flow analysis for Java used to evaluate the values at each program location. Data-flow analysis is a terminating but imprecise abstract interpretation of a program, and the Narrowing and Widening Checker issues many false positives due to this imprecision. Three improvements to the framework's algorithm are presented to increase the precision of the analysis: (1) implementing a dead-branch analysis, (2) proposing a path-sensitive analysis, and (3) discussing how loop precision can be improved.
    The Narrowing and Widening Checker is evaluated on 22 of the Apache Commons projects with a total of 224k lines of code. Out of these projects, 18 failed with 717 errors. The Value Inference for Integral Values is evaluated on these 18 Apache Commons projects. Of these, 5 projects evaluate to SAT, and the Value Inference inferred 10639 annotations. The 13 projects that evaluate to UNSAT are manually examined, and all of them contain a real narrowing error. Manual annotations are added to 5 of these projects to resolve the reported errors. In these 5 projects, the Narrowing and Widening Checker detects 69 real errors and 26 false positives, a false-positive rate of 37.7%. The type systems perform adequately, with a compilation-time overhead of 5.188x for the Narrowing and Widening Checker and 24.43x for the Value Inference. These projects are then evaluated with the addition of the dead-branch analysis to the framework; the additional evaluation time is negligible. This performance is suitable for use in a real-world software development environment. All the presented type systems build on techniques from type qualifier systems and constraint-based type inference. Our implementation and evaluation of these type systems show that these techniques are both necessary and effective in ensuring the correctness of real-world programs.
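
    The interval reasoning behind such a checker can be illustrated in a few lines: a narrowing conversion is accepted only when the inferred range of the operand fits the target type, and ranges from different control-flow paths are merged with a join. The ranges and helper names below are assumptions for exposition (written in Python rather than the checker's Java setting), not the checker's implementation.

    RANGES = {"byte": (-128, 127), "short": (-32768, 32767), "int": (-2**31, 2**31 - 1)}

    def join(a, b):
        # least upper bound of two intervals (used at control-flow merges)
        return (min(a[0], b[0]), max(a[1], b[1]))

    def safe_narrow(value_range, target):
        lo, hi = RANGES[target]
        return lo <= value_range[0] and value_range[1] <= hi

    x = (0, 100)                          # e.g. a range inferred from x = input % 101
    y = join(x, (-5, -1))                 # x merged with a negative branch
    print(safe_narrow(x, "byte"))         # True: (0, 100) fits in byte
    print(safe_narrow(y, "byte"))         # True: (-5, 100) still fits
    print(safe_narrow((0, 300), "byte"))  # False: narrowing would lose information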

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the COVID-19 pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.