1,404 research outputs found

    Soft Contract Verification

    Full text link
    Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts impose significant overhead and delay the discovery of faulty components until run-time. To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values, including unknown behavioral values, and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove that the symbolic execution soundly approximates the dynamic semantics and that verified programs can't be blamed. The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, all of which are idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools---including type systems, flow analyzers, and model checkers---on their own benchmarks. Comment: ICFP '14, September 1-6, 2014, Gothenburg, Sweden.
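    As a rough illustration only (not the paper's system or its verifier), the sketch below shows what run-time monitoring of higher-order behavioral contracts with blame looks like; the names flat, arrow, and monitor are hypothetical Python stand-ins for the contract combinators the abstract assumes.

```python
# Minimal sketch of run-time behavioral contracts with blame, loosely in the
# style the paper verifies statically. Names (flat, arrow, monitor) are
# illustrative, not the paper's API.

class ContractError(Exception):
    """Raised when a contract is violated; the message names the blamed party."""

def flat(pred, name):
    """A flat contract: a first-order predicate checked on a value."""
    def check(value, blame):
        if not pred(value):
            raise ContractError(f"{blame} broke contract {name!r} with {value!r}")
        return value
    return check

def arrow(dom, rng):
    """A higher-order (function) contract: wrap the function so its argument is
    checked against `dom` (blaming the caller) and its result against `rng`
    (blaming the function itself)."""
    def check(fn, blame):
        def wrapped(x):
            x = dom(x, blame + " (caller)")   # negative position: blame flips
            return rng(fn(x), blame)          # positive position
        return wrapped
    return check

def monitor(value, contract, blame):
    return contract(value, blame)

# Example: sqrt must map non-negative numbers to non-negative numbers.
non_neg = flat(lambda v: isinstance(v, (int, float)) and v >= 0, "non-negative")
safe_sqrt = monitor(lambda x: x ** 0.5, arrow(non_neg, non_neg), "sqrt module")

print(safe_sqrt(9.0))   # fine
# safe_sqrt(-1.0)       # would raise ContractError blaming the caller
```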

    Course Portfolio for Math 309: Introduction to Mathematical Proofs - A Peer Review of Teaching Project Benchmark Portfolio

    Get PDF
    In this course portfolio, I examine dimensions of an active learning curriculum developed for a sophomore-level undergraduate course serving as an introduction to mathematical proofs. Prominent course goals include developing effective practices for communicating mathematics using formal language; learning to read, comprehend, and evaluate the validity of mathematical proofs; and practicing writing rigorous and concise mathematical proofs. I explore a new piece of collaborative annotation software called Perusall to help students read and understand mathematics together, and I analyze mastery-level grading scales across exams throughout the semester. The portfolio also contains information about the structure and syllabus of the course, the exams given, and samples of student work and collaborative annotations.

    Generating Extended Resolution Proofs with a BDD-Based SAT Solver

    Full text link
    In 2006, Biere, Jussila, and Sinz made the key observation that the underlying logic behind algorithms for constructing Reduced, Ordered Binary Decision Diagrams (BDDs) can be encoded as steps in a proof in the extended resolution logical framework. Through this, a BDD-based Boolean satisfiability (SAT) solver can generate a checkable proof of unsatisfiability for a set of clauses. Such a proof indicates that the formula is truly unsatisfiable without requiring the user to trust the BDD package or the SAT solver built on top of it. We extend their work to enable arbitrary existential quantification of the formula variables, a critical capability for BDD-based SAT solvers. We demonstrate the utility of this approach by applying a prototype solver to several problems that are very challenging for search-based SAT solvers, obtaining polynomially sized proofs on benchmarks for parity formulas, as well as the Urquhart, mutilated chessboard, and pigeonhole problems. Comment: Extended version of a paper published at TACAS 2021.
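    The key observation can be illustrated with a toy sketch (hypothetical, not the authors' solver): every ROBDD node can be named by a fresh "extension variable" u defined by u <-> (x ? hi : lo), and emitting those definitions alongside BDD construction is what yields a checkable extended-resolution proof.

```python
# Toy illustration of the idea behind proof-generating BDD construction:
# every ROBDD node gets a fresh extension variable u defined by
#   u <-> (x ? hi : lo),
# and those definitions are the extension steps from which a checkable proof
# can be assembled. This is a sketch, not the authors' solver.

ORDER = ["a", "b", "c"]                 # fixed variable order
unique = {}                             # (var, hi, lo) -> node id (hash-consing)
defs = []                               # (node_id, var, hi, lo) extension definitions
TRUE, FALSE = 1, 0
next_id = 2

def mk(var, hi, lo):
    """Hash-consed node constructor (the 'reduce' in ROBDD)."""
    global next_id
    if hi == lo:                        # redundant test: skip the node
        return hi
    key = (var, hi, lo)
    if key not in unique:
        unique[key] = next_id
        defs.append((next_id, var, hi, lo))
        next_id += 1
    return unique[key]

def build(formula, i=0):
    """Shannon expansion of a Python-function formula over ORDER."""
    if i == len(ORDER):
        return TRUE if formula({}) else FALSE
    var = ORDER[i]
    hi = build(lambda env, f=formula, v=var: f({**env, v: True}), i + 1)
    lo = build(lambda env, f=formula, v=var: f({**env, v: False}), i + 1)
    return mk(var, hi, lo)

# Example formula: (a XOR b) AND c
root = build(lambda env: (env["a"] != env["b"]) and env["c"])

for node, var, hi, lo in defs:
    # Each definition expands into a handful of Tseitin-style clauses.
    print(f"u{node} <-> ({var} ? u{hi} : u{lo})")
print("root:", f"u{root}")
```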

    Program Improvement by Automatic Redistribution of Intermediate Results

    Get PDF
    This paper was originally a Ph.D. thesis proposal. The problem of automatically improving the performance of computer programs has many facets. A common source of program inefficiency is the use of abstraction techniques in program design: general tools used in a specific context often do unnecessary or redundant work. Examples include needless copy operations, redundant subexpressions, multiple traversals of the same data structure, and maintenance of overly complex data invariants. I propose to focus on one broadly applicable way of improving a program's performance: redistributing intermediate results so that computation can be avoided. I hope to demonstrate that this is a basic principle of optimization from which many of the current approaches to optimization may be derived. I propose to implement a system that automatically finds and exploits opportunities for redistribution in a given program. In addition to the program source, the system will accept an explanation of the correctness and purpose of the code. Beyond the specific task of program improvement, I anticipate that the research will contribute to our understanding of the design and explanatory structure of programs. Major results will include (1) the definition and manipulation of a representation of the correctness and purpose of a program's implementation, and (2) the definition, construction, and use of a representation of a program's dynamic behavior. MIT Artificial Intelligence Laboratory.
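    A minimal hypothetical example of the kind of improvement the proposal targets: two consumers of the same intermediate results (a count, a sum, a sum of squares) can share one traversal instead of repeating work over the same data structure.

```python
# Hypothetical before/after illustrating "redistribution of intermediate
# results": the naive version traverses the data repeatedly and recomputes
# overlapping work; the improved version computes the intermediates once and
# redistributes them to both consumers.

def mean_and_variance_naive(xs):
    # Two traversals with redundant subexpressions (len(xs) twice, sum twice
    # via the mean inside the variance).
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var

def mean_and_variance_redistributed(xs):
    # Single traversal; the intermediate results (count, sum, sum of squares)
    # are computed once and reused by both outputs.
    n = s = sq = 0
    for x in xs:
        n += 1
        s += x
        sq += x * x
    mean = s / n
    return mean, sq / n - mean * mean

data = [1.0, 2.0, 4.0, 8.0]
assert mean_and_variance_naive(data)[0] == mean_and_variance_redistributed(data)[0]
```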

    Detection of periodicity in functional time series

    Full text link
    We derive several tests for the presence of a periodic component in a time series of functions. We consider both the traditional setting, in which the periodic functional signal is contaminated by functional white noise, and a more general setting in which the contaminating process is weakly dependent. Several forms of the periodic component are considered. Our tests are motivated by the likelihood principle and fall into two broad categories, which we term multivariate and fully functional. Overall, for the functional series that motivate this research, the fully functional tests exhibit a superior balance of size and power. Asymptotic null distributions of all tests are derived and their consistency is established. Their finite-sample performance is examined and compared through numerical studies and an application to pollution data.
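    As a toy illustration only (not one of the paper's tests), one can probe for a periodic component of known period d by measuring the norm of the functional discrete Fourier coefficient at frequency 2*pi/d and calibrating it by permutation; the sketch below simulates curves with a weekly periodic mean plus functional white noise.

```python
# Toy permutation check for a periodic mean component of known period d in a
# functional time series. Illustrative only; not the likelihood-based tests
# developed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, grid, d = 200, 50, 7                    # 200 curves on a 50-point grid, weekly period
t = np.arange(n)
u = np.linspace(0.0, 1.0, grid)

periodic = 0.5 * np.cos(2 * np.pi * t / d)[:, None] * np.sin(np.pi * u)[None, :]
X = periodic + rng.normal(scale=1.0, size=(n, grid))   # signal + functional white noise

def stat(curves):
    # squared (discretized) L2 norm of the functional Fourier coefficient at 2*pi/d
    phases = np.exp(-2j * np.pi * np.arange(len(curves)) / d)[:, None]
    coef = (curves * phases).mean(axis=0)
    return float(np.sum(np.abs(coef) ** 2)) / grid

observed = stat(X)
null = [stat(rng.permutation(X)) for _ in range(500)]   # permuting time destroys periodicity
p_value = float(np.mean([s >= observed for s in null]))
print(f"test statistic {observed:.4f}, permutation p-value {p_value:.3f}")
```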

    Towards the automation of mathematical reasoning

    Get PDF

    What Ulysses Requires

    Get PDF

    Algorithms for nonnegative matrix factorization with the beta-divergence

    Get PDF
    This paper describes algorithms for nonnegative matrix factorization (NMF) with the beta-divergence (beta-NMF). The beta-divergence is a family of cost functions parametrized by a single shape parameter beta that takes the Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence as special cases (beta = 2, 1, 0, respectively). The proposed algorithms are based on a surrogate auxiliary function (a local majorization of the criterion function). We first describe a majorization-minimization (MM) algorithm that leads to multiplicative updates, which differ from standard heuristic multiplicative updates by a beta-dependent power exponent. The monotonicity of the heuristic algorithm can, however, be proven for beta in (0,1) using the proposed auxiliary function. Then we introduce the concept of the majorization-equalization (ME) algorithm, which produces updates that move along constant level sets of the auxiliary function and lead to larger steps than MM. Simulations on synthetic and real data illustrate the faster convergence of the ME approach. The paper also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (i.e., when a penalty function of the factors is added to the criterion function) and convex-NMF (when the dictionary is assumed to belong to a known subspace). Comment: To appear in Neural Computation.
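    A minimal sketch of multiplicative beta-NMF updates with a beta-dependent power exponent gamma(beta), as the abstract describes (the heuristic updates correspond to gamma = 1); this is an illustrative implementation of the standard update rules under those assumptions, not the authors' reference code.

```python
# Sketch of multiplicative beta-NMF updates V ~= W @ H with a beta-dependent
# exponent gamma(beta); gamma = 1 recovers the heuristic multiplicative updates.
# Minimal illustration, not the paper's reference implementation.
import numpy as np

def gamma(beta):
    if beta < 1:
        return 1.0 / (2.0 - beta)
    if beta > 2:
        return 1.0 / (beta - 1.0)
    return 1.0

def beta_nmf(V, rank, beta=1.0, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    g = gamma(beta)
    for _ in range(iters):
        WH = W @ H + eps
        H *= ((W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1))) ** g
        WH = W @ H + eps
        W *= (((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T)) ** g
    return W, H

V = np.random.default_rng(1).random((20, 30))
W, H = beta_nmf(V, rank=4, beta=0.5)   # beta in (0,1), where gamma differs from 1
print("reconstruction error:", float(np.linalg.norm(V - W @ H)))
```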