
    Finding The Lazy Programmer's Bugs

    Traditionally, developers and testers created huge numbers of explicit tests, enumerating interesting cases, perhaps biased by what they believe to be the current boundary conditions of the function being tested. Or at least, they were supposed to. A major step forward was the development of property testing. Property testing requires the user to write a few functional properties that are used to generate tests, and requires an external library or tool to create test data for the tests. As such, many thousands of tests can be created for a single property. For the purely functional programming language Haskell there are several such libraries, for example QuickCheck [CH00], SmallCheck and Lazy SmallCheck [RNL08]. Unfortunately, property testing still requires the user to write explicit tests. Fortunately, we note there are already many implicit tests present in programs: developers may throw assertion errors, or the compiler may silently insert runtime exceptions for incomplete pattern matches. We attempt to automate the testing process using these implicit tests. Our contributions are in four main areas: (1) we have developed algorithms to automatically infer appropriate constructors and functions needed to generate test data, without requiring additional programmer work or annotations; (2) to combine the constructors and functions into test expressions, we take advantage of Haskell's lazy evaluation semantics by applying the techniques of needed narrowing and lazy instantiation to guide generation; (3) we keep the type of test data at its most general, to avoid committing too early to monomorphic types that cause needless wasted tests; (4) we have developed novel ways of creating Haskell case expressions to inspect elements inside returned data structures, in order to discover exceptions that may be hidden by laziness, and to make our test data generation algorithm more expressive. To validate our claims, we have implemented these techniques in Irulan, a fully automatic tool for generating systematic black-box unit tests for Haskell library code. We have designed Irulan to generate high-coverage test suites and detect common programming errors in the process.
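
    A minimal sketch of the two kinds of test the abstract contrasts, assuming only the standard QuickCheck library (illustrative Haskell, not Irulan's actual algorithm): prop_second is an explicit property, while the incomplete pattern match in secondElem is an implicit test that GHC turns into a runtime exception.

        import Test.QuickCheck

        -- An "implicit test": GHC silently inserts a runtime pattern-match
        -- exception for the list shapes this definition does not cover.
        secondElem :: [a] -> a
        secondElem (_:y:_) = y

        -- An explicit property: for any head x and tail ys, the second
        -- element of x:ys should be the head of ys.
        prop_second :: Int -> [Int] -> Bool
        prop_second x ys = secondElem (x : ys) == head ys

        main :: IO ()
        main = quickCheck prop_second
        -- quickCheck reports a failing case such as ys = [], where the
        -- uncovered pattern raises the hidden exception.

    Generated data hits the uncovered case ys = [] almost immediately; the point of a tool like Irulan is to reach such exceptions fully automatically, without even the hand-written property.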

    Identifying Implementation Bugs in Machine Learning based Image Classifiers using Metamorphic Testing

    We have recently witnessed tremendous success of Machine Learning (ML) in practical applications: computer vision, speech recognition and language translation have all reached near-human-level performance. We expect that, in the near future, most business applications will include some form of ML. However, testing such applications is extremely challenging, and would be very expensive if we followed today's methodologies. In this work, we present an articulation of the challenges in testing ML-based applications. We then present our solution approach, based on the concept of Metamorphic Testing, which aims to identify implementation bugs in ML-based image classifiers. We have developed metamorphic relations for an application based on Support Vector Machines and for a Deep Learning based application. Empirical validation showed that our approach was able to catch 71% of the implementation bugs in the ML applications.
    Comment: Published at the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2018).
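
    As a hedged sketch of the metamorphic idea (the paper's actual relations target specific SVM and deep-learning implementations; the toy 1-nearest-neighbour classifier below is our own illustration): permuting the feature order consistently in the training data and the query must not change the prediction, because Euclidean distance is insensitive to feature order, so a source test and its follow-up test can be compared without knowing the expected label.

        import Data.List (minimumBy)
        import Data.Ord (comparing)

        type Point = [Double]

        -- Squared Euclidean distance between two feature vectors.
        dist :: Point -> Point -> Double
        dist xs ys = sum [(x - y) ^ (2 :: Int) | (x, y) <- zip xs ys]

        -- A toy 1-nearest-neighbour classifier: the label of the closest
        -- training point wins.
        classify :: [(Point, String)] -> Point -> String
        classify train p = snd (minimumBy (comparing (dist p . fst)) train)

        -- Reorder the features of a vector by a fixed index permutation.
        permute :: [Int] -> Point -> Point
        permute idxs xs = map (xs !!) idxs

        main :: IO ()
        main = do
          let train  = [([1, 0, 0], "a"), ([0, 0, 1], "b")]
              query  = [0.9, 0.1, 0.0]
              perm   = [2, 0, 1]  -- one permutation of the three features
              source = classify train query
              follow = classify [(permute perm p, l) | (p, l) <- train]
                                (permute perm query)
          -- The metamorphic relation: source and follow-up must agree.
          print (source == follow)

    A violation of such a relation points to an implementation bug without requiring a labelled test oracle.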

    The lambda-mu-T-calculus

    Calculi with control operators have been studied as extensions of simple type theory. Real programming languages contain datatypes, so to really understand control operators, one should also include these in the calculus. As a first step in that direction, we introduce lambda-mu-T, a combination of Parigot's lambda-mu-calculus and Gödel's T, to extend a calculus with control operators with a datatype of natural numbers with a primitive recursor. We consider the problem of confluence on raw terms, and that of strong normalization for the well-typed terms. Observing some problems with extending the proofs of Baba et al. and Parigot's original confluence proof, we provide new, and improved, proofs of confluence (by complete developments) and strong normalization (by reducibility and a postponement argument) for our system. We conclude with some remarks about extensions, choices, and prospects for an improved presentation.
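
    For readers unfamiliar with Gödel's T, the datatype being added is natural numbers together with a primitive recursor; a minimal transcription into Haskell (our illustration, not the typed lambda-mu-T syntax itself) looks like this:

        data Nat = Zero | Suc Nat

        -- Gödel's T primitive recursor: recNat z s n folds over n, using z
        -- for Zero and s to combine the predecessor with the recursive result.
        recNat :: a -> (Nat -> a -> a) -> Nat -> a
        recNat z _ Zero    = z
        recNat z s (Suc n) = s n (recNat z s n)

        -- Addition defined by primitive recursion on the first argument.
        add :: Nat -> Nat -> Nat
        add m n = recNat n (\_ r -> Suc r) m

        toInt :: Nat -> Int
        toInt = recNat 0 (\_ r -> r + 1)

        main :: IO ()
        main = print (toInt (add (Suc (Suc Zero)) (Suc Zero)))  -- prints 3

    The paper's contribution is what happens when this recursor coexists with Parigot's control operators, where naive combinations of the standard confluence and normalization proofs break down.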

    Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction

    Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times. Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction but is enacted at run time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of the semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s. This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation. The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps' indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use the direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories. The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen's lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church-Rosser and Plotkin's Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop's Regular Combinatory Reduction Systems.
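
    The wrapper-around-fexpr arrangement can be sketched in a toy interpreter (a hedged illustration in Haskell, not Kernel's actual semantics; all names below are invented): an operative receives its operands unevaluated, together with the dynamic environment, while an applicative is merely a wrapper that evaluates the operands first and then delegates.

        import qualified Data.Map as M

        data Val
          = Num Int
          | Sym String
          | Lst [Val]                        -- lists double as call syntax
          | Operative (Env -> [Val] -> Val)  -- a fexpr: raw operands + env
          | Applicative Val                  -- wrapper: evaluate, then delegate

        type Env = M.Map String Val

        eval :: Env -> Val -> Val
        eval env (Sym s)       = M.findWithDefault (error ("unbound " ++ s)) s env
        eval env (Lst (f:ops)) = combine env (eval env f) ops
        eval _   v             = v

        combine :: Env -> Val -> [Val] -> Val
        combine env (Operative f)   ops = f env ops            -- operands stay raw
        combine env (Applicative f) ops = combine env f (map (eval env) ops)
        combine _   _               _   = error "not a combiner"

        env0 :: Env
        env0 = M.fromList
          [ ("+",   Applicative (Operative addImpl))
          , ("$if", Operative ifImpl)
          ]

        -- The underlying fexpr of +: by the time it runs, the Applicative
        -- wrapper has already evaluated the operands to numbers.
        addImpl :: Env -> [Val] -> Val
        addImpl _ args = Num (sum [n | Num n <- args])

        -- $if evaluates only the taken branch; Num 0 counts as false.
        ifImpl :: Env -> [Val] -> Val
        ifImpl env [c, t, e] = case eval env c of
          Num 0 -> eval env e
          _     -> eval env t
        ifImpl _ _ = error "$if expects three operands"

        render :: Val -> String
        render (Num n)  = show n
        render (Sym s)  = s
        render (Lst vs) = "(" ++ unwords (map render vs) ++ ")"
        render _        = "<combiner>"

        -- Because $if is an operative, the unbound symbol in the untaken
        -- branch is never evaluated, and this prints 5.
        main :: IO ()
        main = putStrLn (render (eval env0
                 (Lst [Sym "$if", Num 1, Lst [Sym "+", Num 2, Num 3], Sym "nope"])))

    Note how conditional behaviour falls out of ordinary run-time application, with no quotation or macro phase: this is the phase-barrier elimination the dissertation argues for.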

    The Free Will Theorem

    On the basis of three physical axioms, we prove that if the choice of a particular type of spin 1 experiment is not a function of the information accessible to the experimenters, then its outcome is equally not a function of the information accessible to the particles. We show that this result is robust, and deduce that neither hidden variable theories nor mechanisms of the GRW type for wave function collapse can be made relativistic. We also establish the consistency of our axioms and discuss the philosophical implications.
    Comment: 31 pages, 6 figures.
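
    For orientation, the first of the three axioms (SPIN) rests on the "101 property" of spin 1 particles; as a reconstruction of the standard formulation (stated here from memory, not quoted from the paper), measuring the squared spin components along any three mutually orthogonal directions x, y, z gives

        \[
          S_x^2,\; S_y^2,\; S_z^2 \in \{0, 1\},
          \qquad
          S_x^2 + S_y^2 + S_z^2 = 2,
        \]

    so the three outcomes are always 1, 0, 1 in some order, whichever orthogonal frame the experimenters freely choose.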