Timing Analysis of Event-Driven Programs with Directed Testing
Accurately estimating the worst-case execution time (WCET) of real-time event-driven software is crucial. For example, NASA's study of unintended acceleration in Toyota vehicles highlights poor support in timing analysis for event-driven code, which could put human life in danger. The WCET occurs on the longest possible execution path in a program. Static analysis produces safe but overestimated measurements. Dynamic analysis, on the other hand, measures actual execution times of code under a test suite. Its performance depends on branch coverage, which itself is sensitive to the scheduling of events; thus dynamic analysis often underestimates the WCET. We present a new dynamic approach called event-driven directed testing. Our approach combines aspects of prior random-testing techniques devised for event-driven code with the directed testing method applied to sequential code. The aim is to come up with complex event sequences and choices of parameters for individual events that might result in execution times closer to the true WCET. Our experiments show that, compared to random testing, genetic algorithms, and traditional directed testing, we achieve significantly better branch coverage and longer WCET estimates.
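To make the static-versus-dynamic contrast concrete, here is a minimal sketch of the kind of dynamic baseline the paper improves upon: randomly generated event sequences are executed and the longest observed running time is reported as a WCET estimate, which can only under-approximate the true WCET. The event handlers, event names, and parameters are hypothetical toys, not the systems or the directed-testing algorithm described in the abstract.

```python
import random
import time

# Hypothetical event handlers with schedule- and parameter-dependent branches
# (illustrative toys, not real-time automotive code).
def on_button(state, pressed):
    if pressed and state["armed"]:      # long path only if a prior event armed us
        for _ in range(1000):
            state["work"] += 1
    state["armed"] = not pressed

def on_sensor(state, value):
    if isinstance(value, int) and value > 50:   # short, parameter-dependent path
        state["work"] += value

HANDLERS = {"button": on_button, "sensor": on_sensor}

def run_sequence(events):
    """Execute one event sequence and return its wall-clock running time."""
    state = {"armed": True, "work": 0}
    start = time.perf_counter()
    for name, arg in events:
        HANDLERS[name](state, arg)
    return time.perf_counter() - start

def estimate_wcet(trials=1000, length=5):
    """Dynamic WCET estimate: the longest time observed over random event
    sequences. Coverage-dependent, so it under-approximates the true WCET --
    the gap that directed generation of event sequences aims to close."""
    worst = 0.0
    for _ in range(trials):
        events = [(random.choice(["button", "sensor"]),
                   random.choice([True, False, 10, 80]))
                  for _ in range(length)]
        worst = max(worst, run_sequence(events))
    return worst

if __name__ == "__main__":
    print(f"observed WCET estimate: {estimate_wcet():.6f} s")
```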
A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)
This paper introduces the MCML approach for empirically studying the learnability of relational properties that can be expressed in the well-known software design language Alloy. A key novelty of MCML is quantification of the performance of and semantic differences among trained machine learning (ML) models, specifically decision trees, with respect to entire (bounded) input spaces, and not just for given training and test datasets (as is the common practice). MCML reduces the quantification problems to the classic complexity theory problem of model counting, and employs state-of-the-art model counters. The results show that relatively simple ML models can achieve surprisingly high performance (accuracy and F1-score) when evaluated in the common setting of using training and test datasets - even when the training dataset is much smaller than the test dataset - indicating the seeming simplicity of learning relational properties. However, MCML metrics based on model counting show that the performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties, and the usefulness of model counting in quantifying the true performance.
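To illustrate the kind of metric MCML computes, here is a small sketch that evaluates a stand-in classifier for a relational property (reflexivity of a binary relation over three elements) against the entire bounded input space. Brute-force enumeration stands in for the symbolic model counting that MCML actually uses; the property, the "learned" rule, and the bound are hypothetical choices for illustration.

```python
from itertools import product

N = 3  # bound: binary relations over {0, 1, 2}, encoded as N*N bits

def ground_truth(bits):
    """Relational property: the relation is reflexive (every self-loop present)."""
    return all(bits[i * N + i] for i in range(N))

def learned_model(bits):
    """Stand-in for a trained classifier: a simple decision-tree-like rule
    that only checks two of the three self-loops (hypothetical)."""
    return bits[0] == 1 and bits[4] == 1

# Enumerate the entire bounded input space; MCML obtains these four counts
# symbolically from a model counter instead of by brute force.
tp = fp = tn = fn = 0
for bits in product((0, 1), repeat=N * N):
    truth, pred = ground_truth(bits), learned_model(bits)
    if pred and truth:        tp += 1
    elif pred and not truth:  fp += 1
    elif not pred and truth:  fn += 1
    else:                     tn += 1

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```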
Analogy-Making as a Core Primitive in the Software Engineering Toolbox
An analogy is an identification of structural similarities and correspondences between two objects. Computational models of analogy making have been studied extensively in the field of cognitive science to better understand high-level human cognition. For instance, Melanie Mitchell and Douglas Hofstadter sought to better understand high-level perception by developing the Copycat algorithm for completing analogies between letter sequences. In this paper, we argue that analogy making should be seen as a core primitive in software engineering. We motivate this argument by showing how complex software engineering problems such as program understanding and source-code transformation learning can be reduced to an instance of the analogy-making problem. We demonstrate this idea using Sifter, a new analogy-making algorithm suitable for software engineering applications that adapts and extends ideas from Copycat. In particular, Sifter reduces analogy-making to searching for a sequence of update rule applications. Sifter uses a novel representation for mathematical structures capable of effectively representing the wide variety of information embedded in software. We conclude by listing major areas of future work for Sifter and analogy-making in software engineering.
Comment: Conference paper at SPLASH 'Onward!' 2020. Code is available at https://github.com/95616ARG/sifte
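As a rough illustration of reducing analogy-making to a search over update-rule applications, the sketch below completes a Copycat-style letter-sequence analogy by searching for a rule sequence that maps the source pair and replaying it on the new input. The string representation and the three rules are hypothetical stand-ins; Sifter's representation and rule set are far richer.

```python
from collections import deque

# Toy update rules over strings (hypothetical; Sifter's rules operate on a
# richer representation of mathematical structures).
RULES = {
    "increment_last": lambda s: s[:-1] + chr(ord(s[-1]) + 1) if s else s,
    "reverse":        lambda s: s[::-1],
    "drop_first":     lambda s: s[1:],
}

def find_rule_sequence(src, dst, max_depth=3):
    """Breadth-first search for a sequence of update-rule applications
    that transforms src into dst."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        cur, path = queue.popleft()
        if cur == dst:
            return path
        if len(path) == max_depth:
            continue
        for name, rule in RULES.items():
            nxt = rule(cur)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

def complete_analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by replaying the discovered rule sequence."""
    path = find_rule_sequence(a, b)
    if path is None:
        return None, None
    result = c
    for name in path:
        result = RULES[name](result)
    return path, result

if __name__ == "__main__":
    path, answer = complete_analogy("abc", "abd", "ijk")
    print(path, answer)   # ['increment_last'] ijl
```

Breadth-first search is used here only because it returns the shortest rule sequence; the abstract does not commit Sifter to any particular search strategy.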
From Validation to Automated Repair & Beyond with Constraint Solving
Tremendous amounts of software engineering effort go into the validation of software. Developers rely on many forms of software validation, from unit tests to assertions and formal specifications, from dynamic contract checking to static formal verification, to ensure the reliability of software packages. Traditionally, however, the benefits seem to stop there, at checking whether there are problems. Once problems have been detected, those spent validation efforts play no role in the challenging task of debugging them, a task which requires manual, time-consuming, and error-prone developer effort.

The key insight of this dissertation is that we can leverage the efforts that developers currently put into the validation of software, such as unit tests and formal verification, to get software engineering benefits that go beyond validation, including automated software repair. Validation mechanisms can be elevated to this status using modern constraint solving, a technology that is already in use for the formal verification of software. I present three novel and practical instances of this idea, which I was able to identify by focusing on particular domains and scenarios. The first work, used during development, builds on unit testing as the most common form of validation, and exploits a constraint solving method to automatically fix a certain class of bugs in the source code (offline repair). The second builds on dynamic, specification-based validation, as in assertions and contracts used during development and testing, and applies it to deployed software to make it robust to unforeseen run-time failures by falling back to constraint solving (online repair). Finally, I use specifications and constraint solving to improve an existing validation methodology in test-driven development, used to enable testing when part of the depended-upon software is unavailable or hard to set up.
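As a sketch of the offline-repair idea, the following uses the Z3 SMT solver's Python bindings (z3-solver) to turn a unit-test suite into constraints over an unknown constant in a patch template, then asks the solver for a value that makes every test pass. The buggy function, the template, and the tests are hypothetical illustrations, not the dissertation's tools.

```python
# pip install z3-solver
from z3 import Int, Solver, sat

# Buggy implementation: students should pass with a score of at least 60,
# but the threshold was mistyped.
def passes(score):
    return score >= 90        # bug: should be 60

# Unit tests as (input score, expected result) pairs.
TESTS = [(45, False), (59, False), (60, True), (75, True), (90, True)]

# Patch template: same code shape, but the threshold becomes an unknown
# integer constant; each test contributes one constraint on it.
threshold = Int("threshold")
solver = Solver()
for score, expected in TESTS:
    # Symbolic result of the patched program on this concrete test input.
    solver.add((score >= threshold) == expected)

if solver.check() == sat:
    fix = solver.model()[threshold]
    print(f"repair found: replace 90 with {fix}")
else:
    print("no constant threshold satisfies all the tests")
```

With the tests above, the constraints pin the threshold to exactly 60, so the solver recovers the intended constant; when no template instantiation satisfies all tests, the unsat result tells the developer that this class of patch cannot fix the bug.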