MKorat: a novel approach for memoizing the Korat search and some potential applications
Writing logical constraints that describe properties of desired inputs enables an effective approach for systematic software testing, which can find many bugs. The key problem in systematic constraint-based testing is efficiently exploring very large spaces of all possible inputs to enumerate desired valid inputs. The Korat technique provides an effective solution to this problem. Korat uses desired input properties written as imperative predicates and implements a backtracking search that prunes large parts of the input space and enumerates all non-isomorphic inputs within a given bound on input size. Despite the effectiveness of Korat's pruning, systematically creating and running large numbers of tests can be costly in practice. Previous work introduced parallel test generation and execution using Korat to make it more practical. We build on a specific algorithm, SEQ-ON, introduced in previous work for equi-distancing candidate inputs, which allows re-execution of Korat for input generation using parallel workers with an evenly distributed workload. Our key insight is that the Korat search typically encounters many consecutive candidates that are all invalid inputs, and such invalid ranges of candidates can be memoized succinctly to optimize re-execution of Korat. We introduce a novel approach for memoizing Korat's checking of consecutive invalid candidates, embody the approach in three new techniques based on SEQ-ON, evaluate the techniques using a standard suite of data structure subjects to show the efficacy of our approach, and show some potential applications of it in two new application domains for Korat. We believe our work opens a promising new direction for optimizing the solving of imperative constraints and using them in novel application domains.
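The core idea of memoizing runs of consecutive invalid candidates can be illustrated with a minimal sketch (not the paper's actual implementation; the function names, the linear candidate ordering, and the `is_valid` predicate are illustrative assumptions):

```python
def first_pass(candidates, is_valid):
    """Scan candidates once, collecting valid ones and memoizing
    each maximal range [start, end] of consecutive invalid candidates."""
    valid, invalid_ranges, start = [], [], None
    for i, c in enumerate(candidates):
        if is_valid(c):
            if start is not None:
                invalid_ranges.append((start, i - 1))
                start = None
            valid.append(c)
        elif start is None:
            start = i  # open a new invalid range
    if start is not None:
        invalid_ranges.append((start, len(candidates) - 1))
    return valid, invalid_ranges

def re_execute(candidates, invalid_ranges):
    """Re-run generation using the memo: jump over each memoized
    invalid range without re-invoking the validity predicate."""
    ranges = iter(invalid_ranges)
    nxt = next(ranges, None)
    valid, i = [], 0
    while i < len(candidates):
        if nxt is not None and i == nxt[0]:
            i = nxt[1] + 1          # skip the whole invalid range
            nxt = next(ranges, None)
        else:
            valid.append(candidates[i])  # outside all memoized ranges, so valid
            i += 1
    return valid
```

Because a run of invalid candidates is stored as a single pair of indices, the memo stays small even when invalid runs are long, which is what makes the re-execution cheap.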
Analyzing the Uses of a Software Modeling Tool
While a lot of progress has been made in improving analyses and tools that aid software development, less effort has been spent on studying how such tools are commonly used in practice. A study of a tool's usage is important not only because it can help improve the tool's usability but also because it can help improve the tool's underlying analysis technology in common usage scenarios. This paper presents a study that explores how (beginner) users work with the Alloy Analyzer, a tool for automatic analysis of software models written in Alloy, a first-order, declarative language. Alloy has been successfully used in research and teaching for several years, but there has been no study of how users interact with the analyzer. We have modified the analyzer to log (some of) its interactions with the user. Using this modified analyzer, 11 students in two graduate classes formulated their Alloy models to solve a problem set (involving two problems, each with one model). Our analysis of the resulting logs (a total of 68 analyzer sessions) yields several interesting observations; based on them, we propose how to improve the analyzer, both the performance of analyses and the user interaction. Specifically, we show that: (i) users often perform consecutive analyses with slightly different models, and thus incremental analysis can speed up the interaction; (ii) users' interaction with the analyzer is sometimes predictable, and akin to continuous compilation, the analyzer can precompute the result of a future action while the user is editing the model; and (iii) (beginner) users can naturally develop semantically equivalent models that have significantly different analysis times, so it is useful to study manual and automatic model transformations that can improve performance.
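Observation (i) above, that consecutive analyses often differ only slightly, suggests a simple caching layer in front of the analyzer. The sketch below is a hypothetical illustration of that idea, not the Alloy Analyzer's API: the `analyze` callback and the text-hash cache key are assumptions, and a real incremental analysis would reuse partial solver state rather than whole results.

```python
import hashlib

class CachingAnalyzer:
    """Wrap an analysis function so that re-analyzing an unchanged
    model text returns the cached result instead of re-running."""

    def __init__(self, analyze):
        self.analyze = analyze   # hypothetical backend, e.g. a SAT-based check
        self.cache = {}          # model-text hash -> previous result
        self.runs = 0            # how many times the backend actually ran

    def run(self, model_text):
        key = hashlib.sha256(model_text.encode("utf-8")).hexdigest()
        if key not in self.cache:
            self.runs += 1
            self.cache[key] = self.analyze(model_text)
        return self.cache[key]
```

This only saves work when the model is byte-identical; the paper's incremental-analysis proposal targets the more common case of small edits between runs, which would require diffing models rather than hashing them whole.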