Technical Reports (2004 - 2009)
Authors of Technical Reports (2005-2009): Choueiry, Berthe; Cohen, Myra; Deogun, Jitender; Dwyer, Matthew; Elbaum, Sebastian; Goddard, Steve; Henninger, Scott; Jiang, Hong; Lu, Ying; Ramamurthy, Byrav; Rothermel, Gregg; Scott, Stephen; Seth, Sharad; Soh, Leen-Kiat; Srisa-an, Witty; Swanson, David; Variyam, Vinodchandran; Wang, Jun; Xu, Lisong
Swarm testing
Swarm testing is a novel and inexpensive way to improve the diversity of test cases generated during random testing. Increased diversity leads to improved coverage and fault detection. In swarm testing, the usual practice of potentially including all features in every test case is abandoned. Rather, a large "swarm" of randomly generated configurations, each of which omits some features, is used, with configurations receiving equal resources. We have identified two mechanisms by which feature omission leads to better exploration of a system's state space. First, some features actively prevent the system from executing interesting behaviors; e.g., "pop" calls may prevent a stack data structure from executing a bug in its overflow detection logic. Second, even when there is no active suppression of behaviors, test features compete for space in each test, limiting the depth to which logic driven by features can be explored. Experimental results show that swarm testing increases coverage and can improve fault detection dramatically; for example, in a week of testing it found 42% more distinct ways to crash a collection of C compilers than did the heavily hand-tuned default configuration of a random tester.
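The core idea is small enough to sketch directly: each configuration in the swarm enables a random subset of features, and test cases generated from a configuration use only its enabled features. A minimal sketch (the feature names and parameters are illustrative, not taken from the paper):

```python
import random

# Hypothetical feature set for testing a stack implementation;
# the names are illustrative only.
FEATURES = ["push", "pop", "peek", "clear", "resize"]

def make_swarm(num_configs, omit_prob=0.5, rng=random):
    """Build a swarm of configurations, each omitting features at random."""
    swarm = []
    for _ in range(num_configs):
        config = [f for f in FEATURES if rng.random() > omit_prob]
        if config:  # every configuration must enable at least one feature
            swarm.append(config)
    return swarm

def generate_test(config, length, rng=random):
    """A random test case draws only from its configuration's features."""
    return [rng.choice(config) for _ in range(length)]

rng = random.Random(0)
swarm = make_swarm(20, rng=rng)
tests = [generate_test(c, 10, rng=rng) for c in swarm]
# Configurations that happen to omit "pop" let the stack grow unchecked,
# exercising overflow-handling code that mixed tests rarely reach.
```

Each configuration receives equal testing resources; diversity comes from the omissions, not from tuning.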
MKorat: a novel approach for memoizing the Korat search and some potential applications
Writing logical constraints that describe properties of desired inputs enables an effective approach for systematic software testing, which can find many bugs. The key problem in systematic constraint-based testing is efficiently exploring very large spaces of all possible inputs to enumerate desired valid inputs. The Korat technique provides an effective solution to this problem. Korat uses desired input properties written as imperative predicates and implements a backtracking search that prunes large parts of the input space and enumerates all non-isomorphic inputs within a given bound on input size. Despite the effectiveness of Korat’s pruning, systematically creating and running large numbers of tests can be costly in practice. Previous work introduced parallel test generation and execution using Korat to make it more practical. We build on a specific algorithm, SEQ-ON, introduced in previous work for equi-distancing candidate inputs, which allows re-execution of Korat for input generation using parallel workers with evenly distributed workload. Our key insight is that the Korat search typically encounters many consecutive candidates that are all invalid inputs, and such invalid ranges of candidates can be memoized succinctly to optimize re-execution of Korat. We introduce a novel approach for memoizing Korat’s checking of consecutive invalid candidates, embody the approach in three new techniques based on SEQ-ON, evaluate the techniques using a standard suite of data structure subjects to show the efficacy of our approach, and show some potential applications of it in two new application domains for Korat. We believe our work opens a promising new direction for optimizing the solving of imperative constraints and using them in novel application domains.
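The key insight — that long runs of consecutive invalid candidates can be memoized as ranges and skipped on re-execution — can be illustrated with a toy enumeration over integer-encoded candidates. (The encoding and predicate are stand-ins; real Korat explores candidate vectors against imperative repOK predicates.)

```python
def memoized_enumerate(num_candidates, rep_ok):
    """First pass: enumerate every candidate, recording each maximal run of
    consecutive invalid candidates as a (start, end) range."""
    valid, invalid_ranges = [], []
    run_start = None
    for i in range(num_candidates):
        if rep_ok(i):
            if run_start is not None:
                invalid_ranges.append((run_start, i))
                run_start = None
            valid.append(i)
        elif run_start is None:
            run_start = i
    if run_start is not None:
        invalid_ranges.append((run_start, num_candidates))
    return valid, invalid_ranges

def replay(num_candidates, rep_ok, invalid_ranges):
    """Re-execution: jump over memoized invalid ranges instead of re-checking
    every candidate in them."""
    valid, i, r = [], 0, 0
    while i < num_candidates:
        if r < len(invalid_ranges) and invalid_ranges[r][0] == i:
            i = invalid_ranges[r][1]   # skip the whole invalid run
            r += 1
        else:
            if rep_ok(i):
                valid.append(i)
            i += 1
    return valid

# Toy predicate standing in for an imperative repOK: only multiples of 7 valid.
pred = lambda c: c % 7 == 0
first, ranges = memoized_enumerate(50, pred)
assert replay(50, pred, ranges) == first
```

The sparser the valid inputs, the more predicate executions the replay avoids, which is exactly the regime the abstract describes.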
Leveraging Formal Methods and Fuzzing to Verify Security and Reliability Properties of Large-Scale High-Consequence Systems
Formal methods describe a class of system analysis techniques that seek to prove specific properties about analyzed designs, or locate flaws compromising those properties. As an analysis capability, these techniques are the subject of increased interest from both internal and external customers of Sandia National Laboratories. Given this lab's other areas of expertise, Sandia is uniquely positioned to advance the state-of-the-art with respect to several research and application areas within formal methods. This research project was a one-year effort funded by Sandia's Cyber Security S&T Investment Area in its Laboratory Directed Research & Development program to investigate the opportunities for formal methods to impact Sandia's present mission areas, more fully understand the needs of the research community in the area of formal methods and where Sandia can contribute, and clarify from those potential research paths those that would best advance the mission-area interests of Sandia. The accomplishments from this project reinforce the utility of formal methods in Sandia, particularly in areas relevant to Cyber Security, and set the stage for continued Sandia investments to ensure this capability is utilized and advanced within this laboratory to serve the national interest.
Swarm Verification Techniques
The range of verification problems that can be solved with logic model checking tools has increased significantly in the last few decades. This increase in capability is based on algorithmic advances and new theoretical insights, but it has also benefitted from the steady increase in processing speeds and main memory sizes on standard computers. The steady increase in processing speeds, though, ended when chip-makers started redirecting their efforts to the development of multicore systems. For the near-term future, we can anticipate the appearance of systems with large numbers of CPU cores, but without matching increases in clock-speeds. We will describe a model checking strategy that can allow us to leverage this trend and that allows us to tackle significantly larger problem sizes than before. ©2012 IEEE.
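The strategy behind swarm verification can be sketched as many independent, diversified, resource-bounded searches whose visited sets are combined. The toy state space and bounds below stand in for what a model checker would explore per core:

```python
import random

def dfs(successors, init, max_states, seed):
    """One bounded search with a randomized exploration order; in a swarm
    run, each CPU core would execute one such diversified variant."""
    rng = random.Random(seed)
    seen, stack = {init}, [init]
    while stack and len(seen) < max_states:
        s = stack.pop()
        succ = successors(s)
        rng.shuffle(succ)               # diversification: vary search order
        for t in succ:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# Toy state space: states 0..N-1 with two transitions per state.
N = 10000
succ = lambda s: [(2 * s + 1) % N, (3 * s + 2) % N]

single = dfs(succ, 0, max_states=2000, seed=0)
swarm = set().union(*(dfs(succ, 0, max_states=2000, seed=k) for k in range(8)))
# Eight diversified bounded searches jointly cover at least as many states
# as any single run, typically far more.
```

No run can fit the whole state space in its budget, but because the orders differ, the union of the runs covers regions any single bounded search would miss.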
This is the publisher’s final PDF. The published article is copyrighted by IEEE Computer Society and can be found at: http://www.computer.org/portal/web/publications. Keywords: Distributed algorithms, Software engineering tools and techniques, Logic model checking, Software verification
Spatio-temporal logics for verification and control of networked systems
Emergent behaviors in networks of locally interacting dynamical systems have been a topic of great interest in recent years. As the complexity of these systems increases, so does the range of emergent properties that they exhibit. Due to recent developments in areas such as synthetic biology and multi-agent robotics, there has been a growing necessity for a formal and automated framework for studying global behaviors in such networks. We propose a formal methods approach for describing, verifying, and synthesizing complex spatial and temporal network properties.
Two novel logics are introduced in the first part of this dissertation: Tree Spatial Superposition Logic (TSSL) and Spatial Temporal Logic (SpaTeL). The former is a purely spatial logic capable of formally describing global spatial patterns. The latter is a temporal extension of TSSL and is ideal for expressing how patterns evolve over time. We demonstrate how machine learning techniques can be utilized to learn logical descriptors from labeled and unlabeled system outputs. Moreover, these logics are equipped with quantitative semantics and thus provide a metric for distance to satisfaction for randomly generated system trajectories. We illustrate how this metric is used in a statistical model checking framework for verification of networks of stochastic systems.
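Quantitative semantics replace a true/false verdict with a signed distance to satisfaction. TSSL and SpaTeL are defined over spatial quad-tree structures, but the flavor of such a metric can be shown in the simpler signal-temporal style of min/max robustness (illustrative only, not the dissertation's actual definitions):

```python
def rob_always_gt(signal, c):
    """Robustness of 'always signal > c': positive iff the property holds,
    and its magnitude measures the margin of satisfaction or violation."""
    return min(signal) - c

def rob_eventually_gt(signal, c):
    """Robustness of 'eventually signal > c'."""
    return max(signal) - c

# A randomly generated system trajectory (toy data).
traj = [0.2, 0.8, 1.5, 1.1]
assert rob_eventually_gt(traj, 1.0) == 0.5   # satisfied with margin 0.5
assert rob_always_gt(traj, 1.0) == -0.8      # violated by margin 0.8
```

A statistical model checker can average or threshold such scores over many sampled trajectories of a stochastic network, which is how the metric feeds verification.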
The parameter synthesis problem is considered in the second part, where the goal is to determine static system parameters that lead to the emergence of desired global behaviors. We use quantitative semantics to formulate optimization procedures with the purpose of tuning system inputs. Particle swarm optimization is employed to efficiently solve these optimization problems, and the efficacy of this framework is demonstrated in two applications: biological cell networks and smart power grids.
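Particle swarm optimization itself is simple to sketch: particles track their personal best and the swarm's global best robustness score and move toward both. A minimal version with a toy robustness function standing in for the quantitative semantics of a spatio-temporal formula (all hyperparameters are illustrative):

```python
import random

def pso(robustness, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization maximizing a robustness score."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [robustness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = robustness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy robustness: positive iff the static parameters are near (1, 2), a
# stand-in for "degree of satisfaction" of a desired global pattern.
rob = lambda p: 1.0 - ((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
params, score = pso(rob, dim=2)
```

Because PSO needs only function evaluations, it suits robustness scores obtained by simulating a network model, where gradients are unavailable.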
The focus of the third part is the control synthesis problem, where the objective is to find time-varying control strategies. We propose two approaches to solve this problem: an exact solution based on mixed integer linear programming, and an approximate solution based on gradient descent. These algorithms are not restricted to the logics introduced in this dissertation and can be applied to other existing logics in the literature. Finally, the capabilities of our framework are shown in the context of multi-agent robotics and robotic swarms.
Using Software Model Checking for Software Certification
Software certification is defined as the process of independently confirming that a system or component complies with its specified requirements and is acceptable for use. It consists of the following steps: (1) the software producer subjects her software to rigorous testing and submits for certification, among other documents, evidence that the software has been thoroughly verified, and (2) the certifier evaluates the completeness of the verification and confirms that the software meets its specifications. The certification process is typically a manual evaluation of thousands of pages of documents that the software producer submits. Moreover, most of the current certification techniques focus on certifying testing results, but there is an increase in using formal methods to verify software. Model checking is a formal verification method that systematically explores the entire execution state space of a software program to ensure that a property is satisfied in every program state.
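That exhaustive exploration is straightforward to sketch for an explicit-state checker: breadth-first search over reachable states, returning a counterexample path if the property fails anywhere. (A toy sketch, not the certification tooling the dissertation builds on.)

```python
from collections import deque

def check_invariant(init, successors, holds):
    """Exhaustive explicit-state search: return a counterexample path to a
    state violating `holds`, or None if the property holds in every
    reachable state."""
    seen = {init}
    queue = deque([(init, [init])])
    while queue:
        s, path = queue.popleft()
        if not holds(s):
            return path
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [t]))
    return None

# Toy model: a counter that wraps at 8; the invariant "counter != 5" fails.
succ = lambda s: [(s + 1) % 8]
cex = check_invariant(0, succ, lambda s: s != 5)
assert cex == [0, 1, 2, 3, 4, 5]
assert check_invariant(0, succ, lambda s: s < 8) is None
```

Because every reachable state is enumerated, a completed run is itself evidence of correctness, which is what makes model-checking results attractive artifacts for certification.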
As the field of model checking matures, there is a growing interest in its use for verification. In fact, several industrial-sized software projects have used model checking for verification, and there has been an increased push for techniques, preferably automated, to certify model checking results. Motivated by these challenges in certification, we have developed a set of automated techniques to certify model-checking results.
One technique, called search-carrying code (SCC), uses information collected by a model checker during the verification of a program to speed up the certification of that program. In SCC, the software producer's model checker performs an exhaustive search of a program's state space and creates a search script that acts as a certificate of verification. The certifier's model checker uses the search script to partition its search task into a number of smaller, roughly balanced tasks that can be distributed to parallel model checkers, thereby using parallelization to speed up certification.
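The partitioning step can be sketched as a classic load-balancing problem: treat each subtree recorded in the search script as a task with a size, and assign tasks greedily to the least-loaded certifier. (The script format and sizes here are invented for illustration.)

```python
def partition_script(subtree_sizes, num_workers):
    """Greedy balancing: give each recorded subtree to the least-loaded
    worker, largest subtrees first, yielding roughly equal tasks."""
    tasks = [[] for _ in range(num_workers)]
    loads = [0] * num_workers
    for idx, size in sorted(enumerate(subtree_sizes), key=lambda p: -p[1]):
        w = loads.index(min(loads))   # least-loaded worker so far
        tasks[w].append(idx)
        loads[w] += size
    return tasks, loads

# Hypothetical subtree sizes recorded in a search script.
tasks, loads = partition_script([9, 7, 6, 5, 4, 3, 1], num_workers=3)
```

Each parallel model checker then re-verifies only its assigned subtrees, so certification time shrinks roughly with the number of workers.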
When memory resources are limited, the producer's model checker can reduce its memory requirements by caching only a subset of the model-checking search results. Caching increases the likelihood that an SCC verification task runs to completion and produces a search script that represents the program's entire state space. The downside of caching is that it can increase search time. We introduce cost-based caching, which achieves an exhaustive search faster than existing caching techniques.
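Cost-based caching can be illustrated with a bounded visited-state cache that, when full, evicts the state judged cheapest to re-explore; evicted states may be re-expanded later, trading time for memory while the search still reaches every state. (Toy state space and cost function; the real technique operates inside a model checker.)

```python
import heapq

def bounded_search(successors, init, cache_size, cost):
    """DFS with a bounded visited-state cache: when the cache is full, evict
    the state with the lowest re-exploration cost (cost-based caching,
    greatly simplified). Returns the number of state expansions."""
    cache = {}   # state -> its cost at caching time
    heap = []    # (cost, state) min-heap over cached states
    stack = [init]
    expansions = 0
    while stack:
        s = stack.pop()
        if s in cache:
            continue
        expansions += 1
        if len(cache) >= cache_size:
            while heap:                      # evict the cheapest cached state
                c, victim = heapq.heappop(heap)
                if victim in cache and cache[victim] == c:
                    del cache[victim]
                    break
        cache[s] = cost(s)
        heapq.heappush(heap, (cost(s), s))
        stack.extend(successors(s))
    return expansions

# Toy acyclic state space 0..20; deeper states are assumed costlier to redo.
succ = lambda s: [t for t in (s + 1, s + 2) if t <= 20]
full = bounded_search(succ, 0, cache_size=10**6, cost=lambda s: s)
small = bounded_search(succ, 0, cache_size=8, cost=lambda s: s)
# A smaller cache never reduces work: small >= full.
```

The cost function is the lever: evicting states that are cheap to re-reach keeps the extra search time small relative to the memory saved.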
Finally, for cases when an exhaustive search is not possible, we present a novel method for estimating the state-space coverage of a partial model checking run. The coverage estimate can help the certifier determine whether the partial model-checking results are adequate for certification.
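One simple way to estimate coverage of a partial run is Monte Carlo sampling: take random walks through the model and measure what fraction of the states encountered were already visited. (This estimator is a stand-in for illustration; the dissertation's actual method is not described in the abstract.)

```python
import random

def estimate_coverage(visited, successors, init, walks=200, depth=50, seed=0):
    """Estimate the fraction of the reachable state space covered by
    `visited` by sampling random walks from the initial state."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(walks):
        s = init
        for _ in range(depth):
            total += 1
            hits += s in visited
            nxt = successors(s)
            if not nxt:
                break
            s = rng.choice(nxt)
    return hits / total

# Toy model; suppose a partial run visited only states below 300.
succ = lambda s: [(7 * s + 1) % 1000, (11 * s + 3) % 1000]
partial = estimate_coverage(set(range(300)), succ, 0)
assert 0.0 < partial < 1.0
assert estimate_coverage(set(range(1000)), succ, 0) == 1.0
```

An estimate near 1.0 suggests the partial run explored most of the reachable states; a low estimate tells the certifier the evidence is too thin.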