Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: blackbox or whitebox. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated inputs, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not reason about the software under test to generate inputs, they can generate more inputs per unit time, all other factors being equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study intended to characterize an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and on parts of a file compression library.
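To make the blackbox side concrete, the following is a minimal sketch of such a fuzzing loop, written here in Java and using java.util.zip.Inflater as a stand-in for a file compression routine; the class name and target are illustrative assumptions, not the implementation described in the thesis.

```java
import java.util.Random;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Minimal blackbox fuzzing loop: generate random byte buffers and feed them
// to a decompression routine, recording any unexpected exception as a finding.
public class BlackboxFuzzSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);          // fixed seed for reproducibility
        byte[] output = new byte[4096];
        for (int i = 0; i < 100_000; i++) {
            byte[] input = new byte[1 + rng.nextInt(512)];
            rng.nextBytes(input);             // no knowledge of the program: pure blackbox
            Inflater inflater = new Inflater();
            inflater.setInput(input);
            try {
                inflater.inflate(output);     // target routine under test
            } catch (DataFormatException expected) {
                // malformed input is the normal outcome for random bytes
            } catch (RuntimeException crash) {
                System.err.println("Potential bug on input #" + i + ": " + crash);
            } finally {
                inflater.end();
            }
        }
    }
}
```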
Mimicking Production Behavior with Generated Mocks
Mocking in the context of automated software tests allows testing program units in isolation. Designing realistic interactions between a unit and its environment, and understanding the expected impact of these interactions on the behavior of the unit, are two key challenges that software testers face when developing tests with mocks. In this paper, we propose to monitor an application in production to generate tests that mimic realistic execution scenarios through mocks. Our approach operates in three phases. First, we instrument a set of target methods for which we want to generate tests, as well as the methods that they invoke, which we refer to as mockable method calls. Second, in production, we collect data about the context in which target methods are invoked, as well as the parameters and the returned value for each mockable method call. Third, offline, we analyze the production data to generate test cases with realistic inputs and mock interactions. The approach is automated and implemented in an open-source tool called RICK. We evaluate our approach with three real-world, open-source Java applications. RICK monitors the invocation of 128 methods in production across the three applications and captures their behavior. Next, RICK analyzes the production observations in order to generate test cases that include rich initial states and test inputs, mocks and stubs that recreate actual interactions between the method and its environment, as well as mock-based oracles. All the test cases are executable, and 52.4% of them successfully mimic the complete execution context of the target methods observed in production. We interview 5 developers from the industry who confirm the relevance of using production observations to design mocks and stubs.
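As an illustration of the kind of test such an approach produces, here is a hand-written sketch using Mockito and JUnit 5; the PaymentService and ExchangeRateClient types and the captured values are hypothetical and do not come from RICK or its evaluation. The stub replays a parameter/return pair observed in production, and the verify call acts as a mock-based oracle on the interaction.

```java
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: converts an amount using a rate fetched
// from an external service (the mockable method call).
class PaymentService {
    private final ExchangeRateClient rates;
    PaymentService(ExchangeRateClient rates) { this.rates = rates; }
    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

interface ExchangeRateClient {
    double rateFor(String currency);
}

class PaymentServiceGeneratedTest {
    @Test
    void convertMimicsProductionObservation() {
        // Stub recreates the parameter/return pair observed in production.
        ExchangeRateClient rates = mock(ExchangeRateClient.class);
        when(rates.rateFor("EUR")).thenReturn(1.08);

        PaymentService service = new PaymentService(rates);
        double result = service.convert(100.0, "EUR");   // production input

        assertEquals(108.0, result, 1e-9);                // production output
        verify(rates).rateFor("EUR");                     // mock-based oracle on the interaction
    }
}
```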
Constraint-based generation of database states for testing database applications
Testing is essential for quality assurance of database applications. Testing a database application usually requires test inputs consisting of both program input values and corresponding database states. However, producing these tests manually can be very tedious and labor-intensive, so automatic test generation is needed to reduce human effort.
The research focuses on automatic generation of both program input values and corresponding database states for testing database applications. We develop our approaches based on the Dynamic Symbolic Execution (DSE) technique to achieve various testing requirements. We formalize the problem of program-input generation given an existing database state to achieve high program code coverage, and we propose an approach that conducts program-input generation through auxiliary query construction based on the intermediate information accumulated during DSE's exploration. We develop a technique to generate database states to achieve advanced code coverage criteria such as Boundary Value Coverage and Logical Coverage. We develop an approach that constructs synthesized database interactions to guide DSE's exploration and collect constraints for both program inputs and associated database states. In this way, we bridge the various constraints within a database application: query-construction constraints, query constraints, database schema constraints, and query-result-manipulation constraints. We develop an approach that generates tests for mutation testing on database applications. We use a state-of-the-art white-box testing tool from Microsoft Research, Pex for .NET, as the DSE engine. Empirical evaluation results show that our approaches are able to generate effective program input values and sufficient database states to achieve various testing requirements.
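To make the bridged constraint categories concrete, the following hand-written JDBC fragment (illustrative only; the thesis targets .NET applications with Pex) annotates, in comments, the constraints a DSE engine would have to solve jointly to cover the inner branch.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerQueries {
    // Covering the "premium" branch requires bridging several constraints:
    //  - query-construction constraint: the program input `minAge` flows into the query;
    //  - query constraint: some row must satisfy age >= minAge;
    //  - database schema constraint: balance and age must be valid integer columns;
    //  - query-result-manipulation constraint: the returned balance must exceed 10000.
    public static int countPremium(Connection db, int minAge) throws SQLException {
        int premium = 0;
        try (PreparedStatement stmt =
                 db.prepareStatement("SELECT balance FROM customers WHERE age >= ?")) {
            stmt.setInt(1, minAge);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    if (rs.getInt("balance") > 10_000) {   // branch needing a suitable DB state
                        premium++;
                    }
                }
            }
        }
        return premium;
    }
}
```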
Study of the effects of SEU-induced faults on a pipeline protected microprocessor
This paper presents a detailed analysis of the behavior of a novel fault-tolerant 32-bit embedded CPU, compared to a default (non-fault-tolerant) implementation of the same processor, during a fault injection campaign of single and double faults. The fault-tolerant processor tested is characterized by per-cycle voting of microarchitectural and flop-based architectural states, redundancy at the pipeline level, and a distributed voting scheme. Its fault-tolerant behavior is characterized for three different workloads from the automotive application domain. The study proposes statistical methods for both the single and dual fault injection campaigns and demonstrates the fault-tolerance capability of both processors in terms of fault latencies, the probability of fault manifestation, and the behavior of latent faults.
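As a rough illustration of what injecting single and double faults into an architectural state means, the sketch below flips one or two bits of a 32-bit word; this is a toy software model, not the fault injection setup used in the paper.

```java
import java.util.Random;

// Toy model of single/double bit-flip injection into a 32-bit state word.
public class SeuInjectionSketch {
    private static final Random RNG = new Random();

    // Flip one randomly chosen bit, simulating a single-event upset.
    static int injectSingleFault(int word) {
        return word ^ (1 << RNG.nextInt(32));
    }

    // Flip two distinct randomly chosen bits, simulating a double fault.
    static int injectDoubleFault(int word) {
        int first = RNG.nextInt(32);
        int second;
        do {
            second = RNG.nextInt(32);
        } while (second == first);
        return word ^ (1 << first) ^ (1 << second);
    }

    public static void main(String[] args) {
        int register = 0xCAFEBABE;
        System.out.printf("original: %08X, single: %08X, double: %08X%n",
                register, injectSingleFault(register), injectDoubleFault(register));
    }
}
```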
The Influence of the Ready Intelligence Program on Crewmembers' Perception of Proficiency in an Air Force Weapon System
A lack of evaluation and evidence of effectiveness prompted this study of the Distributed Common Ground System's (DCGS) proficiency maintenance tool, the Ready Intelligence Program (RIP). The goal was to close the gap between research and practice and to inform stakeholders at the local Distributed Ground Station (DGS) of evaluation results. Guided by a logic model as the theoretical foundation, this study examined how proficiency is perceived by DCGS crewmembers as a result of RIP at a military installation with intelligence, surveillance, and reconnaissance missions. This qualitative study used an outcomes-based program evaluation report based on interviews with 5 crewmembers, observations of program participant activities, and reviews of training documents and program reports. Data were transcribed into NVivo 10 for organization, and inductive code words and categories were applied. Data interpretations were confirmed via triangulation and then sent to the participants for member-checking. An external evaluator reviewed the study's methodology, data, and findings for veracity. The project that resulted from the study was a program evaluation report that identified 4 overarching themes. It was concluded that (a) there was a lack of awareness of RIP, (b) RIP had minimal impact on perception of proficiency, (c) the program was occasionally applied ineffectively, and (d) management of the program was insufficient. It is recommended that existing RIP training be emphasized to crewmembers to increase awareness. Additionally, an ongoing program evaluation is recommended, with a quantitative measure of proficiency achievement. This study promotes social change by improving attitudes toward positional proficiency and RIP as a maintenance tool, improving program maintenance, and facilitating regular program evaluations.
NASA/ASEE Summer Faculty Fellowship Program
The contractor's report contains all sixteen final reports prepared by the participants in the 1989 Summer Faculty Fellowship Program. The reports describe research projects on a number of different topics. Interface software, metal corrosion, rocket-triggered lightning, automatic drawing, 60-Hertz power, the carotid-cardiac baroreflex, acoustic fields, robotics, AI, CAD/CAE, cryogenics, titanium, and flow measurement are discussed.
Combining Static and Dynamic Analysis for Bug Detection and Program Understanding
This work proposes new combinations of static and dynamic analysis for bug detection and program understanding. There are 3 related but largely independent directions: a) In the area of dynamic invariant inference, we improve the consistency of dynamically discovered invariants by taking into account second-order constraints that encode knowledge about invariants; the second-order constraints are either supplied by the programmer or vetted by the programmer (among candidate constraints suggested automatically); b) In the area of testing dataflow (esp. map-reduce) programs, our tool, SEDGE, achieves higher testing coverage by leveraging existing input data and generalizing them using a symbolic reasoning engine (a powerful SMT solver); c) In the area of bug detection, we identify and present the concept of residual investigation: a dynamic analysis that serves as the runtime agent of a static analysis. Residual investigation identifies with higher certainty whether an error reported by the static analysis is likely true.
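The residual-investigation idea can be illustrated with a small hand-rolled runtime check (not the paper's tooling): given a static warning that a class overrides equals but not hashCode, the dynamic side reports a finding only when an instance of such a class is actually placed into a hash-based collection, where the bug can manifest.

```java
import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.Set;

// Toy residual investigation: confirm a static "equals without hashCode" warning
// only when an object of the suspicious class is actually used as a hash key at runtime.
public class ResidualCheckSketch {

    static boolean overridesEqualsButNotHashCode(Class<?> c) {
        try {
            Method equals = c.getMethod("equals", Object.class);
            Method hashCode = c.getMethod("hashCode");
            return equals.getDeclaringClass() != Object.class
                && hashCode.getDeclaringClass() == Object.class;
        } catch (NoSuchMethodException e) {
            return false; // every class inherits both methods from Object
        }
    }

    static <T> void addWithCheck(Set<T> set, T element) {
        if (set instanceof HashSet && overridesEqualsButNotHashCode(element.getClass())) {
            System.err.println("residual finding: " + element.getClass().getName()
                + " used as hash key without overriding hashCode");
        }
        set.add(element);
    }

    // Suspicious class: overrides equals but not hashCode.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
    }

    public static void main(String[] args) {
        addWithCheck(new HashSet<>(), new Point(1, 2)); // triggers the residual finding
    }
}
```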
Automatic Test Data Generation Using Constraint Programming and Search Based Software Engineering Techniques
Proving that a software system corresponds to its specification, or revealing hidden errors in its implementation, is a difficult, time-consuming, and tedious testing task that can account for more than 50% of the total software cost. Test-data generation is one of the most expensive parts of the software testing phase. Therefore, automating this task can significantly reduce software cost, development time, and time to market.
Many researchers have proposed automated approaches to generate test data.
Among the proposed approaches, the literature shows that Search-Based Software Test-Data Generation (SB-STDG) techniques can automatically generate test data.
However, these techniques are very sensitive to their guidance, which impacts the whole test-data generation process. A lack of relevant information about the test-data generation problem can weaken the guidance of SB-STDG and negatively affect its efficiency and effectiveness.
In this dissertation, our thesis is that statically analyzing the source code to identify and extract relevant information, and exploiting that information in the SB-STDG process, could offer more guidance and thus improve the efficiency and effectiveness of SB-STDG.
To extract information relevant for SB-STDG guidance, we statically analyze the internal structure of the source code focusing on six features, i.e., constants, conditional statements, arguments, data members, methods, and relationships.
Focusing on these features and using different existing static analysis techniques, i.e., constraint programming (CP), schema theory, and some lightweight static analyses,
we propose four approaches:
(1) focusing on arguments and conditional statements, we define a hybrid approach that uses CP techniques to guide SB-STDG in reducing its search space;
(2) focusing on conditional statements and using CP techniques, we define two new metrics that measure the difficulty of satisfying a branch (i.e., a condition), from which we derive two new fitness functions to guide SB-STDG;
(3) focusing on conditional statements and using schema theory, we tailor the genetic algorithm to better fit the problem of test-data generation;
(4) focusing on arguments, conditional statements, constants, data members, methods, and relationships, and using lightweight static analyses, we define an instance generator that generates relevant test-data candidates
and a new representation of the problem of object-oriented test-data generation that implicitly reduces the SB-STDG search space.
We show that these static analyses improve the efficiency and effectiveness of SB-STDG. The results achieved in this dissertation show important improvements in terms of effectiveness and efficiency. They are promising, and we hope that further research in the field of test-data generation can improve efficiency and effectiveness further.
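As a concrete illustration of the kind of guidance a fitness function provides in SB-STDG (the CP-based difficulty metrics defined in the thesis are not reproduced here), the sketch below computes a classical branch-distance fitness for an illustrative branch predicate; smaller values mean the candidate input is closer to covering the branch.

```java
// Classical branch-distance fitness used in search-based test-data generation:
// the closer an input comes to satisfying the target branch predicate, the
// smaller (better) its fitness. The target predicate here is illustrative.
public class BranchDistanceSketch {

    // Program under test (illustrative): the branch of interest is x == y && y > 10.
    static boolean targetBranch(int x, int y) {
        return x == y && y > 10;
    }

    // Branch distance: 0 when the predicate holds, positive otherwise.
    static double fitness(int x, int y) {
        double dEquals = Math.abs((double) x - y);          // distance for x == y
        double dGreater = y > 10 ? 0.0 : (10 - y) + 1.0;    // distance for y > 10
        return dEquals + dGreater;                           // conjunction: sum of distances
    }

    public static void main(String[] args) {
        System.out.println(fitness(3, 20));          // 17.0 - far from satisfying x == y
        System.out.println(fitness(20, 20));         // 0.0  - branch is covered
        System.out.println(targetBranch(20, 20));    // true - the predicate indeed holds
    }
}
```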