A survey on software testability
Context: Software testability is the degree to which a software system or a
unit under test supports its own testing. To predict and improve software
testability, a large number of techniques and metrics have been proposed by
both practitioners and researchers in the last several decades. Reviewing and
getting an overview of the entire state-of-the-art and state-of-the-practice in
this area is often challenging for a practitioner or a new researcher.
Objective: Our objective is to summarize the body of knowledge in this area and
to benefit the readers (both practitioners and researchers) in preparing,
measuring and improving software testability. Method: To address the above
need, the authors conducted a survey in the form of a systematic literature
mapping (classification) to find out what we as a community know about this
topic. After compiling an initial pool of 303 papers, and applying a set of
inclusion/exclusion criteria, our final pool included 208 papers. Results: The
area of software testability has been comprehensively studied by researchers
and practitioners. Approaches for measurement of testability and improvement of
testability are the most-frequently addressed in the papers. The two most often
mentioned factors affecting testability are observability and controllability.
Common ways to improve testability are testability transformation, improving
observability, adding assertions, and improving controllability. Conclusion:
This paper serves for both researchers and practitioners as an "index" to the
vast body of knowledge in the area of testability. The results could help
practitioners measure and improve software testability in their projects.
Black- and White-Box Self-testing COTS Components
Development of a software system from existing components can have various benefits, but can also entail a series of problems. One type of problem is caused by a limited exchange of information between the developer and the user of a component, i.e. the developer of a component-based system. A limited exchange of information can not only require testing by the user, it can also complicate
this task, since vital artifacts, source code in particular, might not be available. Self-testing components can be one response in such situations. This paper describes an enhancement of the Self-Testing COTS Components (STECC) Method so that an appropriately enabled component is not only capable of white-box testing its methods but is also capable of black-box testing.
Metamorphic Runtime Checking of Non-Testable Programs
Challenges arise in assuring the quality of applications that do not have test oracles, i.e., for which it is impossible to know what the correct output should be for arbitrary input. Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of these "non-testable programs". In metamorphic testing, if test input x produces output f(x), specified "metamorphic properties" are used to create a transformation function t, which can be applied to the input to produce t(x); this transformation then allows the output f(t(x)) to be predicted based on the already-known value of f(x). If the output is not as expected, then a defect must exist. Previously, we investigated the effectiveness of testing based on metamorphic properties of the entire application. Here, we improve upon that work by presenting a new technique called Metamorphic Runtime Checking, a testing approach that automatically conducts metamorphic testing of individual functions during the program's execution. We also describe an implementation framework called Columbus, and discuss the results of empirical studies that demonstrate that checking the metamorphic properties of individual functions increases the effectiveness of the approach in detecting defects, with minimal performance impact.
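The f(x) / t(x) / f(t(x)) scheme described in the abstract can be illustrated with a minimal sketch. The function under test (`mean`), the transformation, and the relation below are illustrative choices, not taken from the paper: permuting a list must not change its mean, so no oracle for the "correct" mean is needed.

```python
import math
import random

def mean(values):
    # Function under test: for arbitrary input we have no oracle
    # telling us what the correct output should be.
    return sum(values) / len(values)

def metamorphic_check(f, x, transform, relate):
    """Compute f(x) and f(t(x)), then verify the expected relation
    between the two outputs; a violation reveals a defect."""
    fx = f(x)
    ftx = f(transform(x))
    return relate(fx, ftx)

# Metamorphic property: permuting the input must not change the mean.
x = [random.random() for _ in range(100)]
ok = metamorphic_check(
    mean,
    x,
    transform=lambda xs: list(reversed(xs)),
    relate=lambda a, b: math.isclose(a, b),
)
assert ok  # the property holds, so no defect is revealed by this test
```

The same harness accepts other properties of the kind the paper relies on, e.g. scaling every input by 2 should scale the mean by 2.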
State of the art in testing components
The use of components in the development of complex software systems can have various benefits. Their testing, however, is still one of the open issues in software engineering. Both the developer of a component and the developer of a system using components often face the problem that information vital for certain development tasks is not available. Such a lack of information has various consequences
for both. One important consequence is that it might not only obligate the developer of a system to test the components used, it might also complicate these tests. This article gives an overview of component testing approaches that explicitly respect a lack of information in development.
Using Metamorphic Testing at Runtime to Detect Defects in Applications without Test Oracles
First, we will present an approach called Automated Metamorphic System Testing. This will involve automating system-level metamorphic testing by treating the application as a black box and checking that the metamorphic properties of the entire application hold after execution. This will allow for metamorphic testing to be conducted in the production environment without affecting the user, and will not require the tester to have access to the source code. The tests do not require an oracle upon their creation; rather, the metamorphic properties act as built-in test oracles. We will also introduce an implementation framework called Amsterdam. Second, we will present a new type of testing called Metamorphic Runtime Checking. This involves the execution of metamorphic tests from within the application, i.e., the application launches its own tests, within its current context. The tests execute within the application's current state, and in particular check a function's metamorphic properties. We will also present a system called Columbus that supports the execution of the Metamorphic Runtime Checking from within the context of the running application. Like Amsterdam, it will conduct the tests with acceptable performance overhead, and will ensure that the execution of the tests does not affect the state of the original application process from the users' perspective; however, the implementation of Columbus will be more challenging in that it will require more sophisticated mechanisms for conducting the tests without pre-empting the rest of the application, and for comparing the results which may conceivably be in different processes or environments. Third, we will describe a set of metamorphic testing guidelines that can be followed to assist in the formulation and specification of metamorphic properties that can be used with the above approaches. 
These will categorize the different types of properties exhibited by many applications, in the domains of machine learning and data mining in particular (as a result of the types of applications we will investigate), but we will demonstrate that they are also generalizable to other domains. This set of guidelines will also correlate to the different types of defects that we expect the approaches will be able to find.
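The idea of Metamorphic Runtime Checking, tests launched from within the application against a function's own metamorphic properties, can be sketched as a decorator. This is only an illustration under simplifying assumptions: the names (`check_metamorphic`, `total`) are invented here, and unlike the Columbus framework described above, the check runs inline in the same process rather than in an isolated copy of the application's state.

```python
import functools
import math

def check_metamorphic(transform, relate):
    """Decorator sketch: alongside each real call, also run the function
    on a transformed copy of its argument and verify the expected
    relation between the two outputs, raising if it is violated."""
    def wrap(f):
        @functools.wraps(f)
        def inner(x):
            result = f(x)
            if not relate(result, f(transform(x))):
                raise AssertionError(
                    f"metamorphic property violated for {f.__name__}")
            return result
        return inner
    return wrap

# Illustrative property: doubling every input must double the sum.
@check_metamorphic(transform=lambda xs: [v * 2 for v in xs],
                   relate=lambda a, b: math.isclose(b, 2 * a))
def total(xs):
    return sum(float(v) for v in xs)

print(total([1, 2, 3]))  # a normal call; the property is checked as a side effect
```

A production version would additionally need the mechanisms the abstract mentions: running the check without pre-empting the application and comparing results across processes, which an inline decorator does not provide.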
Merging components and testing tools: The Self-Testing COTS Components (STECC) Strategy
Development of a software system from existing components can have various benefits, but can also entail a series of problems. One type of problem is caused by a limited exchange of information between the developer and the user of a component. A limited exchange, and thereby a lack of information, can have various consequences, among them the requirement to test a component prior to its integration into a software system. A lack of information can not only make testing prior to integration necessary, it can also complicate this task. This paper proposes a new strategy for testing components and making components testable. The basic idea of the strategy is to merge components and testing tools in order to make components capable of testing their own methods. Such components allow their thorough testing without disclosing detailed information, such as source code. This strategy thereby fulfills the needs of both the developer and the user of a component.