224 research outputs found
FSM quasi-equivalence testing via reduction and observing absence
There has been significant interest in automatically generating test cases from a non-deterministic finite state machine (FSM). Most approaches check that the behaviours of the system under test (SUT) are allowed by the specification FSM; they therefore test for reduction. However, sometimes one wants all of the behaviours, and so features, of the specification to be implemented, and then one is testing for equivalence. In this paper we first note that in order to test for equivalence one must effectively be able to observe that the SUT cannot produce an output in response to an input after a given trace; we model this as the absence of an output. We prove that the problem of testing for equivalence to an FSM can be mapped to testing for reduction to an FSM that extends it with absences. Thus, one can use techniques developed for testing for reduction when testing for equivalence. We then consider the case where the specification is partial, generalising the result to quasi-equivalence. These results are proved for observable specifications, and so we also show how a partial FSM can be mapped to an observable partial FSM from which we can test
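The flavour of the "absence" construction can be pictured with a small sketch. The Python fragment below is purely illustrative and rests on assumptions not taken from the paper: the specification is encoded as nested dictionaries mapping state and input to a set of (output, next state) pairs, and an absence simply loops in the current state; the paper's formal treatment of observable and partial machines is considerably richer.

```python
# Illustrative sketch only: extend a nondeterministic FSM with explicit
# "absence" observations, so that equivalence testing can be phrased as
# reduction testing against the extended machine.
# Encoding assumed here: spec[state][input] = set of (output, next_state).

def extend_with_absences(spec, outputs):
    """For every (state, input), add an explicit absence observation for
    each output the specification cannot produce there (absences loop in
    the current state in this simplified sketch)."""
    extended = {}
    for state, by_input in spec.items():
        extended[state] = {}
        for x, transitions in by_input.items():
            produced = {o for (o, _) in transitions}
            absences = {(("absent", o), state) for o in outputs - produced}
            extended[state][x] = set(transitions) | absences
    return extended

# Hypothetical two-state specification over outputs {0, 1}.
spec = {
    "s0": {"a": {(0, "s1"), (1, "s0")}},
    "s1": {"a": {(1, "s0")}},
}
extended = extend_with_absences(spec, outputs={0, 1})
# At s1 on input 'a', the extended machine also allows ("absent", 0): an
# SUT that cannot produce 0 there remains consistent, while an SUT that
# cannot produce 1 exhibits ("absent", 1), which is not allowed, so it
# fails reduction against the extension.
print(extended["s1"]["a"])
```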
Overcoming controllability problems in distributed testing from an input output transition system
This paper concerns the testing of a system with physically distributed interfaces, called ports, at which it interacts with its environment. We place a tester at each port and the tester at port p observes events at p only. This can lead to controllability problems, where the observations made by the tester at a port p are not sufficient for it to be able to know when to send an input. It is known that there are test objectives, such as executing a particular transition, that cannot be achieved if we restrict attention to test cases that have no controllability problems. This has led to interest in schemes where the testers at the individual ports send coordination messages to one another through an external communications network in order to overcome controllability problems. However, such approaches have largely been studied in the context of testing from a deterministic finite state machine. This paper investigates the use of coordination messages to overcome controllability problems when testing from an input output transition system and gives an algorithm for introducing sufficient messages. It also proves that the problem of minimising the number of coordination messages used is NP-hard
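The shape of a controllability problem, and of a coordination message that resolves it, can be shown with a toy sketch. This is not the algorithm from the paper; it assumes a sequential test represented as a list of (input port, set of ports observing the output) steps, with port numbers invented for the example.

```python
# Toy sketch: insert coordination messages into a sequential test so that
# the tester supplying the next input has observed something in the
# previous step.  Representation assumed: steps[i] = (input_port, out_ports).

def add_coordination_messages(steps):
    plan = []
    for i, (in_port, out_ports) in enumerate(steps):
        plan.append(("apply", i, in_port, sorted(out_ports)))
        if i + 1 < len(steps):
            next_port = steps[i + 1][0]
            involved = {in_port} | set(out_ports)
            if next_port not in involved:
                # Controllability problem: the tester at next_port saw
                # nothing in step i, so it cannot know when to send its
                # input; an involved tester tells it to proceed.
                plan.append(("coordinate", in_port, next_port))
    return plan

# Hypothetical 3-port test: inputs at ports 1, 2, 1; outputs observed as shown.
steps = [(1, {2}), (2, {2}), (1, {3})]
for action in add_coordination_messages(steps):
    print(action)
# Only the step from port 2 to port 1 needs a coordination message here.
```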
Amorphous procedure extraction
The procedure extraction problem is concerned with the meaning preserving formation of a procedure from a (not necessarily contiguous) selected set of statements. Previous approaches to the problem have used dependence analysis to identify the non-selected statements which must be 'promoted' (also selected) in order to preserve semantics. All previous approaches to the problem have been syntax preserving. This work shows that by allowing transformation of the program's syntax it is possible to extract both procedures and functions in an amorphous manner. That is, although the amorphous extraction process is meaning preserving it is not necessarily syntax preserving. The amorphous approach is advantageous in a variety of situations. These include when it is desirable to avoid promotion, when a value-returning function is to be extracted from a scattered set of assignments to a variable, and when side effects are present in the program from which the procedure is to be extracted
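A hypothetical before/after pair illustrates the idea of an amorphous, value-returning extraction; the code and names below are invented for this sketch and do not come from the paper.

```python
# Before: the assignments building `total` are scattered among unrelated
# statements, so a purely syntax-preserving extraction would have to
# "promote" the interleaved logging as well.
def report(items, log):
    total = 0
    log.append("start")          # unrelated statement
    for price in items:
        total += price
        log.append(price)        # unrelated statement
    log.append("done")           # unrelated statement
    print(total)

# After: an amorphous (syntax-changing but meaning-preserving) extraction
# gathers only the computation of `total` into a value-returning function,
# leaving the logging behind instead of promoting it.
def compute_total(items):
    total = 0
    for price in items:
        total += price
    return total

def report_refactored(items, log):
    log.append("start")
    for price in items:
        log.append(price)
    log.append("done")
    print(compute_total(items))

log1, log2 = [], []
report([3, 4], log1)
report_refactored([3, 4], log2)
assert log1 == log2  # observable behaviour (log contents and printed total) unchanged
```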
Using schedulers to test probabilistic distributed systems
Formal methods are one of the most important approaches to increasing the confidence in the correctness of software systems. A formal specification can be used as an oracle in testing since one can determine whether an observed behaviour is allowed by the specification. This is an important feature of formal testing: behaviours of the system observed in testing are compared with the specification and ideally this comparison is automated. In this paper we study a formal testing framework to deal with systems that interact with their environment at physically distributed interfaces, called ports, and where choices between different possibilities are probabilistically quantified. Building on previous work, we introduce two families of schedulers to resolve nondeterministic choices among different actions of the system. The first type of schedulers, which we call global schedulers, resolves nondeterministic choices by representing the environment as a single global scheduler. The second type, which we call localised schedulers, models the environment as a set of schedulers with there being one scheduler for each port. We formally define the application of schedulers to systems and provide and study different implementation relations in this setting
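A much simplified sketch contrasts the two families of schedulers. The model below (a flat set of enabled actions tagged with ports, resolved by uniformly random choices) is invented for illustration and ignores probabilities and observation histories, which the paper treats formally.

```python
# Toy contrast between a global scheduler and per-port localised schedulers.
import random

def global_scheduler(enabled, rng):
    """One scheduler for the whole environment: it sees every enabled
    action at every port and picks one of them directly."""
    return rng.choice(sorted(enabled))

def localised_schedulers(enabled, rng):
    """One scheduler per port: each port's scheduler only chooses among
    the actions at its own port; which port moves next is resolved
    uniformly at random in this sketch."""
    by_port = {}
    for port, action in enabled:
        by_port.setdefault(port, []).append((port, action))
    port = rng.choice(sorted(by_port))
    return rng.choice(sorted(by_port[port]))

rng = random.Random(0)
enabled = {(1, "a"), (1, "b"), (2, "c")}   # (port, action) pairs
print(global_scheduler(enabled, rng))       # may pick any of the three
print(localised_schedulers(enabled, rng))   # port chosen first, then a local action
```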
The effectiveness of refactoring, based on a compatibility testing taxonomy and a dependency graph
In this paper, we describe and then appraise a testing taxonomy proposed by van Deursen and Moonen (VD&M) based on the post-refactoring repeatability of tests. Four categories of refactoring are identified by VD&M ranging from semantic-preserving to incompatible, where, for the former, no new tests are required and for the latter, a completely new test set has to be developed. In our appraisal of the taxonomy, we heavily stress the need for the inter-dependence of the refactoring categories to be considered when making refactoring decisions and we base that need on a refactoring dependency graph developed as part of the research. We demonstrate that while incompatible refactorings may be harmful and time-consuming from a testing perspective, semantic-preserving refactorings can have equally unpleasant hidden ramifications despite their advantages. In fact, refactorings which fall into neither category have the most interesting properties. We support our results with empirical refactoring data drawn from seven Java open-source systems (OSS) and from the same analysis form a tentative categorization of code smells
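The role a dependency graph can play when weighing up a refactoring can be illustrated with a purely hypothetical sketch; the refactoring names and the meaning of the edges below are invented here and are not the graph constructed in the paper.

```python
# Illustrative only: query which other refactorings (and hence which test
# revisions) a candidate refactoring transitively entails, given a directed
# graph where an edge r1 -> r2 is read as "applying r1 tends to entail r2".

def entailed(graph, start):
    """All refactorings transitively reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        r = stack.pop()
        for nxt in graph.get(r, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical graph: a seemingly cheap rename drags in signature and
# call-site changes, which in turn require test updates.
graph = {
    "rename_method": ["change_signature"],
    "change_signature": ["update_call_sites"],
    "extract_method": [],
}
print(entailed(graph, "rename_method"))  # {'change_signature', 'update_call_sites'}
```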
Conformance relations for distributed testing based on CSP
CSP is a well established process algebra that provides comprehensive theoretical and practical support for refinement-based design and verification of systems. Recently, a testing theory for CSP has also been presented. In this paper, we explore the problem of testing from a CSP specification when observations are made by a set of distributed testers. We build on previous work on input-output transition systems, but the use of CSP leads to significant differences, since some of its conformance (refinement) relations consider failures as well as traces. In addition, we allow events to be observed by more than one tester. We show how the CSP notions of refinement can be adapted to distributed testing. We consider two contexts: when the testers are entirely independent and when they can cooperate. Finally, we give some preliminary results on test-case generation and the use of coordination messages.
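One reason distributed observation weakens refinement-based conformance can be seen in a small sketch (written in Python rather than CSP): each tester sees only the projection of the global trace onto its own port, so globally different traces may be locally indistinguishable. The event names are invented, and the sketch assumes each event occurs at a single port, whereas the paper also allows events to be observed by more than one tester.

```python
# Illustration of local (per-port) views of a global trace.

def project(trace, port):
    """Local view of a global trace at one port."""
    return [event for (p, event) in trace if p == port]

def locally_indistinguishable(t1, t2, ports):
    return all(project(t1, p) == project(t2, p) for p in ports)

t1 = [(1, "a"), (2, "x"), (1, "b")]
t2 = [(2, "x"), (1, "a"), (1, "b")]
# True: the interleaving of events at different ports cannot be recovered
# by independent testers without some form of cooperation, e.g.
# coordination messages.
print(locally_indistinguishable(t1, t2, ports={1, 2}))
```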
Using Squeeziness to test component-based systems defined as Finite State Machines
Context: Testing is the main validation technique used to increase the reliability of software systems. The effectiveness of testing can be strongly reduced by Failed Error Propagation. This situation happens when the System Under Test executes a faulty statement, the state of the system is affected by this fault, but the expected output is observed. Squeeziness is an information theoretic measure designed to quantify the likelihood of Failed Error Propagation and previous work has shown that Squeeziness correlates strongly with Failed Error Propagation in white-box scenarios. Despite its usefulness, this measure, in its current formulation, cannot be used in a black-box scenario where we do not have access to the source code of the components. Objective: The main goal of this paper is to adapt Squeeziness to a black-box scenario and evaluate whether it can be used to estimate the likelihood that a component of a software system introduces Failed Error Propagation. Method: First, we defined our black-box scenario. Specifically, we considered the Failed Error Propagation that a component introduces when it receives its input from another component. We were interested in this since such fault masking makes it more difficult to find faults in the previous component when testing. Second, we defined our notion of Squeeziness in this framework. Finally, we carried out experiments in order to evaluate our measure. Results: Our experiments showed a strong correlation between the likelihood of Failed Error Propagation and Squeeziness. Conclusion: We can conclude that our new notion of Squeeziness can be used as a measure that estimates the probability of Failed Error Propagation being introduced by a component. As a result, it has the potential to be used as a measure of testability, allowing testers to assess how easy it is to test either the whole system or a single component. We considered a simple model (Finite State Machines) but the notions and results can be extended/adapted to deal with more complex state-based models, in particular, those containing data
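For orientation, the earlier white-box formulation of Squeeziness measures the entropy lost across a function under a (here uniform) input distribution; the sketch below follows that formulation and is not the black-box, component-level adaptation defined in the paper.

```python
# Squeeziness in its earlier white-box form: H(inputs) - H(f(inputs))
# under a uniform distribution over a finite input domain.
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

def squeeziness(f, domain):
    """Loss of entropy across f.  Large values mean many inputs collapse
    onto few outputs, making it more likely that an erroneous internal
    state is masked before it reaches the observed output."""
    h_in = entropy([1] * len(domain))                     # uniform inputs
    h_out = entropy(list(Counter(f(x) for x in domain).values()))
    return h_in - h_out

# Hypothetical components over a small input domain.
print(squeeziness(lambda x: x // 4, range(16)))  # 16 inputs -> 4 outputs: 2.0 bits
print(squeeziness(lambda x: x, range(16)))       # injective: 0.0 bits
```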
- …