
### Implementation relations for testing through asynchronous channels

This paper concerns testing from an input output transition system (IOTS) model of a system under test that interacts with its environment through asynchronous first in first out (FIFO) channels. It explores methods for analysing an IOTS without modelling the channels. If IOTS M produces sequence $\sigma$ then, since communications are asynchronous, output can be delayed and so a different sequence might be observed. Thus M defines a language Tr(M) of sequences that can be observed when interacting with M through FIFO channels. We define implementation relations and equivalences in terms of Tr(M): an implementation relation says how IOTS N must relate to IOTS M in order for N to be a correct implementation of M. It is important to use an appropriate implementation relation, since otherwise the verdict from a test run might be incorrect, and because it influences test generation. It is undecidable whether IOTS N conforms to IOTS M and so also whether there is a test case that can distinguish between two IOTSs. We also investigate the situation in which we have a finite automaton P and wish to know either whether $Tr(M) \cap L(P)$ is empty or whether $Tr(M) \cap Tr(P)$ is empty, and prove that these questions are undecidable. In addition, we give conditions under which conformance and intersection are decidable. This work was partially supported by EPSRC grant EP/G04354X/1: The Birth, Life and Death of Semantic Mutants.
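To make the role of Tr(M) concrete, the following is a minimal sketch (not the paper's formalism) of how asynchronous FIFO observation enlarges the set of observable sequences: an output produced by the system may be observed later than it was produced, i.e. after subsequent inputs. The representation of traces as tuples of ("in", a) / ("out", x) events and the helper name `delayed_observations` are assumptions for illustration only.

```python
def delayed_observations(trace):
    """All traces observable through asynchronous FIFO channels, assuming
    each output may be delayed past later inputs (relative order of inputs
    and relative order of outputs are both preserved)."""
    ins = [e for e in trace if e[0] == "in"]
    outs = [e for e in trace if e[0] == "out"]
    # need[k] = number of inputs that precede the k-th output originally;
    # a delayed observation may only increase that number.
    need, count = [], 0
    for e in trace:
        if e[0] == "in":
            count += 1
        else:
            need.append(count)
    results = set()

    def go(i, j, acc):
        if i == len(ins) and j == len(outs):
            results.add(tuple(acc))
            return
        if i < len(ins):
            go(i + 1, j, acc + [ins[i]])
        if j < len(outs) and i >= need[j]:
            go(i, j + 1, acc + [outs[j]])

    go(0, 0, [])
    return results
```

For instance, if M produces output x before receiving input a, the tester may nevertheless observe a before x, so both orderings belong to Tr(M); if x is produced after a, only the original ordering is observable.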

### Avoiding coincidental correctness in boundary value analysis

In partition analysis we divide the input domain to form subdomains on which the system's behaviour should be uniform. Boundary value analysis produces test inputs near each subdomain's boundaries to find failures caused by incorrect implementation of the boundaries. However, boundary value analysis can be adversely affected by coincidental correctness: the system produces the expected output, but for the wrong reason. This article shows how boundary value analysis can be adapted in order to reduce the likelihood of coincidental correctness. The main contribution applies to automated test data generation, in which we cannot rely on the expertise of a tester.
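A toy example (not taken from the article) of how a boundary test input can be coincidentally correct: the hypothetical `faulty` implementation below shifts the subdomain boundary from 10 to 11, yet at the boundary point itself the wrong branch still yields the expected output, so the fault is masked unless probes on both sides of the boundary are used.

```python
def spec(x):
    # intended behaviour: boundary at 10
    return 0 if x < 10 else x - 10

def faulty(x):
    # boundary incorrectly implemented at 11
    return 0 if x < 11 else x - 11

def boundary_probes(b, eps=1):
    """Test inputs just below, on, and just above boundary b."""
    return [b - eps, b, b + eps]
```

At x = 10 the faulty version takes the wrong branch but returns the expected 0 (coincidental correctness); the probe x = 11 exposes the shifted boundary.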

Recommended from our members

### Verdict functions in testing with a fault domain or test hypotheses

In state-based testing it is common to include verdicts within test cases, the result of the test case being the verdict reached by the test run. In addition, approaches that reason about test effectiveness, or that produce tests guaranteed to find certain classes of faults, are often based on either a fault domain or a set of test hypotheses. This paper considers how the presence of a fault domain or test hypotheses affects our notion of a test verdict. The analysis reveals the need for new verdicts that provide more information than the current verdicts, and for verdict functions that return a verdict based on a set of test runs rather than a single test run. The concepts are illustrated in the contexts of testing from a non-deterministic finite state machine and the testing of a datatype specified using an algebraic specification language, but are potentially relevant whenever fault domains or test hypotheses are used.
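The idea of a verdict function over a set of test runs, rather than a single run, can be sketched as follows. This is an illustrative simplification, not the paper's definition: traces are strings, `allowed` is the set of responses the specification permits, and a "pass" on equality encodes a test hypothesis that every allowed response has been exhibited.

```python
def verdict(observed, allowed):
    """Overall verdict from a SET of observed traces.

    observed: set of traces seen across all test runs
    allowed:  set of traces the specification permits
    """
    if not observed <= allowed:
        return "fail"            # some run showed disallowed behaviour
    if observed == allowed:
        return "pass"            # under the hypothesis that all allowed
                                 # responses have now been observed
    return "inconclusive"        # no failure, but coverage incomplete
```

Note that a single run of a nondeterministic system can at best yield "fail" or "inconclusive" here; only the accumulated set of runs can justify "pass", which is the shift in perspective the abstract describes.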


### Using a minimal number of resets when testing from a finite state machine


### Applying adaptive test cases to nondeterministic implementations

The testing of a state-based system involves the application of sequences of inputs and the observation of the resultant input/output sequences (traces). These traces can result from preset input sequences or from adaptive test cases, in which the choice of the next input depends on the trace observed up to that point. Adaptive test cases are used in a number of areas, including protocol conformance testing, and adaptivity forms the basis of the standardised test language TTCN.

Suppose that we apply adaptive test case γ to the system under test (SUT) and observe the trace σ. If the SUT is deterministic and we apply γ again, after resetting the SUT, then we will observe σ again. Further, if we have another adaptive test case γ′ for which a prefix σ′ of σ is a possible response to γ′, then we know that the application of γ′ must lead to σ′. Thus, for a deterministic SUT, the response of the SUT to an adaptive test case γ′ might be deduced from the response of the SUT to another adaptive test case. This observation can be used to reduce the cost of testing: we only apply adaptive test case γ′ if we cannot deduce the response to γ′ from the set of observations.

While many systems are deterministic, nondeterminism is becoming increasingly common. Nondeterminism in the SUT is typically a consequence of limits in our ability to observe the SUT. For example, it could be a result of information hiding, of real-time properties, or of different possible interleavings in a concurrent system. This paper investigates the case where the SUT is nondeterministic. We consider the situation in which a set O of traces has been observed in testing and we are considering applying an adaptive test case γ. In general we cannot expect to be able to deduce the response of a nondeterministic SUT to an adaptive test case γ, since there may be more than one possible response. Instead, we consider the question of how we can decide whether the application of γ could lead to a trace that has not been observed. A solution to this would allow us to reduce the cost of testing: if all possible responses of the SUT to γ have already been observed, then we do not have to apply γ in testing, thereby reducing the cost of test execution.

This paper considers three cases. Section 3 considers the case where we can apply a fairness assumption. Section 4 weakens this assumption to our having a lower bound p on the probability of observing alternative responses of the SUT to any input in any state. Section 5 then considers the general case.

### Testing a distributed system: Generating minimal synchronised test sequences that detect output-shifting faults

A distributed system may have a number of separate interfaces called ports, and in testing it may be necessary to have a separate tester at each port. This introduces a number of issues, including the need to use synchronised test sequences and the possibility that output-shifting faults go undetected. This paper considers the problem of generating a minimal synchronised test sequence that detects output-shifting faults when the system is specified using a finite state machine with multiple ports. The set of synchronised test sequences that detect output-shifting faults is represented by a directed graph G, and test generation involves finding appropriate tours of G. This approach is illustrated using the test criterion that the test sequence contains a test segment for each transition.
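The synchronisation (controllability) condition underlying such sequences can be sketched as follows. This is a standard check, simplified for illustration: each transition is modelled only by the port that supplies its input and the set of ports that receive output, and the tester at a port may send the next input only if it took part in the previous transition.

```python
def is_synchronised(seq):
    """seq: list of transitions as (input_port, output_ports) pairs.

    Returns True if each tester that must send the next input either sent
    the previous input or received output in the previous transition, so
    no tester needs an external cue to know when to act."""
    for prev, cur in zip(seq, seq[1:]):
        in_port, prev_in, prev_outs = cur[0], prev[0], prev[1]
        if in_port != prev_in and in_port not in prev_outs:
            return False         # controllability (synchronisation) problem
    return True
```

For example, if transition 1 takes input at port 1 and sends output to port 2, port 2 may supply the next input; if transition 1's output goes only to port 1, a next input at port 2 would create a synchronisation problem.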


### Testing in the distributed test architecture: An extended abstract

Some systems interact with their environment at a number of physically distributed interfaces (ports), and when testing such a system it is normal to place a local tester at each port. If the local testers cannot interact with one another and there is no global clock, then we are testing in the distributed test architecture, and this can introduce additional controllability and observability problems. While there has been interest in test generation algorithms that overcome controllability and observability problems, such algorithms lack generality since these problems cannot always be overcome. In addition, traditionally only deterministic systems and models have been considered, despite distributed systems often being non-deterministic. This paper describes recent work that characterised the power of testing in the distributed test architecture in the context of testing from a deterministic finite state machine, and also work that investigated testing from a non-deterministic finite state machine and from an input output transition system. This work has the potential to lead to more general test generation algorithms for the distributed test architecture.

### Adaptive testing of a deterministic implementation against a nondeterministic finite state machine

A number of authors have looked at the problem of deriving a checking experiment from a nondeterministic finite state machine that models the required behaviour of a system. We show that these methods can be extended if it is known that the implementation is equivalent to some (unknown) deterministic finite state machine. When testing a deterministic implementation, the test output provides information about the implementation under test and can thus guide future testing. The use of an adaptive test process is thus proposed.

- …