
    Adopting genetic algorithm to enhance state-sensitivity partitioning

    Software testing requires executing the software under test with the intention of finding as many defects as possible. Test case generation remains the most dominant research area in software testing, and the technique used to generate test cases can determine how effective and efficient the testing process is. Many techniques have been proposed to generate test cases; one of them is the State Sensitivity Partitioning (SSP) technique. The objective of SSP is to avoid exhaustive testing of the entire data states of a module. In SSP, test cases are represented as sequences of events. Even recognizing the finite limits on the size of the queue, there is an infinite set of these sequences, with no upper bound on the length of any one sequence. A lengthy test sequence might therefore contain redundant data states, which increase the size of the test suite and make the testing process ineffective. There is thus a need to optimize the test cases generated by SSP to enhance its effectiveness in detecting faults. Among several optimization techniques, the genetic algorithm (GA) has been identified as the most common potential technique; thus, GA is investigated for integration with the existing SSP. This paper addresses the issue of how to represent the states produced by SSP sequences of events in a form that a GA can accept. System IDs are used to uniquely represent combinations of state variables and to generate the GA's initial population.
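    As a hedged illustration of this kind of integration, the sketch below encodes test cases as sequences of event IDs and lets a genetic algorithm favour sequences that cover many distinct data states without redundancy. The event alphabet, fitness function, and GA parameters are assumptions for illustration, not the paper's exact representation.

        # Minimal GA sketch over SSP-style event sequences (hypothetical encoding).
        import random

        EVENT_IDS = list(range(8))          # assumed alphabet of event/state IDs
        MAX_LEN, POP_SIZE, GENERATIONS = 12, 30, 50

        def random_sequence():
            return [random.choice(EVENT_IDS) for _ in range(random.randint(1, MAX_LEN))]

        def fitness(seq):
            # Reward distinct data states reached; penalise redundant (repeated) states.
            return len(set(seq)) - 0.1 * (len(seq) - len(set(seq)))

        def crossover(a, b):
            cut = random.randint(1, min(len(a), len(b)))
            return a[:cut] + b[cut:]

        def mutate(seq, rate=0.1):
            return [random.choice(EVENT_IDS) if random.random() < rate else e for e in seq]

        population = [random_sequence() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[:POP_SIZE // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print("best sequence:", best, "fitness:", fitness(best))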

    Design diversity: an update from research on reliability modelling

    Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can help conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.
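    One recurring result in this line of modelling is that variation in the "difficulty" of demands makes independently developed versions more likely to fail together than an independence assumption suggests. The toy numbers below illustrate that effect; they are assumptions for illustration only, not results from the projects discussed.

        # Toy difficulty-function illustration of common-mode failure (assumed numbers).
        difficulty = [0.001, 0.001, 0.01, 0.2]    # assumed per-demand failure probability
        weights    = [0.4,   0.4,   0.15, 0.05]   # assumed demand profile (sums to 1)

        pfd_single     = sum(w * d for w, d in zip(weights, difficulty))
        pfd_pair_naive = pfd_single ** 2                                      # independence claim
        pfd_pair_model = sum(w * d * d for w, d in zip(weights, difficulty))  # expected joint failure

        print(f"single version pfd:          {pfd_single:.6f}")
        print(f"pair, assuming independence: {pfd_pair_naive:.6f}")
        print(f"pair, difficulty model:      {pfd_pair_model:.6f}")   # never below the naive value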

    Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results

    Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply in terms of assumptions on the probability distribution of the failure intensity of the program. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one for software whose reliability requirement is that it must be completely fault-free, and another where the requirement is stated as an upper bound on the acceptable failure probability.
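    A hedged worked example of this kind of acceptance reasoning: given a prior belief that the program is fault-free and a testability value, Bayes' rule gives the posterior confidence after a run of failure-free tests. The numbers and variable names are illustrative assumptions, not the paper's exact formulation.

        # Bayesian acceptance sketch (illustrative numbers, not the paper's model).
        p_fault_free = 0.5    # assumed prior probability the program is fault-free
        testability  = 0.01   # assumed P(a test fails | the program contains a fault)
        n_tests      = 500    # failure-free tests observed

        likelihood_if_faulty = (1 - testability) ** n_tests
        posterior = p_fault_free / (p_fault_free + (1 - p_fault_free) * likelihood_if_faulty)
        print(f"P(fault-free | {n_tests} failure-free tests) = {posterior:.4f}")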

    Will My Tests Tell Me If I Break This Code?

    Automated tests play an important role in software evolution because they can rapidly detect faults introduced during changes. In practice, code-coverage metrics are often used as criteria to evaluate the effectiveness of test suites with a focus on regression faults. However, code coverage only expresses which portion of a system has been executed by tests, not how effective the tests actually are in detecting regression faults. Our goal was to evaluate the validity of code coverage as a measure for test effectiveness. To do so, we conducted an empirical study in which we applied an extreme mutation testing approach to analyze the tests of open-source projects written in Java. We assessed the ratio of pseudo-tested methods (those tested in a way such that faults would not be detected) to all covered methods and judged their impact on the software project. The results show that the ratio of pseudo-tested methods is acceptable for unit tests but not for system tests (which execute large portions of the whole system). Therefore, we conclude that the coverage metric is only a valid effectiveness indicator for unit tests.
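    The sketch below mimics the extreme mutation idea on a toy example: a method body is replaced wholesale by a constant and the covering test is re-run; if it still passes, the method is pseudo-tested. The study itself analysed Java projects with dedicated tooling; this Python fragment only illustrates the principle.

        # Extreme-mutation illustration: replace a whole method body and re-run the test.
        import unittest

        class Cart:
            def __init__(self):
                self.items = []
            def total(self):
                return sum(self.items)

        class CartTest(unittest.TestCase):
            def test_total_is_a_number(self):
                c = Cart()
                c.items = [2, 3]
                self.assertIsInstance(c.total(), int)   # weak assertion: covers, but checks little

        def run_suite():
            suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
            return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

        passed_before = run_suite()
        original = Cart.total
        Cart.total = lambda self: 0                     # extreme mutant: body replaced by a constant
        passed_after = run_suite()
        Cart.total = original

        print("pseudo-tested" if passed_before and passed_after else "mutant detected")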

    Optimal discrete stopping times for reliability growth tests

    Often, the duration of a reliability growth development test is specified in advance and the decision to terminate or continue testing is made at discrete time intervals. These features are normally not captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal time at which to plan to terminate testing. The underlying stochastic process is developed from an Order Statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, where it is shown that the Maximum Likelihood Estimators possess a small bias and converge to the Minimum Variance Unbiased Estimator after a few tests for designs with a moderate number of faults. It is shown that the likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.
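    As a hedged sketch of the inference step, suppose each fault is detected in any given test with a fixed probability; the number of faults found after t tests is then binomial in the unknown initial fault count N, and a maximum likelihood estimate can be read off a grid. This is a simplification for illustration, not the paper's Order Statistic model.

        # Grid-search MLE for the initial fault count N (simplified model, assumed numbers).
        from math import comb

        p, t, k = 0.1, 20, 6           # assumed per-test detection rate, tests run, faults found
        q = 1 - (1 - p) ** t           # P(a given fault is detected within t tests)

        def likelihood(N):
            return comb(N, k) * q**k * (1 - q)**(N - k) if N >= k else 0.0

        mle_N = max(range(k, 31), key=likelihood)
        print("MLE of initial fault count:", mle_N)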

    Squeeziness: An information theoretic measure for avoiding fault masking

    Fault masking can reduce the effectiveness of a test suite. We propose an information theoretic measure, Squeeziness, as the theoretical basis for avoiding fault masking. We begin by explaining fault masking and the relationship between collisions and fault masking. We then define Squeeziness and demonstrate by experiment that there is a strong correlation between Squeeziness and the likelihood of collisions. We conclude with comments on how Squeeziness could form the foundation for generating test suites that minimise the likelihood of fault masking.
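    In this spirit, the fragment below computes an entropy loss for a function: the entropy of a uniform distribution over its inputs minus the entropy of the distribution induced on its outputs. Collisions squeeze this figure down, which is where masking can hide. It is an illustrative computation in the spirit of the measure, not the paper's formal definition.

        # Entropy-loss illustration in the spirit of Squeeziness (uniform input profile assumed).
        from collections import Counter
        from math import log2

        def entropy(probs):
            return -sum(p * log2(p) for p in probs if p > 0)

        def entropy_loss(f, inputs):
            n = len(inputs)
            h_in = entropy([1 / n] * n)                        # uniform over the inputs
            out_counts = Counter(f(x) for x in inputs)
            h_out = entropy([c / n for c in out_counts.values()])
            return h_in - h_out                                # information lost through f

        inputs = list(range(16))
        print("identity:", entropy_loss(lambda x: x, inputs))      # 0.0 bits: no collisions
        print("mod 4:   ", entropy_loss(lambda x: x % 4, inputs))  # 2.0 bits lost
        print("constant:", entropy_loss(lambda x: 0, inputs))      # 4.0 bits lost (everything masked)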