
    Adaptive random testing by exclusion through test profile

    One major objective of software testing is to reveal software failures so that program bugs can be removed. Random testing is a basic and simple software testing technique, but its failure-detection effectiveness is often controversial. Based on the common observation that program inputs causing software failures tend to cluster into contiguous regions, some researchers have proposed that an even spread of test cases should enhance the failure-detection effectiveness of random testing. Adaptive random testing refers to a family of algorithms that evenly spread random test cases according to various notions. Restricted random testing, an algorithm that implements adaptive random testing by the notion of exclusion, defines an exclusion region around each previously executed test case and selects test cases only from outside all exclusion regions. Although it achieves high failure-detection effectiveness, restricted random testing incurs a very high computation overhead, and it rigidly discards all test cases inside any exclusion region, some of which may reveal software failures. In this paper, we propose a new method to implement adaptive random testing by exclusion, where test cases are simply selected according to a well-designed test profile. The new method has a low computation overhead and does not omit any program input that could detect a failure. Our experimental results show that the new method not only spreads test cases more evenly but also achieves higher failure-detection effectiveness than random testing.
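
    The exclusion mechanism described above is straightforward to sketch in code. The following is a minimal illustration of restricted random testing for a two-dimensional numeric input domain, where each exclusion circle is sized so that the circles jointly cover a target fraction of the domain; the function name, the unit-square domain, and the target_ratio parameter are illustrative assumptions, not details taken from the paper.

        import math
        import random

        def rrt_next_test(executed, domain=(0.0, 1.0), target_ratio=0.8, max_tries=10000):
            """Pick the next test case by exclusion: reject any random candidate
            falling inside the exclusion circle around a previous test case."""
            lo, hi = domain
            if not executed:
                return (random.uniform(lo, hi), random.uniform(lo, hi))
            # Size each circle so that, ignoring overlaps, the circles jointly
            # cover target_ratio of the input domain's area.
            area = (hi - lo) ** 2
            radius = math.sqrt(target_ratio * area / (math.pi * len(executed)))
            for _ in range(max_tries):
                cand = (random.uniform(lo, hi), random.uniform(lo, hi))
                if all(math.dist(cand, t) >= radius for t in executed):
                    return cand
            raise RuntimeError("no candidate found outside all exclusion regions")

    The repeated candidate rejection in the loop is precisely the source of the computation overhead, and the hard discarding of in-region candidates is the rigidity, that the paper's profile-based method sets out to avoid.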

    The Tiling Algorithm for the 6dF Galaxy Survey

    The Six Degree Field Galaxy Survey (6dFGS) is a spectroscopic survey of the southern sky, which aims to provide positions and velocities of galaxies in the nearby Universe. We present here the adaptive tiling algorithm developed to place 6dFGS fields on the sky and allocate targets to those fields. Optimal solutions to survey field placement are generally extremely difficult to find, especially in this era of large-scale galaxy surveys, as the space of available solutions is vast (2^N-dimensional) and false optimal solutions abound. The 6dFGS algorithm utilises the Metropolis (simulated annealing) method to overcome this problem. By design, the algorithm gives uniform completeness independent of local density, resulting in a highly complete and uniform observed sample. The adaptive tiling achieves a sampling rate of approximately 95%, a variation in the sampling uniformity of less than 5%, and an efficiency in terms of used fibres per field of greater than 90%. We have tested whether the tiling algorithm systematically biases the large-scale structure in the survey by studying the two-point correlation function of mock 6dF volumes. Our analysis shows that the constraints on fibre proximity with 6dF lead to underestimating galaxy clustering on small scales (< 1 Mpc) by up to ~20%, but that the tiling introduces no significant sampling bias at larger scales.
    Comment: 11 pages, 7 figures. Full resolution version of the paper available from http://www.mso.anu.edu.au/6dFGS/ . Abridged version of abstract below
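
    As a rough illustration of the Metropolis approach the abstract mentions, the generic simulated-annealing loop below accepts uphill moves with probability exp(-delta / t) so the search can escape false optimal solutions. The cost and proposal functions, parameter names, and the geometric cooling schedule are assumptions for illustration, not the actual 6dFGS implementation; for tiling, the cost might for example count unallocated targets and a proposal might nudge one field centre.

        import math
        import random

        def metropolis_anneal(state, cost, propose, t0=1.0, cooling=0.95,
                              steps_per_t=200, t_min=1e-3):
            """Generic Metropolis / simulated-annealing minimisation loop."""
            current, current_cost = state, cost(state)
            t = t0
            while t > t_min:
                for _ in range(steps_per_t):
                    cand = propose(current)
                    delta = cost(cand) - current_cost
                    # Always accept improvements; accept uphill moves with
                    # probability exp(-delta / t) to escape false optima.
                    if delta <= 0 or random.random() < math.exp(-delta / t):
                        current, current_cost = cand, current_cost + delta
                t *= cooling
            return current, current_cost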

    Application of a failure driven test profile in random testing

    Random testing techniques have been extensively used in reliability assessment, as well as in debug testing. When used to assess software reliability, random testing selects test cases based on an operational profile, while in the context of debug testing it often uses a uniform distribution. However, neither an operational profile nor a uniform distribution is generally chosen with a view to maximizing the effectiveness of failure detection. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing by evenly spreading test cases over the whole input domain. In this paper, we propose a new test profile, different from both the uniform distribution and operational profiles, whose aim is to maximize the effectiveness of failure detection. We integrate this new test profile with some existing adaptive random testing algorithms and develop a family of new random testing algorithms. These new algorithms not only distribute test cases more evenly, but also have better failure-detection capabilities than the corresponding original adaptive random testing algorithms. As a consequence, they perform better than pure random testing.
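
    A hedged sketch of the profile idea: rather than hard exclusion, a random candidate is accepted with a probability that grows with its distance from the nearest previously executed test case, so inputs near past (non-failing) tests become unlikely, but never impossible, to select. The acceptance rule, the unit-square domain, and the function name below are illustrative assumptions, not the paper's actual failure-driven profile.

        import math
        import random

        def profile_next_test(executed, domain=(0.0, 1.0), max_tries=10000):
            """Soft, distance-weighted selection instead of hard exclusion."""
            lo, hi = domain
            if not executed:
                return (random.uniform(lo, hi), random.uniform(lo, hi))
            diag = math.dist((lo, lo), (hi, hi))  # largest possible separation
            for _ in range(max_tries):
                cand = (random.uniform(lo, hi), random.uniform(lo, hi))
                nearest = min(math.dist(cand, t) for t in executed)
                # Acceptance probability rises with distance from past tests,
                # so no region of the input domain is ever fully excluded.
                if random.random() < nearest / diag:
                    return cand
            return cand  # fall back to the last candidate drawn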

    A survey on adaptive random testing

    Random testing (RT) is a well-studied testing method that has been widely applied to the testing of many applications, including embedded software systems, SQL database systems, and Android applications. Adaptive random testing (ART) aims to enhance RT's failure-detection ability by more evenly spreading the test cases over the input domain. Since its introduction in 2001, there have been many contributions to the development of ART, including various approaches, implementations, assessment and evaluation methods, and applications. This paper provides a comprehensive survey on ART, classifying techniques, summarizing application areas, and analyzing experimental evaluations. It also addresses some misconceptions about ART and identifies open research challenges to be investigated in future work.
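
    For readers new to ART, one classic approach covered by such surveys is fixed-size-candidate-set ART: draw k random candidates and execute the one farthest from all previously executed test cases. The sketch below assumes a two-dimensional numeric input domain; the function name and the candidate-set size k=10 are illustrative choices.

        import math
        import random

        def fscs_art_next(executed, k=10, domain=(0.0, 1.0)):
            """Fixed-size-candidate-set ART: pick, among k random candidates,
            the one whose nearest executed test case is farthest away."""
            lo, hi = domain
            candidates = [(random.uniform(lo, hi), random.uniform(lo, hi))
                          for _ in range(k)]
            if not executed:
                return candidates[0]
            return max(candidates,
                       key=lambda c: min(math.dist(c, t) for t in executed))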

    Development and Validation of a Rule-based Time Series Complexity Scoring Technique to Support Design of Adaptive Forecasting DSS

    Evidence from forecasting research gives reason to believe that understanding time series complexity can enable the design of adaptive forecasting decision support systems (FDSSs) that positively support forecasting behaviors and the accuracy of outcomes. Yet such FDSS design capabilities have not been formally explored, because there has been no systematic approach to identifying series complexity. This study describes the development and validation of a rule-based complexity scoring technique (CST) that generates a complexity score for a time series using 12 rules that rely on 14 features of the series. The rule-based schema was developed on 74 series and validated on 52 holdback series, using well-accepted forecasting methods as benchmarks. A supporting experimental validation was conducted with 14 participants who generated 336 structured judgmental forecasts for sets of series classified as simple or complex by the CST. Benchmark comparisons validated the CST by confirming, as hypothesized, that forecasting accuracy was lower for series scored by the technique as complex than for those scored as simple. The study concludes with a comprehensive framework for the design of FDSS that can integrate the CST to adaptively support forecasters under varied conditions of series complexity. The framework is founded on the concepts of restrictiveness and guidance, and offers specific recommendations on how these elements can be built into FDSS to support forecasters under complexity.
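
    Purely to make the flavour of a rule-based scoring technique concrete, the sketch below computes a few simple series features and lets each fired rule add to a complexity score. The real CST uses 12 rules over 14 features; the features, rules, and thresholds here are placeholders, not those of the study.

        import statistics

        def complexity_score(series):
            """Toy rule-based scoring: each fired rule adds one to the score."""
            n = len(series)
            mean = statistics.fmean(series)
            stdev = statistics.pstdev(series)
            cv = stdev / abs(mean) if mean else float("inf")  # relative noise
            diffs = [b - a for a, b in zip(series, series[1:])]
            flips = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
            rules = [
                cv > 0.25,                                # high relative variability
                flips / max(n - 2, 1) > 0.5,              # frequent direction changes
                abs(series[-1] - series[0]) > 2 * stdev,  # strong net level shift
            ]
            return sum(rules)  # higher score = more complex series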