
    Automatically Finding the Control Variables for Complex System Behavior

    Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
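    To make the idea concrete, the following is a minimal Python sketch of the "smallest change, largest effect" scoring at the heart of treatment learning: each candidate attribute=value pair is ranked by how strongly filtering on it shifts the class distribution toward a preferred class ("lift"). The data and single-attribute treatments here are illustrative assumptions; the actual TAR3/TAR4.1 learners handle conjunctions of attribute ranges and are considerably more elaborate.

        # Toy treatment scoring -- a sketch of the idea, not the real TAR3/TAR4.1 code.
        from collections import Counter

        def lift(rows, classes, attr, value, preferred):
            """Preferred-class frequency after the treatment vs. the baseline."""
            baseline = Counter(classes)[preferred] / len(classes)
            treated = [c for r, c in zip(rows, classes) if r[attr] == value]
            if not treated:
                return 0.0
            return (Counter(treated)[preferred] / len(treated)) / baseline

        def best_treatment(rows, classes, preferred):
            """Return the single attribute=value pair with the highest lift."""
            candidates = {(a, r[a]) for r in rows for a in r}
            return max(candidates, key=lambda av: lift(rows, classes, *av, preferred))

        # Rows are attribute->value dicts; classes label each simulation run.
        rows = [{"mode": "safe", "load": "high"}, {"mode": "fast", "load": "high"},
                {"mode": "fast", "load": "low"}, {"mode": "safe", "load": "low"}]
        classes = ["pass", "fail", "fail", "pass"]
        print(best_treatment(rows, classes, preferred="fail"))  # -> ('mode', 'fast')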

    The robust optimization of non-linear requirements models

    Solutions to non-linear requirements engineering problems may be brittle; i.e., small changes may dramatically alter solution effectiveness. Hence, it is not enough to just generate solutions to requirements problems; we must also assess solution robustness. This thesis aims to address two concerns: (a) Is demonstrating robustness a time-consuming task? and (b) Is it necessary that solution quality be traded off against solution robustness? Using a Bayesian ranking heuristic, the KEYS2 algorithm fixes a small number of important variables, rapidly pushing the search into a stable, optimal plateau. By design, KEYS2 generates decision ordering diagrams (in time experimentally shown to be O(N^2)). Once generated, these diagrams can confirm solution robustness in linear time. When assessed in terms of reducing inference times, increasing solution quality, and decreasing the variance of the generated solutions, KEYS2 outperforms other search algorithms (simulated annealing, A*, MaxWalkSat).
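    The variable-fixing loop the abstract describes can be sketched as follows. The assumptions here are boolean decision variables, a black-box score function, and a naive frequency-over-best-samples ranking standing in for the thesis's Bayesian heuristic; this is a sketch of the search pattern, not the real KEYS2 code. The order in which variables are fixed acts as a simple decision ordering.

        # Sketch of a KEYS-style search: fix the most influential setting each round.
        import random

        def keys_style_search(n_vars, score, samples=100, top_frac=0.1):
            """Fix one boolean variable per round; return the fixing order."""
            fixed, ordering = {}, []
            while len(fixed) < n_vars:
                pool = []
                for _ in range(samples):
                    cand = {v: fixed.get(v, random.choice([0, 1]))
                            for v in range(n_vars)}
                    pool.append((score(cand), cand))
                pool.sort(key=lambda sc: -sc[0])
                best = pool[:max(1, int(top_frac * samples))]
                # Rank unfixed settings by frequency among the best samples.
                counts = {}
                for _, cand in best:
                    for v, val in cand.items():
                        if v not in fixed:
                            counts[(v, val)] = counts.get((v, val), 0) + 1
                var, val = max(counts, key=counts.get)
                fixed[var] = val
                ordering.append((var, val))
            return ordering

        # Toy usage: the score rewards setting every variable to 1.
        print(keys_style_search(5, score=lambda c: sum(c.values())))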

    Clustering Methods for Requirements Selection and Optimisation

    Decisions about which features to include in a new system or the next release of an existing one are critical to the success of software products. Such decisions should be informed by the needs of the users and stakeholders. But how can we make such decisions when the number of potential features and the number of individual stakeholders are very large? This problem is particularly important when stakeholders’ needs are gathered online through the use of discussion forums and web-based feature request management systems. Existing requirements decision-making techniques are not adequate in this context because they do not scale well to such large numbers of feature requests or stakeholders. This thesis addresses the problem by presenting and evaluating clustering methods that facilitate requirements selection and optimisation when requirements preferences are elicited from a very large number of stakeholders. Firstly, it presents a novel method for identifying groups of stakeholders with similar preferences for requirements. It computes the representative preferences for the resulting groups and provides additional insights into trends and divergences in stakeholders’ preferences, which may be used to aid the decision-making process. Secondly, it presents a method to help decision-makers identify key similarities and differences among large sets of optimal design decisions. The benefits of these techniques are demonstrated on two real-life projects: one concerned with selecting features for mobile phones and the other with selecting requirements for a rights and access management system.
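    As a minimal sketch of the core step of the first method (with an invented ratings matrix and an assumed choice of k=2; the thesis's method is more involved than plain k-means): each stakeholder is represented as a vector of preference ratings over candidate requirements, similar stakeholders are grouped, and each group's centroid serves as its representative preference profile.

        # Sketch: cluster stakeholders by preference vectors (not the thesis's data).
        import numpy as np
        from sklearn.cluster import KMeans

        # Rows: stakeholders; columns: ratings (0-5) for four candidate requirements.
        ratings = np.array([
            [5, 4, 0, 1],
            [4, 5, 1, 0],
            [1, 0, 5, 4],
            [0, 1, 4, 5],
        ])

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ratings)
        for group in range(2):
            members = np.where(km.labels_ == group)[0]
            print(f"group {group}: stakeholders {members.tolist()}, "
                  f"representative preferences {km.cluster_centers_[group].round(1)}")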

    Empirical Studies on Automated Software Testing Practices

    Software testing is notoriously difficult and expensive, and improper testing carries economic, legal, and even environmental or medical risks. Research in software testing is critical to enabling the development of the robust software that our society relies upon. This dissertation aims to lower the cost of software testing without decreasing quality by focusing on the use of automation. The dissertation consists of three empirical studies on aspects of software testing. Specifically, these three projects focus on (1) mapping the connections between research topics and the evolution of research topics in the field of software testing, (2) an assessment of the metrics used to guide automated test generation and the factors that suggest when automated test generation can detect real faults, and (3) examination of the semantic coupling between synthetic and real faults in service of improving our ability to cost-effectively generate synthetic faults for use in assessing test case quality.
    • Project 1 (Mapping): Our main goal for this project is to better understand the emergence of individual research topics and the connections between these topics within the broad field of software testing, enabling the identification of new topics and connections in future research. To achieve this goal, we applied co-word analysis to characterize the topology of software testing research over three decades of research studies, based on the keywords provided by the authors of studies indexed in the Scopus database.
    • Project 2 (Automated Input Generation): We assessed the fault-detection capabilities of unit test suites generated by automated tools with the goal of satisfying eight fitness functions representing common testing goals. Our purpose was not only to identify the particular fitness functions that detect the most faults but also to explore the factors that influence fault detection. To do this, we gathered observations on the generated test suites and metrics describing the source code of the faulty classes, and applied a rule-learning algorithm to identify the factors with the strongest influence on fault detection.
    • Project 3 (Mutant-Fault Coupling): Synthetic faults (mutants), which can be inserted into code through transformative mutation operators, offer an automated means to assess the effectiveness of test suites and create new test cases. However, mutants can be expensive to utilize and may not realistically model real faults. To enable the cost-effective generation of mutants, we investigate the semantic relationship between mutation operators and real faults.
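    For illustration, here is a small Python sketch of one transformative mutation operator of the kind Project 3 studies: arithmetic operator replacement, which swaps '+' for '-' in a function's syntax tree to plant a synthetic fault. The target function is invented for this sketch; real mutation tools apply many such operators and manage operator selection and cost.

        # Sketch of arithmetic operator replacement (requires Python 3.9+ for ast.unparse).
        import ast

        class SwapAddSub(ast.NodeTransformer):
            """Mutation operator: replace every binary '+' with '-'."""
            def visit_BinOp(self, node):
                self.generic_visit(node)
                if isinstance(node.op, ast.Add):
                    node.op = ast.Sub()
                return node

        src = "def total(a, b, c):\n    return a + b + c\n"
        mutant = SwapAddSub().visit(ast.parse(src))
        ast.fix_missing_locations(mutant)
        print(ast.unparse(mutant))  # mutated source: returns a - b - c

        # A test suite "kills" this mutant if at least one test fails on it:
        scope = {}
        exec(compile(mutant, "<mutant>", "exec"), scope)
        assert scope["total"](1, 2, 3) == -4  # a test asserting == 6 would kill it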

    Finding Robust Solutions in Requirements Models

    Associated research group: Critical Systems Research Group
    Solutions to non-linear requirements engineering problems may be "brittle"; i.e., small changes may dramatically alter solution effectiveness. Hence, it is not enough to just generate solutions to requirements problems; we must also assess solution robustness. The KEYS2 algorithm can generate decision ordering diagrams. Once generated, these diagrams can assess solution robustness in linear time. In experiments with real-world requirements engineering models, we show that KEYS2 can generate decision ordering diagrams in O(N^2) time. When assessed in terms of (a) reducing inference times, (b) increasing solution quality, and (c) decreasing the variance of the generated solutions, KEYS2 outperforms other search algorithms (simulated annealing, A*, MaxWalkSat).