
    Designing Robust Software Systems through Parametric Markov Chain Synthesis

    We present a method for the synthesis of software system designs that satisfy strict quality requirements, are Pareto-optimal with respect to a set of quality optimisation criteria, and are robust to variations in the system parameters. To this end, we model the design space of the system under development as a parametric continuous-time Markov chain (pCTMC) with discrete and continuous parameters that correspond to alternative system architectures and to the ranges of possible values for configuration parameters, respectively. Given this pCTMC and required tolerance levels for the configuration parameters, our method produces a sensitivity-aware Pareto-optimal set of designs, which allows the modeller to inspect the ranges of quality attributes induced by these tolerances, thus enabling the effective selection of robust designs. Through application to two systems from different domains, we demonstrate the ability of our method to synthesise robust designs with a wide spectrum of useful tradeoffs between quality attributes and sensitivity.
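The selection step described above can be illustrated with a small sketch. This is not the paper's algorithm or tooling; it only shows the generic idea of a Pareto-optimal set where sensitivity (the width of the quality range induced by parameter tolerances) is treated as one more objective to minimise. All names and numbers are invented for the example.

```python
# Hypothetical sketch: picking a sensitivity-aware Pareto-optimal set of designs.
# Each candidate carries a tuple of objectives to minimise, e.g.
# (expected response time, cost, sensitivity). Names/values are illustrative.

def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly
    better on at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(e["objectives"], d["objectives"])
                       for e in designs if e is not d)]

candidates = [
    {"name": "archA", "objectives": (0.12, 40, 0.03)},  # (resp. time, cost, sensitivity)
    {"name": "archB", "objectives": (0.10, 55, 0.01)},
    {"name": "archC", "objectives": (0.15, 45, 0.05)},  # dominated by archA
]
front = pareto_front(candidates)  # archA and archB survive
```

Treating sensitivity as an explicit objective is what lets a modeller trade a slightly worse quality attribute for a design whose behaviour varies less under parameter tolerances.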

    Search-based crash reproduction using behavioural model seeding

    Search-based crash reproduction approaches assist developers during debugging by generating a test case that reproduces a crash from its stack trace. One of the fundamental steps of this approach is creating the objects needed to trigger the crash, which is difficult when doing so requires realistic sequences of method calls. One way to overcome this limitation is seeding: using information about the application during the search process. With seeding, existing usages of classes can be exploited during the search to produce realistic sequences of method calls that create the required objects. In this study, we introduce behavioral model seeding: a new seeding method which learns class usages from both the system under test and existing test cases. The learned usages are synthesized into a behavioral model (a state machine), which then guides the evolutionary process. To assess behavioral model seeding, we evaluate it against test seeding (the state-of-the-art technique for seeding realistic objects) and no-seeding (seeding no class usages). For this evaluation, we use a benchmark of 124 hard-to-reproduce crashes stemming from six open-source projects. Our results indicate that behavioral model seeding outperforms both test seeding and no-seeding by a minimum of 6%, without any notable negative impact on efficiency.
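The core idea of learning a usage model and walking it can be sketched in a few lines. This is an assumption-laden toy, not the paper's implementation (which targets Java and an evolutionary test generator): it learns a simple state machine of observed method-call transitions and samples call sequences from it.

```python
# Illustrative sketch (all names are invented): learn a state machine of
# class usages from observed call traces, then walk it to propose
# realistic method-call sequences for the search process to seed.
import random

def learn_usage_model(traces):
    """Build a transition map: current call -> list of observed next calls."""
    model = {}
    for trace in traces:
        prev = "<init>"  # synthetic start state before any call
        for call in trace:
            model.setdefault(prev, []).append(call)
            prev = call
    return model

def sample_sequence(model, length, rng):
    """Random walk over the model, yielding a plausible call sequence."""
    seq, state = [], "<init>"
    for _ in range(length):
        nexts = model.get(state)
        if not nexts:
            break
        state = rng.choice(nexts)
        seq.append(state)
    return seq

traces = [["open", "write", "close"], ["open", "read", "close"]]
model = learn_usage_model(traces)
seq = sample_sequence(model, 3, random.Random(0))  # e.g. starts with "open"
```

Sequences sampled this way respect orderings actually observed in the system under test and its tests, which is what makes the seeded objects "realistic" compared with random call sequences.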

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers over the last several decades. Reviewing and getting an overview of the entire state of the art and state of the practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to help readers (both practitioners and researchers) prepare for, measure, and improve software testability. Method: To address this need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and for improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results can help practitioners measure and improve software testability in their projects.
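Two of the improvement techniques the survey names, improving observability and adding assertions, can be illustrated with a minimal sketch. The class and its methods are invented for the example; the survey itself prescribes no particular code.

```python
# Hypothetical example of improving testability: the second class exposes
# internal state (observability) and asserts an invariant (early failure
# visibility), two of the improvement techniques the survey catalogues.

class RateLimiterOpaque:
    """Hard to test: internal state is hidden from test code."""
    def __init__(self, limit):
        self._limit = limit
        self._count = 0

    def allow(self):
        self._count += 1
        return self._count <= self._limit

class RateLimiterTestable(RateLimiterOpaque):
    """More testable variant of the same behaviour."""
    @property
    def count(self):
        # Observability: tests can inspect how many calls were counted.
        return self._count

    def allow(self):
        ok = super().allow()
        # Added assertion: an invariant violation surfaces immediately.
        assert self._count > 0
        return ok

rl = RateLimiterTestable(limit=2)
results = [rl.allow() for _ in range(3)]  # third call exceeds the limit
```

A test can now check both the return values and the observed internal count, instead of inferring state indirectly through behaviour alone.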

    Comparing GUI Functional System Testing with Functional System Logic Testing - An Experiment

    A practitioner interested in reducing software verification effort may find herself lost in the many alternative definitions of Graphical User Interface (GUI) testing and their relation to the notion of system testing. One consequence of these many definitions is that one may end up testing the same parts of the Software Under Test (SUT) twice, specifically the application logic code. To clarify these two important testing activities and help avoid duplicate testing effort, this paper experimentally studies possible differences between GUI testing and system testing. Specifically, we selected a SUT equipped with system tests that directly exercise the application code, and used GUITAR, a well-known GUI testing tool, to GUI-test this SUT. Experimental results show important differences between system testing and GUI testing in terms of structural coverage and test cost.

    Reverse engineering of GUI models

    Integrated master's thesis in Engenharia Informática e Computação. Faculdade de Engenharia, Universidade do Porto. 200

    A fault-location technique for Java implementations of algebraic specifications

    Reviewed by Antónia Lopes.
    Executing comprehensive test suites allows programmers to strengthen their confidence in their software systems. However, given some failed test cases, finding the faults' locations is one of the most expensive and time-consuming tasks, so any technique that makes it easier for the programmer to locate the faulty components is highly desirable. In this paper we focus on finding faults in object-oriented (more precisely, Java) implementations of data types that are described by algebraic specifications. We capitalize on the ConGu and GenT approaches, namely on the models for the specification under study and the corresponding generated JUnit test suites that cover all axioms of the specification, and present a collection of techniques, with an underlying methodology, that gives the programmer a means to find the location of a fault that causes the implementation to violate the specification. We propose Flasji, a stepwise process for finding the faulty method that is transparent to the programmer and applies the proposed techniques to find a collection of initial suspect candidates and subsequently decide the prime suspect among them. We carried out an experiment to evaluate Flasji and obtained very encouraging results.
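The suspect-then-prime-suspect idea can be illustrated with a loose sketch. This is not the Flasji algorithm itself: it only shows the generic pattern of collecting initial suspects from failing axiom-based tests and ranking them; the runs, method names, and scoring rule are all invented for the example.

```python
# Illustrative sketch (not Flasji): treat every method exercised by a
# failing axiom test as an initial suspect, then rank suspects by how
# often they appear in failing versus passing runs.
from collections import Counter

def rank_suspects(runs):
    """runs: list of (methods_exercised, passed) pairs from axiom tests.
    Returns suspects ordered from most to least suspicious."""
    fail, ok = Counter(), Counter()
    for methods, passed in runs:
        (ok if passed else fail).update(methods)
    suspects = {m for methods, passed in runs if not passed for m in methods}
    # Simple suspiciousness: share of a method's executions that failed.
    # Inner sort makes ties deterministic (alphabetical).
    return sorted(sorted(suspects),
                  key=lambda m: fail[m] / (fail[m] + ok[m]),
                  reverse=True)

runs = [
    ({"push", "size"}, True),
    ({"push", "pop"}, False),
    ({"pop", "size"}, False),
]
ranking = rank_suspects(runs)  # "pop" fails in every run it appears in
```

The prime suspect is then the top-ranked method; a real technique would refine this with the specification models and generated tests the abstract describes, rather than with counts alone.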