14 research outputs found

    HTCPM: A HYBRID TEST CASE PRIORITIZATION MODEL FOR WEB AND GUI APPLICATIONS

    Web applications and event-driven software (EDS) form a class of applications that is quickly becoming ubiquitous. All EDS take sequences of events (e.g., messages, mouse clicks) as input, change their state, and produce output (e.g., events, system calls, text messages); for web applications, the user-session data gathered as users operate the application plays the same role as input. Examples include web applications, graphical user interfaces (GUIs), network protocols, device drivers, and embedded applications. Testing the functional correctness of EDS such as stand-alone GUI and web-based applications is critical to many organizations. These applications share several important characteristics; both are particularly challenging to test because users can invoke many different sequences of events that affect application behavior. Hence, a novel hybrid model is presented here to prioritize and rank test cases.
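    The abstract does not spell out HTCPM's scoring formula, so the Python sketch below only illustrates the general shape of hybrid prioritization: each test case is ranked by a weighted blend of two signals, event-coverage breadth and fault-detection history. The weights, fields, and sample data are all assumed for illustration.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        events_covered: set   # GUI events or user-session requests exercised
        faults_detected: int  # fault-detection history from prior runs

    def hybrid_score(tc, w_coverage=0.6, w_history=0.4):
        """Blend coverage breadth with fault history (weights are assumed)."""
        return w_coverage * len(tc.events_covered) + w_history * tc.faults_detected

    def prioritize(suite):
        """Rank test cases so the highest hybrid scores run first."""
        return sorted(suite, key=hybrid_score, reverse=True)

    suite = [
        TestCase("t1", {"click:Save", "click:Open"}, 3),
        TestCase("t2", {"click:Open"}, 5),
        TestCase("t3", {"click:Save", "click:Open", "key:Ctrl+Z"}, 0),
    ]
    for tc in prioritize(suite):
        print(tc.name, round(hybrid_score(tc), 2))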

    Using a goal-driven approach to generate test cases for GUIs

    The widespread use of GUIs for interacting with software is leading to the construction of more and more complex GUIs. With the growing complexity come challenges in testing the correctness of a GUI and the underlying software. We present a new technique to automatically generate test cases for GUIs that exploits planning, a well developed and used technique in artificial intelligence. Given a set of operators, an initial state and a goal state, a planner produces a sequence of the operators that will change the initial state to the goal state. Our test case generation technique first analyzes a GUI and derives hierarchical planning operators from the actions in the GUI. The test designer determines the preconditions and effects of the hierarchical operators, which are then input into a planning system. With the knowledge of the GUI and the way in which the user will interact with the GUI, the test designer creates sets of initial and goal states. Given these initial and final states of the GUI, a hierarchical planner produces plans, or a set of test cases, that enable the goal state to be reached. Our technique has the additional benefit of putting verification commands into the test cases automatically. We implemented our technique by developing the GUI analyzer and extending a planner. We generated test cases for Microsoft's WordPad to demonstrate the viability and practicality of the approach.
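    Because the abstract states the planner's contract concretely (operators with preconditions and effects, searched from an initial state to a goal state), a minimal STRIPS-style sketch in Python can make it tangible. The operators and state atoms below are invented stand-ins for operators derived from GUI actions, and a breadth-first search stands in for the paper's hierarchical planner.

    from collections import deque

    # Each operator: name -> (preconditions, add-effects, delete-effects)
    OPERATORS = {
        "open_document": ({"app_running"}, {"doc_open"}, set()),
        "type_text":     ({"doc_open"}, {"doc_dirty"}, set()),
        "save_document": ({"doc_open", "doc_dirty"}, {"doc_saved"}, {"doc_dirty"}),
    }

    def plan(initial, goal):
        """Breadth-first forward search; the plan found is a GUI test case."""
        frontier = deque([(frozenset(initial), [])])
        seen = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, (pre, add, delete) in OPERATORS.items():
                if pre <= state:
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    print(plan({"app_running"}, {"doc_saved"}))
    # -> ['open_document', 'type_text', 'save_document']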

    A Regression Test Selection Technique for Graphical User Interfaces

    Regression testing is a quality control measure to ensure that the newly modified part of the software still complies with its specified requirements and that the unmodified part has not been affected by the maintenance activity. It is an important and expensive activity during the software maintenance process, and its purpose is to ensure quality and reliability in modified software. Regression test selection techniques focus on reusing the existing test suite of a previous version for the modified program. Many such techniques have been proposed for conventional and object-oriented software, but there is little discussion of applying them to graphical user interfaces (GUIs). This thesis addresses that gap. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to them. Unlike most previous techniques for selective retest, this thesis develops an event-driven regression test selection technique for GUIs. It defines an event dependence graph (EDG) to identify the interactions and relationships among events within GUI components, develops an algorithm to construct the EDG for a GUI, and presents the GUI modeling structure and its selective retest technique. An algorithm is given to automatically determine and generate a modified test suite for a GUI based on its original version. Experiments on an implementation of this solution are presented, along with newly found challenges that arose when applying it to an established GUI application. Finally, the feasibility of the approach and future areas of research are discussed based on the findings from the implementation.
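    A rough Python sketch of the selective-retest idea, under assumed data structures (the thesis's EDG construction and GUI model are richer): the dependence graph is closed transitively over the modified events, and only tests whose event sequences touch an affected event are re-run.

    # Event dependence graph: event -> events whose behavior depends on it
    EDG = {
        "open_file": ["edit_text", "save_file"],
        "edit_text": ["save_file"],
        "save_file": [],
    }

    def affected_events(modified):
        """Transitively close the dependence graph over the modified events."""
        affected, worklist = set(modified), list(modified)
        while worklist:
            for dep in EDG.get(worklist.pop(), []):
                if dep not in affected:
                    affected.add(dep)
                    worklist.append(dep)
        return affected

    def select_tests(suite, modified):
        """Keep only tests whose event sequence touches an affected event."""
        hot = affected_events(modified)
        return [name for name, events in suite.items() if hot & set(events)]

    suite = {
        "t_open_save": ["open_file", "save_file"],
        "t_edit":      ["open_file", "edit_text"],
        "t_about":     ["show_about"],
    }
    print(select_tests(suite, {"edit_text"}))  # -> ['t_open_save', 't_edit']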

    Large Scale Distributed Testing for Fault Classification and Isolation

    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure the quality of modern software, but there are still several limitations to be overcome. Among the challenges faced by current QA tools are (1) increased use of distributed software solutions, (2) limited test resources and constrained time schedules, and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, new and novel approaches are needed to address these challenges. This dissertation explores three strategies to do this. First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll resulting in a more robust, configurable, and user-friendly implementation for both the client and server components. Additionally, this dissertation details infrastructure development done to support the evaluation of DCQA processes using Skoll -- specifically the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the latter parts of this work leveraged the improvements to Skoll as their testbed. Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). In this work, I present an empirical study on JABA where we automatically classified execution data into passing and failing behaviors using adaptive association trees. Finally, I present a long-term case study of the highly configurable MySQL open-source project. Real-world software systems can have configuration spaces that are too large to test exhaustively, but that nonetheless contain subtle interactions that lead to failure-inducing system faults. In the literature, covering arrays, in combination with classification techniques, have been used to effectively sample these large configuration spaces and to detect problematic configuration dependencies. Applying this approach in practice, however, is tricky because testing time and resource availability are unpredictable. Therefore we developed and evaluated an alternative approach that incrementally builds covering array schedules. This approach begins at a low strength and then iteratively increases the strength as resources allow, reusing previous test results to avoid duplicated effort. The results are test schedules that allow for successful classification with fewer test executions and that require less test-subject-specific information to develop.
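    As a sketch of the incremental-schedule idea, the Python below starts at strength t = 1 and then rises to t = 2, each time greedily generating only the configurations needed to cover value tuples that earlier runs left uncovered. The option space is a toy and the greedy generator is generic; it illustrates the scheduling principle, not the dissertation's algorithm.

    from itertools import combinations, product

    OPTIONS = {"cache": ["on", "off"], "engine": ["innodb", "myisam"], "log": ["on", "off"]}

    def uncovered_tuples(t, executed):
        """All t-way value combinations not yet covered by executed configs."""
        names = sorted(OPTIONS)
        todo = set()
        for cols in combinations(names, t):
            for values in product(*(OPTIONS[c] for c in cols)):
                todo.add(tuple(zip(cols, values)))
        for cfg in executed:
            for cols in combinations(names, t):
                todo.discard(tuple((c, cfg[c]) for c in cols))
        return todo

    def next_batch(t, executed):
        """Greedily emit configs until every t-way tuple is covered."""
        batch = []
        while (todo := uncovered_tuples(t, executed + batch)):
            seed = dict(next(iter(todo)))  # one uncovered tuple as a seed
            batch.append({c: seed.get(c, OPTIONS[c][0]) for c in OPTIONS})
        return batch

    schedule = []
    for strength in (1, 2):  # increase strength as the test budget allows
        schedule += next_batch(strength, schedule)
    print(len(schedule), "configurations for up to 2-way coverage")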

    Automated Unit Testing of Evolving Software

    As software programs evolve, developers need to ensure that new changes do not affect the originally intended functionality of the program. To increase their confidence, developers commonly write unit tests along with the program and execute them after a change is made. However, manually writing these unit tests is difficult and time-consuming, and as their number increases, so does the cost of executing and maintaining them. Automated test generation techniques have been proposed in the literature to assist developers in writing these tests. However, it remains an open question how well these tools can help with fault finding in practice, and maintaining automatically generated tests may require extra effort compared to human-written ones. This thesis evaluates the effectiveness of a number of existing automatic unit test generation techniques at detecting real faults and explores how these techniques can be improved. In particular, we present a novel multi-objective search-based approach for generating tests that reveal changes across two versions of a program. We then investigate whether these tests can be used in such a way that no maintenance effort is necessary. Our results show that, overall, state-of-the-art test generation tools can indeed be effective at detecting real faults: collectively, the tools revealed more than half of the bugs we studied. We also show that our proposed alternative technique, which is better suited to the problem of revealing changes, can detect more faults and does so more frequently. However, we also find that for the majority of object-oriented programs, even a random search can achieve good results. Finally, we show that such change-revealing tests can be generated on demand in practice, without requiring them to be maintained over time.
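    The change-revealing idea can be sketched as differential testing between two versions of a unit: search for an input on which the versions disagree, then pin the old behavior in an assertion. The thesis uses a multi-objective search that first reaches the changed code and then propagates a behavioral difference; the plain random search and the toy pricing functions below are only stand-ins for that engine.

    import random

    def price_v1(qty):  # hypothetical unit under test, old version
        return qty * 10.0 * (0.9 if qty >= 100 else 1.0)

    def price_v2(qty):  # new version: the discount threshold changed
        return qty * 10.0 * (0.9 if qty > 100 else 1.0)

    def reveal_change(old, new, trials=10_000):
        """Random search for an input that distinguishes old and new behavior."""
        for _ in range(trials):
            qty = random.randint(0, 200)
            if old(qty) != new(qty):
                return qty  # a change-revealing test input
        return None

    qty = reveal_change(price_v1, price_v2)
    if qty is not None:
        print(f"assert price({qty}) == {price_v1(qty)}  # passes on v1, fails on v2")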

    Feedback-Directed Model-Based GUI Test Case Generation

    Most of today's software users interact with software through a graphical user interface (GUI), which is representative of the broader class of event-driven software (EDS). As the correctness of the GUI is necessary to ensure the correctness of the overall software, its quality assurance (QA) is becoming increasingly important. During software testing, an important QA activity, test cases are created and executed on the software. For GUIs, test cases are modeled as sequences of user input events. Because each possible sequence of user events may potentially be a test case, and because today's GUIs offer enormous flexibility to end users, in principle GUI testing requires a prohibitively large number of test cases. Any practical test case generation technique must sample the vast GUI input space. Existing techniques are either extremely resource intensive or do not adequately model complex GUI behaviors, thereby limiting fault detection. This research develops new models, algorithms, and metrics for automated GUI test case generation. A novel aspect of this work is its use of software runtime information, collected as feedback during GUI test case execution and used to generate additional test cases that model complex GUI behaviors. One set of empirical studies shows that the feedback-directed technique significantly improves upon existing techniques and helps to identify serious problems in fielded GUIs. Another set of studies, conducted on in-house software applications, shows that the test suites generated by the new technique outperform their coverage-equivalent counterparts in terms of fault detection. Although the focus of this work is on the GUI domain, the techniques developed are general and applicable to the broader class of EDS. In fact, this work has already had an impact on the research and practice of testing other EDS; in particular, it has been extended by other researchers to test web applications.
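    A condensed Python sketch of the feedback loop, with a toy class standing in for a real GUI: execute short event sequences, observe at runtime which events actually interact (here, whether execution order changes a state fingerprint), and grow only the interacting sequences into longer test cases. The models and metrics in the dissertation are far richer than this.

    from itertools import permutations

    class ToyGUI:
        """Stand-in application: events mutate state; some pairs interact."""
        def __init__(self):
            self.state = {"doc": "", "bold": False}
        def fire(self, event):
            if event == "toggle_bold":
                self.state["bold"] = not self.state["bold"]
            elif event == "type_a":
                self.state["doc"] += "A" if self.state["bold"] else "a"

    def fingerprint(seq):
        """Runtime feedback: the application state after firing a sequence."""
        gui = ToyGUI()
        for ev in seq:
            gui.fire(ev)
        return tuple(sorted(gui.state.items()))

    def interacting_pairs(events):
        """Two events interact if their execution order changes the state."""
        return [p for p in permutations(events, 2)
                if fingerprint(p) != fingerprint(p[::-1])]

    events = ["toggle_bold", "type_a"]
    seeds = interacting_pairs(events)
    # Grow only the sequences whose events were observed to interact.
    test_cases = [list(pair) + [ev] for pair in seeds for ev in events]
    print(seeds)
    print(test_cases)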