
    Practical Assessment Scheme to Service Selection for SOC-based Applications

    Service-Oriented Computing promotes building applications by consuming reusable services. However, selecting adequate services for a specific application remains a major challenge: even with a reduced set of candidate services, the assessment effort can be overwhelming. In previous work we presented a novel approach to assist developers in the discovery, selection, and integration of services. This paper presents the selection method, which is based on a comprehensive scheme for assessing the compatibility of services' interfaces. The scheme lets developers gain knowledge of likely service interactions and the adaptations required to achieve a successful integration. It is complemented by a framework based on black-box testing that verifies compatibility with the expected behavior of a candidate service. The usefulness of the selection method is highlighted through a series of case studies. (Sociedad Argentina de Informática e Investigación Operativa)
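    The paper's compatibility scheme itself is not reproduced in the abstract, but the idea of grading interface compatibility can be sketched in a few lines. Everything below (the Operation record, the weights, the compatibility function) is a hypothetical stand-in, not the paper's actual scheme: it scores a candidate service against a required interface, giving partial credit to matches that would only need an adapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    name: str
    inputs: tuple   # parameter type names
    output: str

def compatibility(required, candidate):
    """Fraction of required operations with an exact or adaptable match."""
    score = 0.0
    for req in required:
        for cand in candidate:
            if req.inputs == cand.inputs and req.output == cand.output:
                # Same signature; a name mismatch needs only renaming.
                score += 1.0 if req.name == cand.name else 0.75
                break
            if sorted(req.inputs) == sorted(cand.inputs) and req.output == cand.output:
                # Same parameter types in a different order: an adapter
                # could reorder the arguments, so give partial credit.
                score += 0.5
                break
    return score / len(required) if required else 1.0

required = [Operation("getQuote", ("str",), "float")]
candidate = [Operation("fetchQuote", ("str",), "float")]
print(compatibility(required, candidate))   # 0.75: adaptable by renaming
```

    A real scheme would also have to weigh protocol and behavioral compatibility, which is what the paper's black-box testing framework addresses.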

    Assuring the model evolution of protocol software specifications by regression testing process improvement

    A preliminary version of this paper was presented at the 10th International Conference on Quality Software (QSIC 2010). Model-based testing helps test engineers automate their testing tasks so that they are more cost-effective. When the model changes because the specification evolves, it is important to keep the test suites up to date for regression testing. A complete regeneration of the whole test suite from the new model, although inefficient, is still frequently used in industry, including at Microsoft. To handle specification evolution effectively, we propose a test case reusability analysis technique that identifies reusable test cases of the original test suite based on graph analysis. We also develop a test suite augmentation technique that generates new test cases to cover the change-related parts of the new model. Experiments on four large protocol document testing projects show that our technique can successfully identify a high percentage of reusable test cases and generate low-redundancy new test cases. Compared with a complete regeneration of the whole test suite, our technique significantly reduces regression testing time while maintaining stable requirement coverage over the evolution of requirements specifications. Copyright © 2011 John Wiley & Sons, Ltd.
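    The abstract describes the reusability analysis as graph-based; below is a minimal sketch of the core idea, assuming a test case is simply a path of transitions through the behavioral model and the evolved model is given as a set of transitions (the function names and data layout are ours, not the paper's):

```python
def classify_tests(old_tests, new_model_edges):
    """old_tests: {test_id: [(src, label, dst), ...]};
       new_model_edges: set of (src, label, dst) in the evolved model."""
    reusable, obsolete = [], []
    for tid, path in old_tests.items():
        if all(step in new_model_edges for step in path):
            reusable.append(tid)    # still a valid path in the new model
        else:
            obsolete.append(tid)    # exercises removed or changed behavior
    return reusable, obsolete

def uncovered(new_model_edges, old_tests, reusable):
    """Change-related transitions no reusable test touches; the augmentation
       step would generate new, low-redundancy tests for exactly these."""
    covered = {step for tid in reusable for step in old_tests[tid]}
    return new_model_edges - covered
```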

    Estimating and Improving the Performance of Prediction Models for Regression Test Selection

    Researchers have proposed models to predict the percentage of test cases selected when a Regression Test Selection (RTS) technique is used. One of the most successful and best-performing RTS predictors is the Rosenblum and Weyuker (RW) coverage-based prediction model. However, previous evaluations of the RW predictor show that although it performs well on some subject programs, it deviates significantly from the actual percentage on others. To understand what impacts the RW predictor's performance, this work presents a set of experiments on four factors that can potentially affect RTS prediction accuracy. We set up two different sets of experiments on several open-source Java test subjects and three RTS techniques. Our study of the effect of each factor on RW performance reveals that a large amount of code change and significant code coverage overlap between test cases are the two factors contributing to the RW predictor's prediction error. Based on the experimental results, and through regression analysis of the impacting factors, we propose an RW error estimator that can help testers and developers gain a better understanding of the RW predictor's confidence level and get insight into its applicability to different organizations' products and processes. To further improve the RW predictor's performance, we propose an improved RW prediction model that uses the error estimator to compensate for prediction error. We also design a specific RTS improvement technique, alongside Harrold et al.'s improvement, which also incorporates change history. Our experiments on these improved RW predictors demonstrate that they reduce RW prediction error and improve performance.
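    For readers unfamiliar with the RW model, the sketch below shows a coverage-based predictor in its spirit, under the simplifying assumption that each covered entity is equally likely to be the one that changes: the expected fraction of tests a safe RTS technique selects is then the average, over entities, of the fraction of tests covering each entity. This is an illustration, not the exact published formulation.

```python
def rw_predict(coverage):
    """coverage: {entity: set of ids of tests covering it}; returns the
       predicted fraction of the suite selected for a single random change."""
    tests = set().union(*coverage.values())
    per_entity = [len(covering) / len(tests) for covering in coverage.values()]
    return sum(per_entity) / len(per_entity)

# Heavy coverage overlap pushes the prediction toward 1.0, one of the two
# error sources the experiments above identify.
print(rw_predict({"f": {"t1", "t2"}, "g": {"t2"}, "h": {"t1", "t2"}}))  # ~0.83
```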

    COLLABORATIVE TESTING ACROSS SHARED SOFTWARE COMPONENTS

    Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested, and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies across and among the test efforts of different component-based systems. In practice, however, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve test effectiveness for each component, as well as for each component-based software system, by using information obtained when testing across multiple components. To achieve this goal, I developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying them. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exists. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes implementing different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation shows the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
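    The dissertation's infrastructure is much richer than this, but the data-sharing idea can be sketched as a shared registry of test outcomes keyed by component, version, and configuration, letting one tester skip a configuration another team has already exercised. The class and its methods below are hypothetical illustrations, not the dissertation's tools.

```python
class SharedTestRegistry:
    """Shared store of results for (component, version, configuration)."""
    def __init__(self):
        self._results = {}

    def publish(self, component, version, config, outcome):
        self._results[(component, version, config)] = outcome

    def lookup(self, component, version, config):
        return self._results.get((component, version, config))

registry = SharedTestRegistry()
registry.publish("libparse", "2.1", ("linux", "jdk11"), "pass")

# A second team checks the registry before spending its own test budget.
if registry.lookup("libparse", "2.1", ("linux", "jdk11")) is None:
    print("untested configuration: test it locally")
else:
    print("already tested elsewhere: reuse the shared result")
```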

    A Unified Approach to Regression Testing for Mobile Apps

    Mobile applications are used daily all over the world and have become essential in our personal lives and at work. Because mobile applications update frequently, it is important that developers perform regression testing to ensure their quality. In addition, the mobile application market has been growing rapidly, allowing anyone to write and publish an application without appropriate validation. The growth in the number of mobile apps, and in their functionality and complexity, has created a need for regression testing. In this dissertation, we adapted the FSMWeb [14] approach for selective regression testing to mobile apps. We applied rules to classify the original set of tests of a mobile app into obsolete, retestable, and reusable tests based on the types of changes to the app's model; new tests are added to cover portions that have not been tested. As regression test suites change, we want to ensure that required tests are included to satisfy testing criteria, but also that redundant tests are removed so as not to bloat the regression test suite. To this end, we developed a test case minimization approach for FSMApp, based on concept analysis, that removes redundant test cases. Next, we proposed an approach to prioritize test cases for mobile apps. Naturally, it is desirable to select the test cases that are most likely to reveal defects in the app under test. We prioritized test paths for mobile apps based on input complexity, since more inputs may indicate more complex functionality, which in turn tends to be more fault-prone. Regression testing is an important activity in software maintenance and enhancement, and combining several regression testing techniques can lead to a more efficient and effective regression test suite. In this dissertation, we present guidelines for combining regression testing approaches in a systematic way: we outline the situations that can occur and show how each of them influences which combination to use. Finally, we validated the newly proposed regression testing approaches for mobile apps, and the guidelines for combining them, via a case study. The results show that the FSMApp approaches are applicable, efficient, and effective.
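    As a rough illustration of the input-complexity prioritization described above, the sketch below assumes each test path records the inputs required along its edges and orders tests so the most input-heavy, and presumably most fault-prone, paths run first. The data layout is an assumption, not FSMApp's actual representation.

```python
def prioritize_by_input_complexity(test_paths):
    """test_paths: {test_id: [(edge_label, [input names]), ...]};
       returns test ids ordered most input-heavy first."""
    def complexity(path):
        return sum(len(inputs) for _, inputs in path)
    return sorted(test_paths, key=lambda t: complexity(test_paths[t]),
                  reverse=True)

paths = {
    "t1": [("login", ["user", "password"]), ("search", ["query"])],
    "t2": [("about", [])],
}
print(prioritize_by_input_complexity(paths))   # ['t1', 't2']
```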

    A Bayesian Framework for Software Regression Testing

    Software maintenance reportedly accounts for much of the total cost of developing software. These costs arise because modifying software is a highly error-prone task. Changing software to correct faults or add new functionality can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test software after modifications, a task commonly known as regression testing. Regression testing typically involves re-executing test cases developed for previous versions. Re-running all existing test cases, however, is often costly and sometimes even infeasible due to time and resource constraints, and re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and yields no benefit. The research community has thus sought ways to optimize regression testing by lowering the cost of test re-execution while preserving its effectiveness. To this end, researchers have proposed selecting a subset of test cases according to a variety of criteria (test case selection) and reordering test cases for execution to maximize a score function (test case prioritization). This dissertation presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Bayesian Network (BN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. Our framework builds upon existing research in this area in several ways. First, it incorporates different kinds of information extracted from the software into one model, which reduces uncertainty by using more of the available information and enables better modeling of the system. Moreover, it provides flexibility by enabling a choice of which sources of information to use; research in software measurement has shown that different systems require different techniques, hence the need for such flexibility. Using the proposed framework, engineers can customize their regression testing techniques to fit the characteristics of their systems, using the measurements most appropriate to their environment. We evaluate the performance of the proposed BN-based framework empirically. Although the framework can support both test case selection and prioritization, we propose using it primarily as a prioritization technique, and we therefore compare it against other prioritization techniques from the literature. Our empirical evaluation examines a variety of objects and fault types. The results show that the proposed framework outperforms other techniques in some cases and performs comparably in the others. In sum, this thesis introduces a novel Bayesian framework for optimizing regression testing and shows that it can help testers improve the cost-effectiveness of their regression testing tasks.
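    The dissertation builds a full Bayesian Network from measurements of the system; as a far simpler stand-in that conveys the probabilistic view, the sketch below combines two pieces of per-test evidence into a naive-Bayes-style fault-detection probability and prioritizes tests by it. The prior and likelihood ratios are invented for illustration only.

```python
def fault_probability(covers_change, past_fault_rate, prior=0.1):
    """Posterior-style score that a test reveals a fault this cycle."""
    odds = prior / (1 - prior)
    odds *= 4.0 if covers_change else 0.5          # covers change-impacted code?
    odds *= 3.0 if past_fault_rate > 0.2 else 1.0  # found faults before?
    return odds / (1 + odds)

# (covers change-impacted code, historical fault-detection rate)
tests = {"t1": (True, 0.3), "t2": (False, 0.0), "t3": (True, 0.05)}
ranked = sorted(tests, key=lambda t: fault_probability(*tests[t]), reverse=True)
print(ranked)   # ['t1', 't3', 't2']: likeliest fault-finders run first
```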

    Fail-Safe Test Generation of Safety Critical Systems

    This dissertation introduces a technique for testing proper failure mitigation in safety-critical systems. Unlike other approaches, which integrate behavioral and failure models and then generate tests from the integrated model, we build safety mitigation tests from an existing behavioral test suite, using an explicit mitigation model. From this model we generate mitigation paths, which are then woven into the original test suite at selected failure points to create failure-mitigation tests (safety mitigation tests).
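    A minimal sketch of the weaving step, assuming a behavioral test is a list of steps and that the mitigation path takes over after the injected failure (in the technique itself the mitigation model determines what follows; all names here are illustrative):

```python
def weave(behavioral_test, failure_point, failure, mitigation_path):
    """Run the original test up to the failure point, inject the failure,
       then exercise the required mitigation behavior."""
    return (behavioral_test[:failure_point + 1]
            + [f"inject:{failure}"]
            + mitigation_path)

test = ["login", "open_valve", "read_sensor", "close_valve"]
print(weave(test, 1, "sensor_timeout", ["raise_alarm", "safe_shutdown"]))
# ['login', 'open_valve', 'inject:sensor_timeout', 'raise_alarm', 'safe_shutdown']
```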

    Improvements of and Extensions to FSMWeb: Testing Mobile Apps

    A mobile application is a software program that runs on a mobile device. In 2017, 178.1 billion mobile apps were downloaded, and the number is expected to grow to 258.2 billion downloads in 2022 [19]. This volume poses a challenge for mobile application testers to find the right approach to testing apps. This dissertation extends the FSMWeb approach for testing web applications [50] to testing mobile applications (FSMApp). While analyzing how FSMWeb could be extended to test mobile apps, we detected a number of shortcomings, which we improved upon; we discuss these first. We present an approach to generate black-box tests that exercise fail-safe behavior in web applications, and we apply it to a large commercial web application. The approach uses a functional (behavioral) model to generate tests. It then determines at which states in the execution of a behavioral test failures can occur and which mitigation requirements need to be tested. The mitigation requirements are used to build mitigation models for each failure type, and from those mitigation models failure-mitigation tests are generated. Next, this dissertation provides an approach for selective black-box model-based fail-safe regression testing of web applications. It classifies existing tests and test requirements as reusable, retestable, and obsolete; removing reusable test requirements reduces test requirements by 49% to 65% in the case study. The approach also uses partial regeneration of new tests wherever possible. Third, we present the new FSMApp approach to testing mobile applications and compare it with several other approaches [88, 37]. A number of case studies explore the applicability, scalability, effectiveness, and efficiency of FSMApp relative to these approaches. Future work suggests how to improve test generation and execution efficiency with FSMApp.
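    FSMApp's generation algorithm is not spelled out in the abstract; as a hypothetical stand-in conveying the flavor of FSM-based test generation, the sketch below greedily builds start-to-end paths through an app model until every transition is covered at least once.

```python
from collections import deque

def shortest_path(fsm, src, dst):
    """BFS over fsm = {state: [(label, next_state), ...]}; returns a list of
       (src, label, dst) transitions, or None if dst is unreachable."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        state, path = queue.popleft()
        if state == dst:
            return path
        for label, nxt in fsm.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(state, label, nxt)]))
    return None

def cover_all_edges(fsm, start, end):
    """Greedy edge coverage: route start -> uncovered edge -> end, repeat."""
    uncovered = {(s, l, n) for s, outs in fsm.items() for l, n in outs}
    tests = []
    while uncovered:
        s, l, n = next(iter(uncovered))
        prefix = shortest_path(fsm, start, s)
        suffix = shortest_path(fsm, n, end)
        if prefix is None or suffix is None:    # unreachable edge: drop it
            uncovered.discard((s, l, n))
            continue
        path = prefix + [(s, l, n)] + suffix
        tests.append(path)
        uncovered -= set(path)                  # credit every edge on the path
    return tests

fsm = {"home":   [("tap_menu", "menu")],
       "menu":   [("tap_item", "detail"), ("back", "home")],
       "detail": [("back", "menu")]}
for t in cover_all_edges(fsm, "home", "home"):
    print([label for _, label, _ in t])
```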