
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Preemptive regression testing of workflow-based web services


    Is XML-based test case prioritization for validating WS-BPEL evolution effective in both average and adverse scenarios?

    In real life, a tester can only afford to apply one test case prioritization technique to one test suite against a service-oriented workflow application once in the regression testing of the application, even if this results in an adverse scenario in which the actual performance in the test session is far below the average. It is unclear whether the factors of test case prioritization techniques known to be significant in terms of average performance can be extrapolated to adverse scenarios. In this paper, we examine whether such a factor or technique may consistently affect the rate of fault detection in both the average and adverse scenarios. The factors studied include prioritization strategy, artifacts to provide coverage data, ordering direction of a strategy, and the use of executable and non-executable artifacts. The results show that only a minor portion of the 10 studied techniques, most of which are based on the iterative strategy, are consistently effective in both average and adverse scenarios. To the best of our knowledge, this paper presents the first piece of empirical evidence regarding the consistency in the effectiveness of test case prioritization techniques and factors of service-oriented workflow applications between average and adverse scenarios.
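    A plausible reading of the "iterative strategy" mentioned above is the familiar additional-greedy ordering that repeatedly picks the test case covering the most not-yet-covered artifacts. The sketch below is an illustration only (not the paper's technique) and contrasts that ordering with a one-shot "total coverage" ordering; the test identifiers and the coverage map are hypothetical placeholders.

```java
import java.util.*;

/**
 * Rough sketch of two common coverage-based prioritization strategies:
 * "total" (order once by total coverage) and "additional"/iterative
 * (repeatedly pick the test covering the most yet-uncovered artifacts).
 * Test IDs and coverage data are hypothetical, not from the paper.
 */
public class PrioritizationSketch {

    // test id -> artifacts it covers (e.g., WS-BPEL activities or XPath branches)
    static List<String> totalStrategy(Map<String, Set<String>> coverage) {
        List<String> order = new ArrayList<>(coverage.keySet());
        order.sort((a, b) -> coverage.get(b).size() - coverage.get(a).size());
        return order;
    }

    static List<String> additionalStrategy(Map<String, Set<String>> coverage) {
        List<String> order = new ArrayList<>();
        Set<String> remaining = new HashSet<>(coverage.keySet());
        Set<String> covered = new HashSet<>();
        while (!remaining.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (String t : remaining) {
                Set<String> gain = new HashSet<>(coverage.get(t));
                gain.removeAll(covered);
                if (gain.size() > bestGain) { bestGain = gain.size(); best = t; }
            }
            // When no remaining test adds new coverage, reset and continue greedily.
            if (bestGain == 0) covered.clear();
            order.add(best);
            remaining.remove(best);
            covered.addAll(coverage.get(best));
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cov = new LinkedHashMap<>();
        cov.put("t1", Set.of("a1", "a2", "a3"));
        cov.put("t2", Set.of("a3", "a4"));
        cov.put("t3", Set.of("a1", "a2"));
        System.out.println("total:      " + totalStrategy(cov));
        System.out.println("additional: " + additionalStrategy(cov));
    }
}
```

    Whether either ordering holds up in an adverse (worst-case) test session rather than on average is exactly the question the study above investigates.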

    Preemptive regression test scheduling strategies: a new testing approach to thriving on the volatile service environments

    A workflow-based web service may use ultra-late binding to invoke external web services to concretize its implementation at run time. Nonetheless, such external services, or the availability of recently used external services, may evolve without prior notification, dynamically triggering the workflow-based service to bind to new replacement external services to continue the current execution. Any integration mismatch may cause a failure. In this paper, we propose Preemptive Regression Testing (PRT), a novel testing approach that addresses this adaptation issue. Whenever such a late change to the service under regression test is detected, PRT preempts the currently executing regression test suite, searches for additional test cases as fixes, runs these fixes, and then resumes the execution of the regression test suite from the preemption point. © 2012 IEEE
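    A minimal sketch of the preempt/fix/resume control flow described in the abstract is shown below. It assumes two hypothetical helpers, a change detector and a lookup of extra "fix" test cases for a newly bound external service, and is not the authors' implementation.

```java
import java.util.*;
import java.util.function.*;

/**
 * Sketch of preemptive regression testing: run the suite, and whenever a
 * late binding change is detected, preempt, run fix test cases for the
 * rebound service, then resume from the preemption point.
 */
public class PreemptiveRegressionSketch {

    interface TestCase { String id(); boolean run(); }

    static void runWithPreemption(List<TestCase> suite,
                                  Supplier<Optional<String>> changedService,   // hypothetical change detector
                                  Function<String, List<TestCase>> fixesFor) { // hypothetical fix lookup
        for (int i = 0; i < suite.size(); i++) {
            Optional<String> change = changedService.get();
            if (change.isPresent()) {
                System.out.println("Preempting at test " + suite.get(i).id()
                        + " due to rebinding of " + change.get());
                for (TestCase fix : fixesFor.apply(change.get())) {
                    System.out.println("  fix " + fix.id() + (fix.run() ? " passed" : " FAILED"));
                }
                // Resume the original suite from the preemption point (index i).
            }
            TestCase t = suite.get(i);
            System.out.println("test " + t.id() + (t.run() ? " passed" : " FAILED"));
        }
    }

    public static void main(String[] args) {
        // Simulated stream of detector events: no change, one rebinding, no change.
        Iterator<Optional<String>> events = List.of(
                Optional.<String>empty(), Optional.of("ShippingService"), Optional.<String>empty()).iterator();
        runWithPreemption(List.of(tc("t1"), tc("t2"), tc("t3")),
                () -> events.hasNext() ? events.next() : Optional.<String>empty(),
                svc -> List.of(tc("fix-" + svc)));
    }

    static TestCase tc(String id) {
        return new TestCase() {
            public String id() { return id; }
            public boolean run() { return true; }
        };
    }
}
```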

    A subsumption hierarchy of test case prioritization for composite services


    Time-Space Efficient Regression Testing for Configurable Systems

    Configurable systems are those that can be adapted from a set of options. They are prevalent, and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously developed technique, SPLat, which explores all dynamically reachable configurations from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes, whereas EvoSPLat for RTS prunes tests (not configurations) that are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising. We observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced the running time by 35%. Comparing EvoSPLat with sampling techniques, 2-wise was the most efficient technique, but it missed two bugs, whereas EvoSPLat detected all bugs four times faster than 6-wise, on average.
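    The two pruning scenarios can be pictured as the same change-impact filter applied to different artifacts: RCS keeps only configurations whose reachable code elements intersect the change set, RTS keeps only tests that do. The sketch below is not EvoSPLat itself; the coverage maps and element names are hypothetical placeholders.

```java
import java.util.*;
import java.util.stream.*;

/**
 * Sketch of change-impact pruning in the two regression scenarios:
 * keep an item (configuration or test) only if it reaches at least
 * one changed code element.
 */
public class ImpactPruningSketch {

    static Set<String> pruneUnimpacted(Map<String, Set<String>> reachedBy, Set<String> changed) {
        return reachedBy.entrySet().stream()
                .filter(e -> e.getValue().stream().anyMatch(changed::contains))
                .map(Map.Entry::getKey)
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }

    public static void main(String[] args) {
        Set<String> changed = Set.of("Parser.parse");

        // RCS view: configuration -> code elements reachable under that configuration.
        Map<String, Set<String>> configCoverage = Map.of(
                "OPT_A=on,OPT_B=off", Set.of("Parser.parse", "Lexer.next"),
                "OPT_A=off,OPT_B=on", Set.of("Lexer.next"));
        System.out.println("RCS keeps configs: " + pruneUnimpacted(configCoverage, changed));

        // RTS view: test -> code elements it executes across reachable configurations.
        Map<String, Set<String>> testCoverage = Map.of(
                "testParse", Set.of("Parser.parse"),
                "testLex",   Set.of("Lexer.next"));
        System.out.println("RTS keeps tests:   " + pruneUnimpacted(testCoverage, changed));
    }
}
```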