    Improving regression testing transparency and efficiency with history-based prioritization – an industrial case study

    Background: History-based regression testing was proposed as a basis for automating regression test selection, for the purpose of improving transparency and test efficiency at the function test level in a large-scale software development organization. Aim: The study aims to investigate the current manual regression testing process as well as to adopt, implement, and evaluate the effect of the proposed method. Method: A case study was launched, including identification of important factors for prioritization and selection of test cases, implementation of the method, and a quantitative and qualitative evaluation. Results: Ten different factors, of which two are history-based, are identified as important for selection. Most of the information needed is available in the test management and error reporting systems, while some is embedded in the process. Transparency is increased through a semi-automated method. Our quantitative evaluation indicates a possibility to improve efficiency, while the qualitative evaluation supports the general principles of history-based testing but suggests changes in implementation details.
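
    As a minimal illustration of how history-based factors might feed a priority score, here is a sketch in Python; the factor names, weights, and normalization are illustrative assumptions, not the study's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failures: int = 0          # historical fault-detection count
    executions: int = 0        # how many times the test has been run
    cycles_since_run: int = 0  # test sessions since last execution

def history_priority(tc: TestCase, w_fail: float = 0.7, w_age: float = 0.3) -> float:
    """Combine two hypothetical history-based factors into one score:
    the historical failure rate rewards tests that found faults before,
    and the age term rewards tests not executed recently."""
    failure_rate = tc.failures / tc.executions if tc.executions else 1.0
    age = min(tc.cycles_since_run / 10.0, 1.0)  # normalize to [0, 1]
    return w_fail * failure_rate + w_age * age

tests = [
    TestCase("tc_login", failures=3, executions=20, cycles_since_run=1),
    TestCase("tc_export", failures=0, executions=15, cycles_since_run=8),
    TestCase("tc_new_feature"),  # never executed: treated as high priority
]
for tc in sorted(tests, key=history_priority, reverse=True):
    print(f"{tc.name}: {history_priority(tc):.2f}")
```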

    A Mapping Study of the scientific merit of papers on web application test techniques, considering their validity threats

    Progress in software engineering requires (1) more empirical studies of quality, (2) increased focus on synthesizing evidence, (3) more theories to be built and tested, and (4) recognition that the validity of an experiment is directly related to the level of confidence in the process of experimental investigation. This paper presents the results of a qualitative and quantitative classification of the threats to validity in software engineering experiments, comprising a total of 92 articles published in the period 2001-2015 and dealing with software testing of Web applications. Our results show that 29.4% of the analyzed articles do not mention any threats to validity, 44.2% do so briefly, and 14% do so judiciously; this raises the question of whether these studies have scientific value.

    Exploring regression testing and software product line testing - research and state of practice

    In large software organizations with a product line development approach, selective testing of product variants is necessary in order to keep pace with the decreased development time for new products enabled by systematic reuse. The close relationship between products in a product line indicates an opportunity to reduce the testing effort due to redundancy. In many cases test selection is performed manually, based on test leaders’ expertise. This makes the cost and quality of the testing highly dependent on the skills and experience of the test leaders. There is a need in industry for systematic approaches to test selection. The goal of our research is to improve the control of the testing and reduce the amount of redundant testing in the product line context by applying regression test selection strategies. In this thesis, the state of the art of regression testing and software product line testing is explored. Two extensive systematic reviews are conducted, as well as an industrial survey of the state of practice in regression testing and an industrial evaluation of a pragmatic regression test selection strategy. Regression testing is not an isolated one-off activity, but rather an activity of varying scope and preconditions, strongly dependent on the context in which it is applied. Several techniques for regression test selection have been proposed and evaluated empirically, but in many cases the context is too specific for a technique to be easily applied directly by software developers. In order to improve the possibility of generalizing empirical results on regression test selection, guidelines for reporting the testing context are discussed in this thesis. Software product line testing is a relatively new research area. The challenges are well understood, but when looking for solutions to these challenges we mostly find proposals, and empirical evaluations are sparse. Regression test selection strategies proposed in the literature are not easily applicable in the product line context. Instead, control may be increased by improved visibility of the effects of testing and proper measurements of software quality. The focus of our future work will be on how to guide the planning and assessment of regression testing activities in large, complex reuse-based systems, by visualizing the quality achieved in different parts of the system and evaluating the effects of different selection strategies when applied in various regression testing situations.

    Test case prioritization approaches in regression testing: A systematic literature review

    Context: Software quality can be assured through the software testing process. However, the software testing phase is expensive, as it consumes considerable time. By scheduling the test case execution order through a prioritization approach, software testing efficiency can be improved, especially during regression testing. Objective: Prioritization is a notable step in constructing an effective software testing environment that can increase a system's commercial value. The main idea of this review is to examine and classify the current test case prioritization approaches based on the articulated research questions. Method: A set of search keywords applied to appropriate repositories was used to extract the most relevant studies fulfilling all the defined criteria, classified into journal, conference, symposium, and workshop categories. 69 primary studies were selected through the review strategy. Results: The primary studies comprised 40 journal articles, 21 conference papers, three workshop articles, and five symposium articles. The results indicate that TCP approaches are still broadly open for improvement. Each TCP approach has its own potential value, advantages, and limitations. Additionally, we found that variations in the starting point of the TCP process among the approaches provide different timelines and benefits, allowing project managers to choose the approach that suits the project schedule and available resources. Conclusion: Test case prioritization has already been considerably discussed in the software testing domain. However, quite a number of existing prioritization techniques can still be improved, especially regarding the data used and the execution process of each approach.

    Periodic Application of Concurrent Error Detection in Processor Array Architectures

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection, and high parallelism, such arrays are well suited for VLSI/WSI implementations and for applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.
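
    As a toy sketch of the time-redundancy idea (expressed at the software level, not the processor-array architectures the paper targets): the same computation is executed twice on the same inputs and the results are compared, so a transient fault corrupting only one pass is detected, at the cost of extra execution time:

```python
def ced_execute(compute, data):
    """Time-redundant concurrent error detection: run the computation
    twice and flag any mismatch. A transient or intermittent fault that
    corrupts only one of the two passes produces differing results,
    which a single off-line diagnostic run could miss."""
    first = compute(data)
    second = compute(data)  # the redundant (time-shifted) pass
    if first != second:
        raise RuntimeError("CED mismatch: transient error detected")
    return first

def dot_product(pair):
    a, b = pair
    return sum(x * y for x, y in zip(a, b))

print(ced_execute(dot_product, ([1, 2, 3], [4, 5, 6])))  # 32
```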

    Test case prioritization technique based on string distance metrics

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in the software testing activity, with the goal of maximizing the early average percentage of fault detection (APFD). The approaches differ, and the process for each approach varies; furthermore, these processes are not well documented within a single TCP approach. Based on current studies, devising an approach with high coverage effectiveness (CE) and a high APFD rate remains a challenge in TCP. The string-based approach is known to use a single string-distance-based metric to differentiate test cases, which can improve the CE results. However, to differentiate the test cases precisely, the string distances require enhancement. Therefore, a TCP technique based on string distance metrics was developed to improve the CE and APFD rates. In this research, to differentiate the test cases precisely and counter the string distance problem, an enhanced string-distance-based metric combined with a string-weight-based metric was introduced. The metric was then executed under the designed process for the string-based approach for a complete evaluation. Experimental results showed that the enhanced string metric achieved the highest APFD, 98.56%, and the highest CE, 69.82%, on the Siemens dataset cstcas. The technique also yielded the highest APFD, 76.38%, in the Robotic Wheelchair System (RWS) case study. In conclusion, the enhanced TCP technique with the weight-based metric prioritized the test cases based on their occurrences, which helped to differentiate them precisely and improved the overall APFD and CE scores.
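
    A minimal sketch of the general string-distance idea, assuming plain Levenshtein edit distance as the (unweighted) metric and a greedy ordering that keeps dissimilar test cases far apart; the paper's enhanced weighted metric is not reproduced here:

```python
def levenshtein(a: str, b: str) -> int:
    """Dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def prioritize(tests: list[str]) -> list[str]:
    """Greedy ordering: start from the longest test string, then keep
    picking the test whose minimum distance to the already-ordered
    tests is largest, so dissimilar tests run early."""
    remaining = list(tests)
    ordered = [max(remaining, key=len)]
    remaining.remove(ordered[0])
    while remaining:
        nxt = max(remaining,
                  key=lambda t: min(levenshtein(t, s) for s in ordered))
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered

# Hypothetical test cases represented as event strings
print(prioritize(["openA;closeA", "openB;writeB;closeB", "openA;writeA"]))
```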

    A comparison of homonym meaning frequency estimates derived from movie and television subtitles, free association, and explicit ratings

    Most words are ambiguous, with interpretation dependent on context. Advancing theories of ambiguity resolution is important for any general theory of language processing, and for resolving inconsistencies in observed ambiguity effects across experimental tasks. Focusing on homonyms (words such as bank with unrelated meanings EDGE OF A RIVER vs. FINANCIAL INSTITUTION), the present work advances theories and methods for estimating the relative frequency of their meanings, a factor that shapes observed ambiguity effects. We develop a new method for estimating meaning frequency based on the meaning of a homonym evoked in lines of movie and television subtitles according to human raters. We also replicate and extend a measure of meaning frequency derived from the classification of free associates. We evaluate the internal consistency of these measures, compare them to published estimates based on explicit ratings of each meaning’s frequency, and compare each set of norms in predicting performance in lexical and semantic decision mega-studies. All measures have high internal consistency and show agreement, but each is also associated with unique variance, which may be explained by integrating cognitive theories of memory with the demands of different experimental methodologies. To derive frequency estimates, we collected manual classifications of 533 homonyms over 50,000 lines of subtitles, and of 357 homonyms across over 5000 homonym–associate pairs. This database, publicly available at www.blairarmstrong.net/homonymnorms/, constitutes a novel resource for computational cognitive modeling and computational linguistics, and we offer suggestions on good practices for its use in training and testing models on labeled data.
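
    A minimal sketch of the relative-frequency computation that such classification data supports (the field names and example labels are hypothetical; the published norms at www.blairarmstrong.net/homonymnorms/ use their own schema):

```python
from collections import Counter

def meaning_frequencies(classifications):
    """Estimate each homonym's relative meaning frequency as the
    proportion of rater-classified lines assigned to each meaning."""
    counts = {}
    for homonym, meaning in classifications:
        counts.setdefault(homonym, Counter())[meaning] += 1
    return {h: {m: n / sum(c.values()) for m, n in c.items()}
            for h, c in counts.items()}

# Hypothetical rater labels for subtitle lines containing "bank"
labels = [("bank", "FINANCIAL INSTITUTION"), ("bank", "FINANCIAL INSTITUTION"),
          ("bank", "EDGE OF A RIVER"), ("bank", "FINANCIAL INSTITUTION")]
print(meaning_frequencies(labels))
# {'bank': {'FINANCIAL INSTITUTION': 0.75, 'EDGE OF A RIVER': 0.25}}
```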

    Weighted string distance approach based on modified clustering technique for optimizing test case prioritization

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in the software testing activity, with the goal of maximizing the early average percentage of fault detection (APFD). The string-based approach has shown that applying a single string-distance-based metric to differentiate the test cases can improve the APFD and coverage rate (CR) results. However, to differentiate the test cases in regression testing precisely, the string approach still requires enhancement, as it lacks priority criteria. Therefore, a study on how to effectively cluster and prioritize test cases through a string-based approach was conducted. To counter the string distance problem, weighted string distances are introduced. A further enhancement was made by tuning the weighted string metric with K-Means clustering and prioritization using the Firefly Algorithm (FA), so that the TCP approach becomes more flexible in manipulating available information. The combination of the weighted string distances with clustering and prioritization is then executed under the designed process as a new weighted-string-distance-based approach for a complete evaluation. The experimental results show that all the weighted string distances obtained better results than their single-string-metric counterparts, with an average APFD of 95.73% and CR of 61.80% on the Siemens dataset cstcas. The proposed weighted string distance approach with clustering techniques for regression testing obtained better results and greater flexibility than the conventional string approach. In addition, the proposed approach passed statistical assessment, obtaining a p-value higher than 0.05 in the Shapiro-Wilk normality test and p-values lower than 0.05 in Tukey-Kramer post hoc tests. In conclusion, the proposed weighted string distance approach improves the overall APFD and CR scores and provides flexibility in the TCP approach for the regression testing environment.
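
    Both of these prioritization studies evaluate orderings by APFD; as a reference point, here is a sketch of the standard APFD computation from the TCP literature (the fault matrix below is hypothetical):

```python
def apfd(order, faults):
    """Average Percentage of Faults Detected:
        APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n)
    where TF_i is the 1-based position of the first test in `order`
    detecting fault i, n is the number of tests, m the number of faults.
    Assumes every fault is detected by at least one test."""
    n = len(order)
    fault_ids = set().union(*faults.values())
    m = len(fault_ids)
    first_pos = {f: next(i for i, t in enumerate(order, 1) if f in faults[t])
                 for f in fault_ids}
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

# Hypothetical fault matrix: which faults each test detects
faults = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f3"}}
print(f"{apfd(['t2', 't3', 't1'], faults):.3f}")  # 0.722
```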