
    Finding Regressions in Projects under Version Control Systems

    Version Control Systems (VCS) are frequently used to support the development of large-scale software projects. A typical VCS repository of a large project can contain various intertwined branches consisting of a large number of commits. If some kind of unwanted behaviour (e.g. a bug in the code) is found in the project, it is desirable to find the commit that introduced it. Such a commit is called a regression point. There are two main issues regarding regression points. First, detecting whether the project is correct after a certain commit can be very expensive, as it may involve large-scale testing and/or other forms of verification; it is thus desirable to minimise the number of such validity queries. Second, there can be several regression points preceding the actual commit: perhaps a bug was introduced in a certain commit, inadvertently fixed several commits later, and then reintroduced in a yet later commit. In order to fix the actual commit it is usually desirable to find the latest regression point. Currently used distributed VCSs contain methods for regression identification, see e.g. the git bisect tool. In this paper, we present a new regression identification algorithm that outperforms the current tools by decreasing the number of validity queries. At the same time, our algorithm tends to find the latest regression point, a feature that is missing in the state-of-the-art algorithms. The paper provides an experimental evaluation of the proposed algorithm and compares it to the state-of-the-art tool git bisect on a real data set.
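    The abstract does not detail the proposed algorithm, but the bisect-style baseline it is compared against can be illustrated with a minimal sketch: a binary search over a linear commit history that locates a regression point with O(log n) validity queries. The function names and the assumption of a linear (unbranched) history are hypothetical simplifications; real git bisect also handles branched histories and, as the abstract notes, need not return the latest regression point.

```python
def find_regression_point(commits, is_good):
    """Binary search a *linear* commit history for the first bad commit.

    `commits` is ordered oldest -> newest; commits[0] is assumed good and
    commits[-1] is assumed bad. `is_good(commit)` is the expensive validity
    query, e.g. running the test suite at that commit. Returns the earliest
    commit observed as bad, using O(log n) queries instead of O(n).
    """
    lo, hi = 0, len(commits) - 1          # invariant: commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid                      # regression lies after `mid`
        else:
            hi = mid                      # regression lies at or before `mid`
    return commits[hi]

# Toy history where the bug appears at commit 6:
history = list(range(10))
print(find_regression_point(history, lambda c: c < 6))   # -> 6
```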

    Regression test selection model: a comparison between ReTSE and pythia

    As software systems change and evolve over time, regression tests have to be run to validate these changes. Regression testing is an expensive but essential activity in software maintenance. The purpose of this paper is to compare a new regression test selection model called ReTSE with Pythia. The ReTSE model uses decomposition slicing in order to identify the relevant regression tests; decomposition slicing provides a technique that is capable of identifying the unchanged parts of a system. Pythia is a regression test selection technique based on textual differencing. Both techniques are compared using a Power program taken from Vokolos and Frankl's paper. The analysis of this comparison has shown promising results in reducing the number of tests to be run after changes are introduced.
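    The textual-differencing idea behind Pythia can be hedged into a minimal sketch (not the tool's actual implementation): re-run only the tests whose recorded coverage intersects the lines changed since the last run. The coverage map, file names, and test names below are hypothetical.

```python
def select_tests(coverage, changed_lines):
    """coverage: {test_name: set of (file, line) the test executes}
    changed_lines: set of (file, line) modified since the last test run.
    Returns the tests that touch at least one changed line."""
    return {test for test, lines in coverage.items() if lines & changed_lines}

coverage = {
    "test_power_basic": {("power.c", 12), ("power.c", 20)},
    "test_power_edge":  {("power.c", 40)},
}
changed = {("power.c", 20)}
print(select_tests(coverage, changed))   # -> {'test_power_basic'}
```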

    Optimizing unit test execution in large software programs using dependency analysis

    Tao is a system that optimizes the execution of unit tests in large software programs and reduces the programmer wait time from minutes to seconds. Tao is based on two key ideas. First, Tao focuses on efficiency, unlike past work that focused on avoiding false negatives: Tao implements simple and fast function-level dependency tracking that identifies the tests to run on a code change, and any false negatives missed by this dependency tracking are caught by running the entire test suite on a test server once the code change is committed. Second, to make it easy for programmers to adopt Tao, it incorporates the dependency information into the source code repository. This paper describes an early prototype of Tao and demonstrates that Tao can reduce unit test execution time in two large Python software projects by over 96% while incurring few false negatives. Funding: United States Defense Advanced Research Projects Agency (DARPA), Clean-slate design of Resilient, Adaptive, Secure Hosts (CRASH) program, contract #N66001-10-2-4089; National Science Foundation (NSF award CNS-1053143).
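    As a rough, hypothetical sketch of function-level dependency tracking in the spirit of Tao (not its actual code): record which functions each test executed on its last run, then on a change select only the tests that depend on the modified functions, relying on the full suite run on the server to catch anything missed. All names below are invented for illustration.

```python
from collections import defaultdict

class DependencyTracker:
    def __init__(self):
        # function name -> set of tests that executed it on their last run
        self.tests_by_function = defaultdict(set)

    def record_run(self, test, executed_functions):
        for fn in executed_functions:
            self.tests_by_function[fn].add(test)

    def tests_to_run(self, changed_functions):
        selected = set()
        for fn in changed_functions:
            selected |= self.tests_by_function[fn]
        return selected

tracker = DependencyTracker()
tracker.record_run("test_parse", {"parse", "tokenize"})
tracker.record_run("test_render", {"render"})
print(tracker.tests_to_run({"tokenize"}))   # -> {'test_parse'}
# False negatives are caught later by the full suite on the test server.
```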

    Using control flow analysis to improve the effectiveness of incremental mutation testing

    Incremental mutation testing attempts to make mutation testing less expensive by applying it incrementally to a system as it evolves. This approach fits current trends of iterative software development, the main idea being that by carrying out mutation analysis in frequent, bite-sized chunks focused on areas of the code which have changed, one can build confidence in the adequacy of a test suite incrementally. Yet this depends on how precisely one can characterise the effects of a change to a program. The original technique uses a naïve approach whereby changes are characterised only syntactically. In this paper we propose bolstering incremental mutation testing by using control flow analysis to identify the semantic repercussions which a syntactic change will have on a system. Our initial results, based on two case studies, demonstrate that numerous relevant mutants which would otherwise not have been considered under the naïve approach are now being generated. However, the cost of identifying these mutants is significant when compared to the naïve approach, although it remains advantageous when compared to traditional mutation testing so long as the increment is sufficiently small.
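    A minimal sketch, under assumptions not stated in the abstract, of how control flow reachability might widen the set of locations considered for mutation: besides the syntactically changed node, every node reachable from it in the control flow graph is treated as potentially affected and therefore eligible for mutant generation.

```python
def affected_nodes(cfg, changed):
    """cfg: {node: iterable of successor nodes}; changed: set of changed nodes.
    Returns all nodes reachable from the changed nodes (including them)."""
    worklist, affected = list(changed), set(changed)
    while worklist:
        node = worklist.pop()
        for succ in cfg.get(node, ()):
            if succ not in affected:
                affected.add(succ)
                worklist.append(succ)
    return affected

cfg = {"entry": ["check"], "check": ["then", "else"], "then": ["exit"], "else": ["exit"]}
print(sorted(affected_nodes(cfg, {"check"})))   # -> ['check', 'else', 'exit', 'then']
```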

    Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques

    Regression testing is a necessary activity in continuous integration (CI) since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning, patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about the test cases that need to be executed at each CI cycle. However, one challenge in using learning models lies in finding a suitable way of characterizing source code changes while preserving important information. In this paper, we empirically evaluate the effect of three feature extraction algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to empirically investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross-validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results from this experiment showed a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two models with respect to precision, whereas the CBF-fed model outperforms the rest with respect to recall.
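    A hedged sketch of one of the compared configurations — Bag of Words features feeding a learning model, evaluated with stratified cross-validation on precision and recall — using scikit-learn. The toy change descriptions and fault labels are invented for illustration; the paper's actual models, features, and CI data are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline

# Hypothetical data: textual descriptions of code changes and whether the
# change later caused a test failure (1) or not (0).
changes = [
    "fix null pointer in parser", "refactor rename variable",
    "add logging statement", "change loop bound in scheduler",
    "update comment only", "modify concurrency lock handling",
    "bump dependency version", "rewrite cache eviction logic",
]
faulty = [1, 0, 0, 1, 0, 1, 0, 1]

# Bag of Words features feeding a simple classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_validate(
    model, changes, faulty,
    cv=StratifiedKFold(n_splits=2, shuffle=True, random_state=0),
    scoring=["precision", "recall"],
)
print(scores["test_precision"].mean(), scores["test_recall"].mean())
```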

    A Mapping Study of scientific merit of papers, which subject are web applications test techniques, considering their validity threats

    Progress in software engineering requires (1) more empirical studies of quality, (2) increased focus on synthesizing evidence, (3) more theories to be built and tested, and (4) recognition that the validity of an experiment is directly related to the level of confidence in the process of experimental investigation. This paper presents the results of a qualitative and quantitative classification of the threats to the validity of software engineering experiments, covering a total of 92 articles published in the period 2001-2015 that deal with software testing of Web applications. Our results show that 29.4% of the analyzed articles do not mention any threats to validity, 44.2% do so briefly, and 14% do so judiciously; this leaves an open question: do these studies have scientific value?

    A review of slicing techniques in software engineering

    A program slice is the part of a program that may affect the values computed at some point of interest during its execution. Such a point is known as the slicing criterion, and it is generally identified by a location in the program coupled with a subset of the program's variables. The process by which program slices are computed is called program slicing. Weiser gave the original definition of a program slice in 1979. Since this first definition, many ideas related to program slicing have been formulated, along with numerous techniques to compute slices; meanwhile, the distinction between static and dynamic slices was also made. Program slicing is now among the most useful techniques for extracting the particular elements of a program which are related to a particular computation. A large number of variants of program slicing have been analyzed, along with algorithms to compute the slices. Model-based slicing splits large software architectures into smaller sub-models during the early stages of the SDLC. Software testing is regarded as an activity to evaluate the functionality and features of a system; it verifies whether the system meets its requirements. A common practice now is to extract sub-models out of large models based upon a slicing criterion, and the process of model-based slicing is used to extract the desired part of a model. This survey focuses on slicing techniques across numerous programming paradigms and domains, such as web applications, object-oriented, and component-based software. Owing to the efforts of various researchers, the technique has been extended to numerous other areas, including program debugging, program integration and analysis, software testing and maintenance, reengineering, and reverse engineering. The survey also describes the role of model-based slicing and the various techniques used to compute slices.
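    A minimal sketch of backward static slicing over straight-line code (ignoring control flow), assuming each statement is given with the variables it defines and uses; the slicing criterion is a line number and a set of variables of interest at that line. The program representation is hypothetical and far simpler than the dependence-graph techniques the survey covers.

```python
def backward_slice(statements, criterion_line, variables):
    """statements: list of (line, defined_vars, used_vars), in program order.
    Returns the lines belonging to the backward slice for the criterion."""
    relevant = set(variables)
    slice_lines = []
    for line, defs, uses in reversed(statements):
        if line > criterion_line:
            continue
        if line == criterion_line or defs & relevant:
            slice_lines.append(line)
            relevant = (relevant - defs) | uses   # propagate data dependences backwards
    return sorted(slice_lines)

program = [
    (1, {"a"}, set()),   # a = input()
    (2, {"b"}, set()),   # b = input()
    (3, {"c"}, {"a"}),   # c = a * 2
    (4, {"d"}, {"b"}),   # d = b + 1
    (5, set(), {"c"}),   # print(c)  <- slicing criterion on variable c
]
print(backward_slice(program, 5, {"c"}))   # -> [1, 3, 5]
```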