
    Simulation Techniques for Determining Numbers of Programmers in the Process of Software Testing

    One of the persistent problems in the software engineering body of knowledge is an inappropriate number of programmers working through the software development life cycle, particularly in the coding, testing, and maintenance processes. If the number is too large, the cost of software development increases; if the team is too small, other problems arise, especially during deployment. This article therefore presents simulation techniques that help a development team determine the appropriate number of programmers, specifically for the software testing process, including the percent error that can occur during maintenance. First, the relationships among programmers, code, and testing time are constructed and studied. Second, simulation techniques are applied to determine the suitable number of programmers across twenty experiments. Last, the percent errors from seeded bugs are estimated over 50 experiments. The contribution of this paper is not only managing all phases of the software development life cycle but also improving confidence in testing accuracy through the measured percent error.
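
    The paper does not publish its model constants, so the idea can only be illustrated with a rough Monte Carlo sketch. The Python sketch below assumes a fixed-length testing phase, an independent per-programmer daily detection probability, and 100 seeded bugs; every parameter (BUGS_SEEDED, DETECT_RATE, DAYS) and the detection model itself are hypothetical stand-ins, not values or formulas from the study.

    ```python
    import random

    # Hypothetical model constants -- not taken from the paper.
    BUGS_SEEDED = 100   # seeded bugs used to estimate the residual percent error
    DETECT_RATE = 0.02  # chance one programmer finds a given bug on a given day
    DAYS = 30           # length of the testing phase

    def simulate(team_size: int, trials: int = 50) -> tuple[float, float]:
        """Return (mean bugs found, mean percent error) over `trials` runs."""
        found_totals, errors = [], []
        # A bug escapes only if every programmer misses it on every day.
        p_miss = (1 - DETECT_RATE) ** (team_size * DAYS)
        for _ in range(trials):
            found = sum(1 for _ in range(BUGS_SEEDED) if random.random() > p_miss)
            found_totals.append(found)
            errors.append(100 * (BUGS_SEEDED - found) / BUGS_SEEDED)
        return sum(found_totals) / trials, sum(errors) / trials

    for n in (1, 2, 4, 8):
        bugs, pct_err = simulate(n)
        print(f"{n} programmers: ~{bugs:.0f}/{BUGS_SEEDED} seeded bugs found, "
              f"{pct_err:.1f}% residual error")
    ```

    Sweeping the team size this way exposes the trade-off the abstract describes: larger teams drive the residual percent error down, but at a higher development cost.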

    Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques

    Regression testing is a necessary activity in continuous integration (CI) since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning (ML), patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about test cases that need to be executed at each CI cycle. However, one challenge in using learning models lies in finding a suitable way for characterizing source code changes and preserving important information. In this paper, we empirically evaluate the effect of three feature extraction (FE) algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to empirically investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross-validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results from this experiment showed a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two models with respect to precision, whereas a CBF-fed model outperforms the rest with respect to recall.
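
    As a rough illustration of the experimental setup, a Bag of Words representation of change descriptions can be fed to a classifier under stratified cross-validation and scored with precision and recall. The scikit-learn sketch below is a minimal stand-in: the `changes` and `faulty` arrays are invented toy data rather than the CI history the study mines, and logistic regression is merely one plausible model choice.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_validate

    # Toy stand-ins for source-code change descriptions and fault labels.
    changes = [
        "fix null check in parser", "refactor logging module",
        "update parser grammar rules", "add cache eviction policy",
        "fix race condition in scheduler", "rename internal variables",
        "patch buffer overflow in decoder", "tidy build scripts",
    ]
    faulty = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = change later linked to a failing test

    # Bag of Words is one of the three feature extraction techniques compared.
    X = CountVectorizer().fit_transform(changes)

    cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
    scores = cross_validate(LogisticRegression(), X, faulty,
                            cv=cv, scoring=("precision", "recall"))
    print("precision:", scores["test_precision"].mean())
    print("recall:   ", scores["test_recall"].mean())
    ```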

    A new concept of effective regression test generation in a C++ specific environment

    During regression testing, test cases from an existing test suite are run against a modified version of a program to ensure that the modifications do not cause side effects that would compromise the integrity and consistency of the system. Since the ultimate goal of a regression test set is to test all modifications effectively and to reveal errors at the earliest possible stage, maintaining a relevant test set of effective test cases is of utmost importance. In this paper we present an efficient, C++-specific framework that automatically manages the regression test suite. Our two main contributions are a new interpretation of reliable test cases and a dynamic forward impact analyzer that eases the transformation of existing tests to meet that definition of reliability. Using this approach, we complement the test set with test cases that pass through a modification and have an impact on at least one output. Our approach is designed to be applicable to large-scale applications.
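
    A minimal sketch of the reliability criterion described above, assuming the per-test coverage and output-impact data have already been computed; in the paper this information comes from C++-specific dynamic forward impact analysis, which is not reproduced here, and every test and function name below is hypothetical.

    ```python
    # Hypothetical inputs: which functions each test executes, and whether the
    # covered function influenced at least one of the test's outputs.
    coverage = {
        "test_login":  {"auth.check", "db.query"},
        "test_report": {"report.render", "db.query"},
        "test_cache":  {"cache.get"},
    }
    impacts_output = {
        ("test_login", "db.query"): True,
        ("test_report", "db.query"): False,  # passes through, no output impact
        ("test_cache", "cache.get"): True,
    }

    def reliable_tests(modified: set) -> list:
        """Keep a test only if it passes through a modification AND that
        modification has an impact on at least one of the test's outputs."""
        selected = []
        for test, funcs in coverage.items():
            hits = funcs & modified
            if any(impacts_output.get((test, f), False) for f in hits):
                selected.append(test)
        return selected

    print(reliable_tests({"db.query"}))  # -> ['test_login']
    ```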

    Evolving legacy system features into fine-grained components


    A systematic review on regression test selection techniques

    Regression testing verifies that previously functioning software still functions after a change. With the goal of finding a basis for further research in a joint industry-academia research project, we conducted a systematic review of empirical evaluations of regression test selection techniques. We identified 27 papers reporting 36 empirical studies: 21 experiments and 15 case studies. In total, 28 techniques for regression test selection are evaluated. We present a qualitative analysis of the findings, an overview of techniques for regression test selection, and the related empirical evidence. No technique was found to be clearly superior, since the results depend on many varying factors. We identified a need for empirical studies in which concepts are evaluated rather than small variations in technical implementations.

    Usage of Tests in an Open-Source Community: A Case Study with Pharo Developers

    During development, tests are known to ensure the good behavior of applications and to improve their quality. We studied developer testing behavior inside the Pharo community with the aim of improving it. In this paper, we take inspiration from a paper in the literature to deepen our understanding of testing habits in our open-source community. We report the results of a field study on how often developers run tests in their daily practice, whether they practice test selection, and why. The results are strengthened by interviews with the developers involved in the study. The main findings are that developers run tests after every modification of their code; most of the time they practice test selection (instead of launching an entire test suite), although they are not accurate in their selection; they change their selection depending on the duration of the tests; and, contrary to expectation, test selection is not influenced by the size of the test suite.

    Prioritization of Regression Tests using Singular Value Decomposition with Empirical Change Records

    During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases. We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50% of test runs and that the highest-priority suggested test found an additional fault 60% of the time.
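
    A small NumPy sketch of the pipeline, under loud assumptions: the change-history matrix `H`, the test-to-file coverage matrix `T`, and the file names are all invented, and building a rank-k file-affinity matrix from the left singular vectors is one plausible reading of how the SVD clusters are combined with test information, not the paper's exact formulation.

    ```python
    import numpy as np

    # Hypothetical change history: rows = files, columns = past change records,
    # 1 if the file was touched in that change.
    files = ["parser.c", "lexer.c", "ui.c", "net.c"]
    H = np.array([[1, 1, 0, 1],
                  [1, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float)

    # A low-rank SVD exposes groups of files that historically change together.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    k = 2
    affinity = U[:, :k] @ np.diag(s[:k]) @ U[:, :k].T  # file-to-file co-change

    # Hypothetical test-to-file coverage: rows = tests, columns = files.
    tests = ["t_parse", "t_lex", "t_ui", "t_net"]
    T = np.eye(4)

    # A new modification touching parser.c, expressed as a file vector.
    change = np.array([1.0, 0.0, 0.0, 0.0])

    # Tests covering files that co-change with the modified file rank highest.
    priority = T @ affinity @ change
    for test, score in sorted(zip(tests, priority), key=lambda p: -p[1]):
        print(f"{test}: {score:.2f}")
    ```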

    What are the Testing Habits of Developers?: A Case Study in a Large IT Company

    Tests are considered important for ensuring the good behavior of applications and improving their quality. But development in companies also involves tight schedules, old habits, less-trained developers, and practical difficulties such as creating a test database. As a result, good testing practices are not always applied as often as one might wish. With a major IT company, we are engaged in a project to understand developers' testing behavior and whether it can be improved. Some ideas are to promote testing by reducing test session length, or by automatically running tests behind the scenes and sending warnings to developers about the failing ones. Reports on developers' testing habits in the literature focus on highly distributed open-source projects or involve student programmers; as such, they might not apply to our industrial, closed-source context. In this paper, we take inspiration from the experiments of two papers in the literature to deepen our understanding of the industrial environment. We report the results of a field study on how often developers run tests in their daily practice, whether they practice test selection, and why. The results are reinforced by interviews with the developers involved in the study. The main findings are that test practice is in better shape than we expected; developers select tests "ruthlessly" (instead of launching an entire test suite), although they are not accurate in their selection; and, contrary to expectation, test selection is not influenced by the size of the test suite or the duration of the tests.

    A Methodology to Support the Maintenance of Object-Oriented Systems Using Impact Analysis.

    Object-Oriented (OO) systems are difficult to understand due to the complex nature of the relationships that object orientation supports. Inheritance, polymorphism, encapsulation, information hiding, aggregation, and association combine to make maintenance of OO systems difficult. Because of these characteristics, maintenance activities on OO systems often have unexpected or unseen effects that can ripple through system components, complicating maintenance and testing. The ability to trace the effects of maintenance gives the maintainer knowledge that assists in debugging and testing modified and affected components. In this research, we show that the architecture of an OO system provides an effective framework for determining the impact of system changes. We developed the Comparative Software Maintenance (CSM) methodology to support the maintenance of OO systems. Through this methodology, we model relationships and structures, analyze the models to determine the components that change as a result of maintenance, and perform impact analysis to determine the components that are candidates for re-testing. The methodology includes a new data model, called Extended Low-Level Software Architecture (ELLSA), that facilitates impact analysis. CSM locates potential side effects, ripple effects, and other effects of maintenance on class structures, methods, and objects. The comprehensive architecture model enables CSM to perform either predictive, pre-modification impact analysis or post-modification impact analysis. The improved impact analysis process in the methodology determines the impact of changes down to the component level, and we apply its results to derive component-level testing requirements. CSM enhances program understanding through the use of ELLSA and provides assistance in capturing the complex dependencies found in object-oriented code. The methodology is implemented in JFlex; the automation JFlex provides makes the application of CSM feasible.
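
    A minimal sketch of the impact-analysis step, assuming the architecture model (ELLSA in the paper) has already been reduced to plain "X depends on Y" edges; the class names are hypothetical, and a breadth-first walk over reverse dependencies stands in for CSM's richer analysis of side and ripple effects.

    ```python
    from collections import deque

    # Hypothetical component dependencies: key depends on each value.
    depends_on = {
        "OrderView":       {"OrderController"},
        "OrderController": {"OrderService"},
        "OrderService":    {"Repository"},
        "AuditLog":        {"Repository"},
    }

    def impacted(changed: set) -> set:
        """Walk reverse dependencies to find components that are candidates
        for re-testing after the given components change (ripple effects)."""
        affected_by = {}  # invert the edges: who is affected when Y changes?
        for comp, deps in depends_on.items():
            for d in deps:
                affected_by.setdefault(d, set()).add(comp)
        result, queue = set(changed), deque(changed)
        while queue:
            for comp in affected_by.get(queue.popleft(), ()):
                if comp not in result:
                    result.add(comp)
                    queue.append(comp)
        return result - changed

    # Changing Repository ripples up to all four dependent components.
    print(impacted({"Repository"}))
    ```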