438,663 research outputs found

    Regression Test Selection by Exclusion

    This thesis addresses research in the area of regression testing. Software systems change and evolve over time, and each time a system is changed, regression tests have to be run to validate those changes. An important issue in regression testing is how to reuse the existing test cases of the original program for the modified program. One technique for tackling this issue is regression test selection. The aim of this research is to significantly reduce the number of test cases that need to be run after changes have been made. Specifically, this thesis focuses on developing a model for regression test selection using the decomposition slicing technique. Decomposition slicing provides a technique that is capable of identifying the unchanged parts of a system. A model of regression test selection based on decomposition slicing and the exclusion of test cases was developed in this thesis. The model is called Regression Test Selection by Exclusion (ReTSE) and has four main phases: Program Analysis, Comparison, Exclusion and Optimisation. The validity of the ReTSE model is explored through the application of a number of case studies. The case studies tackle all types of modification, namely changed, deleted and added statements, and cover both single modifications and combinations of modification types. The application of the proposed model has shown that significant reductions in the number of test cases can be achieved. The evaluation of the model based on an existing framework and a comparison with another model has also shown promising results. The case studies have been limited to relatively small programs, and the next step is to apply the model to larger systems with more complex changes to ascertain whether it scales up. While some parts of the model have been automated, tools will be required for the rest when carrying out the larger case studies.
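    As a rough, hedged illustration of the exclusion idea (the thesis itself works on decomposition slices rather than raw coverage, so this is a coverage-based stand-in with made-up test names and statement ids): every test whose covered statements lie entirely in the unchanged part of the program is excluded, and only the rest need to be rerun.

        def exclude_unaffected_tests(test_coverage, changed_stmts):
            """Keep tests that touch a changed statement; exclude the rest."""
            kept, excluded = [], []
            for test, covered in test_coverage.items():
                if set(covered) & set(changed_stmts):
                    kept.append(test)       # must be rerun after the change
                else:
                    excluded.append(test)   # exercises only unchanged code
            return kept, excluded

        # hypothetical coverage map: test id -> statement ids it executes
        coverage = {"t1": {1, 2, 3}, "t2": {4, 5}, "t3": {2, 6}}
        print(exclude_unaffected_tests(coverage, changed_stmts={2}))   # (['t1', 't3'], ['t2'])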

    Regression test selection model: a comparison between ReTSE and Pythia

    As software systems change and evolve over time, regression tests have to be run to validate these changes. Regression testing is an expensive but essential activity in software maintenance. The purpose of this paper is to compare a new regression test selection model called ReTSE with Pythia. The ReTSE model uses decomposition slicing in order to identify the relevant regression tests. Decomposition slicing provides a technique that is capable of identifying the unchanged parts of a system. Pythia is a regression test selection technique based on textual differencing. Both techniques are compared using a Power program taken from Vokolos and Frankl's paper. The analysis of this comparison has shown promising results in reducing the number of tests to be run after changes are introduced.
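    For a flavour of textual differencing, the idea underlying Pythia, here is a minimal sketch (not Pythia's actual implementation) that reports which line numbers changed between two versions of a source file; the example strings are made up.

        import difflib

        def changed_lines(old_src: str, new_src: str) -> set:
            """Return line numbers (1-based, in the new version) that differ from the old version."""
            matcher = difflib.SequenceMatcher(None, old_src.splitlines(), new_src.splitlines())
            changed = set()
            for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
                if tag != "equal":
                    changed.update(range(j1 + 1, j2 + 1))
            return changed

        # tests whose covered lines intersect this set would then be re-run
        print(changed_lines("a = 1\nb = 2\n", "a = 1\nb = 3\n"))   # {2}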

    Causally Regularized Learning with Agnostic Data Selection Bias

    Most previous machine learning algorithms are based on the i.i.d. hypothesis. However, this ideal assumption is often violated in real applications, where selection bias may arise between the training and testing processes. Moreover, in many scenarios the testing data are not even available during training, which makes traditional methods such as transfer learning infeasible because they require prior knowledge of the test distribution. Therefore, how to address agnostic selection bias for robust model learning is of paramount importance for both academic research and real applications. In this paper, under the assumption that causal relationships among variables are robust across domains, we incorporate causal techniques into predictive modeling and propose a novel Causally Regularized Logistic Regression (CRLR) algorithm that jointly optimizes global confounder balancing and weighted logistic regression. Global confounder balancing helps to identify causal features whose causal effects on the outcome are stable across domains; performing logistic regression on those causal features then yields a predictive model that is robust against the agnostic bias. To validate the effectiveness of our CRLR algorithm, we conduct comprehensive experiments on both synthetic and real-world datasets. Experimental results clearly demonstrate that our CRLR algorithm outperforms state-of-the-art methods, and the interpretability of our method can be fully depicted by feature visualization. (Comment: oral paper at the 2018 ACM Multimedia Conference, MM'18.)
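    A drastically simplified, illustrative sketch of the two ingredients named in the abstract (not the paper's joint CRLR optimization): sample weights are fitted so that the remaining covariates are balanced across a single binary "treatment" feature, and a weighted logistic regression is then trained with those weights. The synthetic data, the single-feature restriction, and the L-BFGS weight fitting are all assumptions made for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.linear_model import LogisticRegression

        def balance_weights(X, j):
            """Learn positive sample weights that balance the remaining features
            between samples with X[:, j] == 1 and X[:, j] == 0."""
            treated = X[:, j] == 1
            Z = np.delete(X, j, axis=1)
            n = len(X)

            def loss(log_w):
                w = np.exp(log_w)                       # keep weights positive
                wt, wc = w[treated], w[~treated]
                mt = (Z[treated] * wt[:, None]).sum(0) / wt.sum()
                mc = (Z[~treated] * wc[:, None]).sum(0) / wc.sum()
                return ((mt - mc) ** 2).sum() + 1e-3 * (w ** 2).sum()  # balance term + penalty on weights

            res = minimize(loss, np.zeros(n), method="L-BFGS-B")
            return np.exp(res.x)

        # made-up binary covariates and outcome
        rng = np.random.default_rng(0)
        X = (rng.random((200, 6)) > 0.5).astype(float)
        y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

        w = balance_weights(X, j=0)
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
        print(clf.coef_)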

    LOCAL VOLUME TARIFF FOR TEAK (Tectona grandis) IN THE SEDYO RUKUN COMMUNITY FOREST, GUNUNGKIDUL REGENCY

    Teak (Tectona grandis) forest management in community forests (HKm) in Gunungkidul is one source of wood production. However, a wood volume estimator for production planning is not yet available, so it is necessary to prepare one to facilitate production planning. The goal of this research is to create a regression equation relating circumference to volume and to compile a local volume tariff (TVL) for Teak (Tectona grandis) trees in the Sedyo Rukun Community Forest (HKm) area, Banyusoco Village, Playen, Gunungkidul, Special Region of Yogyakarta. The sampling technique used was purposive sampling, with the volumes of felled trees as the sample. Tree volume was calculated using Smalian's formula. The TVL was compiled in several stages: a data normality test, regression equation modelling, and a validation test. The best model was selected according to the coefficient of determination (R²), standard error of estimation (Se), F test, chi-square (χ²), SA, SR, bias, and RMSE. Based on the research results, the best regression model is the power model, with an R² of 0.865, Se of 0.390, SA of -2.97, SR of 0.16%, bias of 1.76%, RMSE of 2.68%, and chi-square of 3.03.
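    A small sketch of the two computational steps mentioned above, under the common assumptions that Smalian's formula averages the two end cross-sections of a log and that the power model has the form V = a·C^b, fitted here by ordinary least squares on log-transformed data; the diameters, lengths and circumference/volume arrays are made-up examples, not the study's measurements.

        import numpy as np

        def smalian_volume(d_base_cm, d_top_cm, length_m):
            """Smalian's formula: volume = mean of the two end cross-sectional areas * length (m^3)."""
            a_base = np.pi / 4 * (d_base_cm / 100.0) ** 2
            a_top = np.pi / 4 * (d_top_cm / 100.0) ** 2
            return (a_base + a_top) / 2.0 * length_m

        def fit_power_model(circumference_cm, volume_m3):
            """Fit V = a * C^b by linear least squares on the log-log scale."""
            b, log_a = np.polyfit(np.log(circumference_cm), np.log(volume_m3), 1)
            return np.exp(log_a), b

        # made-up example data
        circumference = np.array([60.0, 85.0, 110.0, 140.0])
        volume = np.array([0.08, 0.21, 0.45, 0.90])
        a, b = fit_power_model(circumference, volume)
        print(smalian_volume(25.0, 20.0, 2.0), a, b)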

    A novel approach for code smell detection: an empirical study

    Code smell detection helps to improve the understandability and maintainability of software while reducing the chances of system failure. In this study, six machine learning algorithms were applied to predict code smells. For this purpose, four code smell datasets (God-class, Data-class, Feature-envy, and Long-method), generated from 74 open-source systems, are considered. To evaluate the performance of the machine learning algorithms on these code smell datasets, 10-fold cross-validation is applied, which evaluates each model by repeatedly partitioning the original dataset into a training set to train the model and a test set to evaluate it. Two feature selection techniques, chi-squared and wrapper-based, are applied to improve the accuracy of the six machine learning methods by choosing the top metrics in each dataset, and the results obtained with the two techniques are compared. To improve the accuracy of these algorithms further, grid search-based parameter optimization is applied. In this study, 100% accuracy was obtained for the Long-method dataset by using the Logistic Regression algorithm with all features, while the worst performance of 95.20% was obtained by the Naive Bayes algorithm on the Long-method dataset with the chi-squared feature selection technique.
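    A compact sketch of this kind of pipeline (chi-squared feature selection, logistic regression, grid search over 10-fold cross-validation) using scikit-learn; the synthetic data stands in for a code-smell metrics dataset such as Long-method, and the grid values and dataset sizes are illustrative assumptions rather than the study's setup.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import MinMaxScaler

        # stand-in for a code-smell dataset: rows are methods/classes,
        # columns are code metrics, label 1 = smelly
        X, y = make_classification(n_samples=400, n_features=20, random_state=0)

        pipe = Pipeline([
            ("scale", MinMaxScaler()),          # chi2 requires non-negative features
            ("select", SelectKBest(chi2)),      # chi-squared feature selection
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        grid = GridSearchCV(
            pipe,
            {"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]},  # grid-search parameter optimization
            cv=10,                              # 10-fold cross-validation
            scoring="accuracy",
        )
        grid.fit(X, y)
        print(grid.best_params_, grid.best_score_)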

    Improving Software Quality by Synergizing Effective Code Inspection and Regression Testing

    Software quality assurance is an essential practice in software development and maintenance. Evolving software systems consistently and safely is challenging: all changes to a system must be comprehensively tested and inspected to gain confidence that the modified system behaves as intended. To detect software defects, developers often conduct quality assurance activities, such as regression testing and code review, after implementing or changing required functionality. They commonly evaluate a program using two complementary techniques: dynamic program analysis and static program analysis. Using an automated testing framework, developers typically discover program faults by observing program executions with test cases that encode required program behavior and expose defects. Unlike dynamic analysis, static analysis lets developers check program correctness without executing the program: they understand source code through manual inspection or identify potential program faults with an automated static analysis tool. By removing the boundaries between static and dynamic analysis, the complementary strengths and weaknesses of both techniques can be combined into unified analyses. For example, dynamic analysis is efficient and precise, but it relies on a selection of test cases with no guarantee of covering all possible program executions; static analysis is conservative and sound, but it produces less precise results because it approximates all possible behaviors that may occur at run time. Many dynamic and static techniques have been proposed, but testing a program involves substantial cost and risk, and inspecting code changes is tedious and error-prone. Our research addresses two fundamental problems in dynamic and static techniques. (1) To evaluate a program, developers typically implement test cases and reuse them; as they develop more test cases for verifying new implementations, the execution cost of the test suite grows accordingly. After every modification, they conduct regression testing to check that the program still executes without newly introduced faults as it evolves. To reduce the time required for regression testing, developers should select an appropriate subset of the test suite that is guaranteed to reveal the same faults as running the entire suite. Such regression test selection techniques remain challenging, since they also carry substantial costs and risks and may discard test cases that could have detected faults. (2) As a less formal and more lightweight method than running a test suite, developers often conduct code reviews with tool support; however, understanding the context and the changes is the key challenge of code review. While reviewing a code change that addresses a single issue might not be difficult, it is extremely difficult to understand complex changes that combine multiple issues such as bug fixes, refactorings, and new feature additions. Developers need to untangle intermingled changes addressing multiple development issues and find which region of the changed code deals with a particular issue. Although such changes do not cause trouble during implementation, investigating them is time-consuming and error-prone because the intertwined changes are only loosely related, making code reviews difficult. To address the limitations outlined above, our research makes the following contributions.
    First, we present a model-based approach to efficiently build a regression test suite for systems modeled as Extended Finite State Machines (EFSMs). Changes to the system are made at the transition level by adding, deleting, or replacing transitions. Tests are sequences of input and expected output messages with concrete parameter values over the supported data types. We introduce fully-observable tests, whose descriptions contain all the information about the transitions they execute. An invariant characterizing fully-observable tests is formulated such that a test is fully observable whenever the invariant is a satisfiable formula. Incremental procedures are developed to efficiently evaluate the invariant and to select, from a test suite, tests that are guaranteed to exercise a given change when they run on the modified EFSM. Tests rendered unusable by a change are also identified. Overlaps among the test descriptions are exploited to extend the approach so that multiple tests are selected and discarded simultaneously, alleviating test selection costs. Although the regression test selection problem is NP-hard [78], the experimental results show that the cost of our test selection procedure is still acceptable and economical. Second, to support code review and regression testing, we present a technique called ChgCutter. It helps developers understand and validate composite changes as follows. It interactively decomposes complex, composite changes into atomic changes, builds related change subsets using program dependence relationships without syntactic violations, and safely selects only the related test cases from the test suite to reduce the time needed for regression testing. When a code reviewer selects a change region from the original and changed versions of a program, ChgCutter automatically identifies similar change regions based on dependence analysis and a tree-based code search technique. By automatically applying a change to the identified regions in the original program version, ChgCutter generates an intermediate program version that is syntactically correct. Given a generated program version, it leverages a test selection technique to select and run the subset of the test suite affected by the change that was automatically separated from the mixed changes. Through this iterative change selection process, each separated change yields its own program version, so ChgCutter helps code reviewers inspect large, complex changes by letting them focus on decomposed change subsets. In addition to assisting with the understanding of a substantial change, the regression test selection technique effectively discovers defects by validating each program version that contains a separated change subset. In the evaluation, ChgCutter analyzes 28 composite changes in four open source projects. It identifies related change subsets with 95.7% accuracy, and it selects test cases affected by these changes with 89.0% accuracy. Our results show that ChgCutter can help developers effectively inspect changes and validate modified applications during development.
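    A set-based simplification of the selection and discarding steps described above (the approach itself reasons about the satisfiability of an invariant rather than plain set intersection); the test names and EFSM transition identifiers below are made up.

        from typing import Dict, List, Set, Tuple

        def select_and_discard(tests: Dict[str, List[str]],
                               changed: Set[str],
                               deleted: Set[str]) -> Tuple[List[str], List[str]]:
            """Given fully-observable tests (test id -> sequence of EFSM transition ids),
            select tests that exercise a changed transition and flag tests rendered
            unusable because they traverse a deleted transition."""
            selected, unusable = [], []
            for name, transitions in tests.items():
                covered = set(transitions)
                if covered & deleted:
                    unusable.append(name)   # cannot run on the modified EFSM
                elif covered & changed:
                    selected.append(name)   # guaranteed to exercise at least one change
            return selected, unusable

        # made-up example
        tests = {"t1": ["s0->s1", "s1->s2"], "t2": ["s0->s3"], "t3": ["s3->s4"]}
        print(select_and_discard(tests, changed={"s1->s2"}, deleted={"s0->s3"}))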

    Investigation of similarity-based test case selection for specification-based regression testing.

    During software maintenance, several modifications can be performed in a specification model in order to satisfy new requirements. Performing regression testing on modified software is known to be a costly and laborious task. Test case selection, test case prioritization, and test suite minimisation, among other methods, aim to reduce these costs by selecting or prioritizing a subset of test cases so that less time, effort and thus money are involved in performing regression testing. In this doctoral research, we explore the general problem of automatically selecting test cases in a model-based testing (MBT) process where specification models have been modified. Our technique, named Similarity Approach for Regression Testing (SART), selects the subset of test cases that traverse modified regions of a software system's specification model. The strategy relies on similarity-based test case selection, where similarities between test cases from different software versions are analysed to identify modified elements in a model. In addition, we propose an evaluation approach named Search Based Model Generation for Technology Evaluation (SBMTE), which is based on stochastic model generation and search-based techniques to generate large samples of realistic models, allowing experiments with model-based techniques. 
    Based on SBMTE, researchers are able to develop model generator tools that create a space of models based on statistics from real industrial models, and then generate samples from that space in order to perform experiments. Here we developed a generator that creates instances of Annotated Labelled Transition Systems (ALTS) to be used as input for our MBT process, and then performed an experiment with SART. In this experiment, we concluded that SART's test suite size reduction is robust: it selects a subset with on average 92% fewer test cases, while ensuring coverage of all model modifications and revealing defects linked to those modifications. Both SART and our experiment are executable through the LTS-BT tool, enabling researchers to use our selection strategy and reproduce our experiment.
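    The abstract does not give SART's actual similarity function, so the sketch below uses a hypothetical Jaccard similarity over the model elements each test traverses: new-version tests whose best match among old-version tests falls below a threshold are taken to traverse modified regions and are selected. The threshold, test names and step labels are illustrative assumptions.

        from typing import Dict, Iterable, List

        def jaccard(a: Iterable[str], b: Iterable[str]) -> float:
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        def select_modified_region_tests(old_tests: Dict[str, List[str]],
                                         new_tests: Dict[str, List[str]],
                                         threshold: float = 0.8) -> List[str]:
            """Select new-version tests that are dissimilar to every old-version test."""
            selected = []
            for name, steps in new_tests.items():
                best = max((jaccard(steps, old) for old in old_tests.values()), default=0.0)
                if best < threshold:
                    selected.append(name)   # likely traverses a modified model region
            return selected

        # made-up example: transition labels traversed by each test
        old = {"t1": ["login", "browse"], "t2": ["login", "checkout"]}
        new = {"t1": ["login", "browse"], "t3": ["login", "checkout", "apply_discount"]}
        print(select_modified_region_tests(old, new))   # ['t3']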

    DETERMINANTS OF EARNINGS QUALITY IN COMPANIES LISTED ON THE INDONESIA STOCK EXCHANGE

    This study aims to determine empirically the effect of capital structure, firm size, profit growth and liquidity, with profitability and firm age as control variables, on earnings quality in manufacturing-sector companies listed on the Indonesia Stock Exchange for the 2017-2019 period. This is quantitative research using secondary data in the form of company annual reports, with a sample of 99 manufacturing companies. The data were analysed using a regression model selection test, classical assumption tests, multiple linear regression, and partial hypothesis testing in STATA version 16. Based on the results of the data analysis, it can be concluded that (1) capital structure, earnings growth and profitability have a negative effect on earnings quality, (2) firm size has a positive effect on earnings quality, and (3) liquidity and firm age have no effect on earnings quality.
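    A minimal sketch of the multiple linear regression step, using pooled OLS in statsmodels as a stand-in for the study's STATA workflow; the panel-model selection and classical assumption tests are omitted, and the generated data and variable names are purely illustrative.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # made-up firm-year observations with the study's variables
        rng = np.random.default_rng(0)
        n = 297   # e.g. 99 firms x 3 years
        df = pd.DataFrame({
            "earnings_quality": rng.normal(size=n),
            "capital_structure": rng.normal(size=n),
            "firm_size": rng.normal(size=n),
            "profit_growth": rng.normal(size=n),
            "liquidity": rng.normal(size=n),
            "profitability": rng.normal(size=n),   # control variable
            "firm_age": rng.normal(size=n),        # control variable
        })

        X = sm.add_constant(df.drop(columns="earnings_quality"))
        model = sm.OLS(df["earnings_quality"], X).fit()
        print(model.summary())   # partial (t) tests are reported per coefficient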