Simulation Techniques for Determining Numbers of Programmers in the Process of Software Testing
One of the existing problems in the body of knowledge of software engineering is choosing an appropriate number of programmers across the software-development life cycle, particularly in the coding, testing, and maintenance processes. If the team is too large, the cost of developing the software increases; if it is too small, other problems arise, especially during deployment. This article therefore presents simulation techniques that help a development team determine an appropriate number of programmers, specifically in the process of software testing, including the percentage of errors that can occur during maintenance. First, the relationships among programmers, code, and testing time are constructed and studied. Second, simulation techniques are applied to determine a suitable number of programmers, with twenty experiments organized. Last, the percentage of errors from seeded bugs is estimated over 50 experiments. The contribution of this paper is not only managing the whole software-development life cycle but also improving confidence in testing accuracy by reducing the percentage of errors.
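The seeded-bug step described above can be illustrated with a small Monte Carlo sketch. This is a toy model, not the article's actual simulation: the detection probability, hour budget, and trial count below are all hypothetical parameters chosen for illustration.

```python
import random

def percent_errors(num_programmers, seeded_bugs=100, p_detect=0.05,
                   hours=200, trials=50, seed=42):
    """Percentage of seeded bugs that survive testing, averaged over
    Monte Carlo trials. Toy assumption: each programmer-hour detects
    one remaining bug with probability p_detect."""
    rng = random.Random(seed)
    leftovers = []
    for _ in range(trials):
        remaining = seeded_bugs
        for _ in range(num_programmers * hours):
            if remaining and rng.random() < p_detect:
                remaining -= 1
        leftovers.append(100.0 * remaining / seeded_bugs)
    return sum(leftovers) / trials
```

Sweeping `num_programmers` over a range then lets one pick the smallest team whose residual percent error is acceptable, which is the kind of trade-off the article's experiments explore.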
An Empirical Comparison of Four Java-based Regression Test Selection Techniques
Fall 2020. Includes bibliographical references. Regression testing is crucial to ensure that previously tested functionality is not broken by additions, modifications, and deletions to the program code. Since regression testing is an expensive process, researchers have developed regression test selection (RTS) techniques, which select and execute only those test cases that are impacted by the code changes. In general, an RTS technique has two main activities, which are (1) determining dependencies between the source code and test cases, and (2) identifying the code changes. Different approaches exist in the research literature to compute dependencies statically or dynamically at different levels of granularity. Also, code changes can be identified at different levels of granularity using different techniques. As a result, RTS techniques possess different characteristics related to the amount of reduction in the test suite size, time to select and run the test cases, test selection accuracy, and fault detection ability of the selected subset of test cases. Researchers have empirically evaluated the RTS techniques, but the evaluations were generally conducted using different experimental settings. This thesis compares four recent Java-based RTS techniques, Ekstazi, HyRTS, OpenClover, and STARTS, with respect to the above-mentioned characteristics using multiple revisions from five open source projects. It investigates the relationship between four program features and the performance of RTS techniques: total (program and test suite) size in KLOC, total number of classes, percentage of test classes over the total number of classes, and the percentage of classes that changed between revisions. The results show that STARTS, a static RTS technique, over-estimates dependencies between test cases and program code, and thus, selects more test cases than the dynamic RTS techniques Ekstazi and HyRTS, even though all three identify code changes in the same way. 
OpenClover identifies code changes differently from Ekstazi, HyRTS, and STARTS, and selects more test cases. STARTS achieved the lowest safety violation with respect to Ekstazi, and HyRTS achieved the lowest precision violation with respect to both STARTS and Ekstazi. Overall, the average fault detection ability of the RTS techniques was 8.75% lower than that of the original test suite. STARTS, Ekstazi, and HyRTS achieved higher test suite size reduction on the projects with over 100 KLOC than those with less than 100 KLOC. OpenClover achieved a higher test suite size reduction in the subjects that had a fewer total number of classes. The time reduction of OpenClover is affected by the combination of the number of source classes and the number of test cases in the subjects. The higher the number of test cases and source classes, the lower the time reduction
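The class-level selection that these tools perform can be sketched in miniature. This is an illustrative reduction, not any tool's real implementation: real RTS tools record dependencies automatically and hash class files to detect changes, and the test and class names below are invented.

```python
def select_tests(deps, changed):
    """Class-level RTS sketch: a test is selected when its recorded
    dependency set intersects the set of changed classes."""
    return {test for test, classes in deps.items() if classes & changed}

# Hypothetical dependency map (recorded by the tool in practice)
deps = {
    "AccountTest": {"Account", "Money"},
    "ReportTest": {"Report", "Formatter"},
}
selected = select_tests(deps, {"Money"})  # only AccountTest is impacted
```

A static tool that over-approximates dependencies (as the thesis observes for STARTS) would simply have larger sets in `deps`, and so select more tests than a dynamic tool for the same change.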
An extensible framework for intermediate language based code instrumentation
Code instrumenters play a vital role in the functionality of the Aristotle program analysis system. This project aims to replace Aristotle's existing code instrumenters, which process target programs at the source level, with instrumenters that process target programs at the intermediate language level. This change absolves the instrumenters of responsibility for the parsing function. Intermediate language processing confers numerous benefits, both to the quality of the instrumenter tools at the users' level and to future developers' comprehension of the instrumenter software. This report describes the project and its results.
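The core idea of intermediate-language instrumentation can be sketched on a toy IL. This is purely illustrative; Aristotle's actual instrumenters and IL are far richer. The sketch inserts a probe at every basic-block leader and remaps branch targets so control transfers still land correctly.

```python
def instrument(instrs):
    """Insert a ("probe", block_id) instruction before each basic-block
    leader of a toy IL, remapping branch targets accordingly.
    instrs: list of (opcode, arg); "jmp"/"br" carry an index target."""
    # Leaders: program entry, branch targets, fall-throughs after branches
    leaders = {0}
    for i, (op, arg) in enumerate(instrs):
        if op in ("jmp", "br"):
            leaders.add(arg)
            if i + 1 < len(instrs):
                leaders.add(i + 1)
    out, remap = [], {}
    for i, ins in enumerate(instrs):
        remap[i] = len(out)           # jumps into a block land on its probe
        if i in leaders:
            out.append(("probe", i))  # records that block i was entered
        out.append(ins)
    # Patch branch targets to instrumented indices
    return [(op, remap[arg]) if op in ("jmp", "br") else (op, arg)
            for op, arg in out]
```

Because the probe sits before the leader, a branch into a block always fires that block's probe, which is what makes the recorded trace a faithful block-coverage record.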
Galileo - a system for analyzing Java bytecode database handlers subsystem
Analysis of programs is an important activity in the field of software engineering. It is necessary for understanding code, which facilitates comprehensive testing, maintenance, and optimization. Aristotle is a tool for analyzing programs written in C. We have designed a system along similar lines for Java programs. In this project we have built the core modules of the system; these include a control flow graph builder, an instrumentor, and utilities for viewing analysis data. A control flow graph of a program helps in enumerating its flow structure. Instrumenting a program, on the other hand, helps identify the blocks of code hit during execution. Building the control flow graph and instrumenting a Java program form the main purpose of the system, Galileo.
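A control flow graph builder of the kind described can be sketched for a toy instruction set. This is illustrative only; Galileo operates on real Java bytecode, where opcodes and offsets are more involved. The classic recipe is: find block leaders, then connect each block to its branch target and fall-through successor.

```python
def build_cfg(instrs):
    """Map each basic-block start index to its successor block starts.
    instrs: list of (opcode, arg); "jmp" is unconditional, "br"
    conditional, anything else falls through."""
    # Leaders: entry, branch targets, and instructions after branches
    leaders = {0}
    for i, (op, arg) in enumerate(instrs):
        if op in ("jmp", "br"):
            leaders.add(arg)
            if i + 1 < len(instrs):
                leaders.add(i + 1)
    starts = sorted(leaders)
    edges = {s: set() for s in starts}
    for k, s in enumerate(starts):
        end = (starts[k + 1] if k + 1 < len(starts) else len(instrs)) - 1
        op, arg = instrs[end]
        if op in ("jmp", "br"):
            edges[s].add(arg)                 # branch edge
        if op != "jmp" and end + 1 < len(instrs):
            edges[s].add(end + 1)             # fall-through edge
    return edges
```

With the graph in hand, block coverage from an instrumented run can be mapped back onto nodes to visualize which paths a test exercised.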
Regression testing experiments and infrastructure
Like other scientific fields, computer science is in great need of experimentation to support, improve, disprove and even establish theories. One area in which experimental results are highly desired involves regression testing, which is performed on modified software to provide confidence that the software behaves correctly and that modifications have not adversely impacted the software's quality. Generally, many of the problems related to regression testing involve complex, practical tradeoffs between costs and benefits that cannot easily be evaluated analytically. Thus, empirical studies of regression testing are especially necessary.
We have conducted two experiments on regression testing topics: "an investigation of program spectra" and "an analysis of two safe regression test selection techniques". In addition to generating and analyzing the results for each experiment, we have also established a prototype infrastructure to facilitate experimentation with regression testing in particular, and experiments in software maintenance and testing in general.
Keywords: experiment, empirical study, regression testing, program spectra, regression test selection
Search-Based Information Systems Migration: Case Studies on Refactoring Model Transformations
Information systems are built to last for decades; however, the reality suggests otherwise. Companies are often pushed to modernize their systems to reduce costs, meet new policies, improve security, or be more competitive. Model-driven engineering (MDE) approaches are used in several successful projects to migrate systems. MDE raises the level of abstraction for complex systems by relying on models as first-class entities. These models are maintained and transformed using model transformations (MT), which are expressed by means of transformation rules that transform models from source to target metamodels. The migration process for information systems may take years for large systems. Thus, many changes are going to be introduced to the transformations to reflect new business requirements, fix bugs, or meet the updated metamodels. Therefore, the quality of MT should be continually checked and improved during the evolution process to avoid future technical debt. Most MT programs are written as one large module due to the lack of refactoring/modularization and regression testing tool support. In object-oriented systems, composition and modularization are used to tackle the issues of maintainability and testability. Moreover, refactoring is used to improve the non-functional attributes of the software, making it easier and faster for developers to work with and manipulate the code. Thus, we proposed an intelligent computational search approach to automatically modularize MT. Furthermore, we took inspiration from a well-defined quality assessment model for object-oriented design to propose a quality assessment model for MT in particular. The results showed a 45% improvement in the developers' speed to detect or fix bugs, and developers made 40% fewer errors when performing a task with the optimized version. Since refactoring operations change the transformation, it is important to apply regression testing to check their correctness and robustness. 
Thus, we proposed a multi-objective test case selection technique to find the best trade-off between coverage and computational cost. Results showed a drastic speed-up of the testing process while still maintaining good testing performance. A survey with practitioners highlighted the need for such a maintenance and evolution framework to improve the quality and efficiency of the existing migration process.
Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/149153/1/Bader Alkhazi Final Dissertation.pdf (restricted to UM users only)
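The coverage-versus-cost trade-off can be illustrated with a simple greedy heuristic. This is a toy stand-in: the dissertation uses multi-objective search, which explores a Pareto front rather than a single greedy path, and the test names, rule coverage sets, and costs below are invented.

```python
def greedy_select(tests, budget):
    """Pick tests maximizing new-rule coverage per unit cost until
    the cost budget is exhausted.
    tests: dict name -> (covered_rules: set, cost: float)."""
    selected, covered, spent = [], set(), 0.0
    while True:
        best, best_gain = None, 0.0
        for name, (cov, cost) in tests.items():
            if name in selected or spent + cost > budget:
                continue
            gain = len(cov - covered) / cost   # new coverage per cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            break
        selected.append(best)
        cov, cost = tests[best]
        covered |= cov
        spent += cost
    return selected, covered
```

A multi-objective search generalizes this by treating coverage and cost as separate objectives, letting the practitioner pick a point on the resulting trade-off curve instead of committing to one budget up front.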
Test case prioritization
Regression testing is an expensive software engineering activity intended to provide confidence that modifications to a software system have not introduced faults. Test case prioritization techniques help to reduce regression testing cost by ordering test cases in a way that better achieves testing objectives. In this thesis, we are interested in prioritizing to maximize a test suite's rate of fault detection, measured by a metric, APFD, trying to detect regression faults as early as possible during testing. In previous work, several prioritization techniques using low-level code coverage information had been developed. These techniques try to maximize APFD over a sequence of software releases, not targeting a particular release. These techniques' effectiveness was empirically evaluated. We present a larger set of prioritization techniques that use information at arbitrary granularity levels and incorporate modification information, targeting prioritization at a particular software release. Our empirical studies show significant improvements in the rate of fault detection over randomly ordered test suites. Previous work on prioritization assumed uniform test costs and fault severities, which might not be realistic in many practical cases. We present a new cost-cognizant metric, APFD_c, and prioritization techniques, together with approaches for measuring and estimating these costs. Our empirical studies evaluate prioritization in a cost-cognizant environment. Prioritization techniques have been developed independently with little consideration of their similarities. We present a general prioritization framework that allows us to express existing prioritization techniques by a framework algorithm using parameters and specific functions. Previous research assumed that prioritization was always beneficial if it improves the APFD metric. 
We introduce a prioritization cost-benefit model that more accurately captures relevant cost and benefit factors, and allows practitioners to assess whether it is economical to employ prioritization. Prioritization effectiveness varies across programs, versions, and test suites. We empirically investigate several of these factors on substantial software systems and present a classification-tree-based predictor that can help select the most appropriate prioritization technique in advance. Together, these results improve our understanding of test case prioritization and of the processes by which it is performed.
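The APFD metric discussed above has a standard closed form: for n tests and m faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test exposing fault i. A minimal implementation (the fault-to-test mapping below is invented for illustration):

```python
def apfd(order, detects):
    """Average Percentage of Faults Detected for one test ordering.
    order: test names in execution order.
    detects: dict fault -> set of test names that expose it."""
    n, m = len(order), len(detects)
    pos = {t: i + 1 for i, t in enumerate(order)}
    # Position of the first test that exposes each fault
    tf = [min(pos[t] for t in tests) for tests in detects.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)
```

Moving fault-revealing tests earlier raises the score: with five tests where fault f1 is caught only by t3 and f2 only by t1, running t3 first improves APFD from 0.7 to 0.8, which is exactly the effect prioritization techniques aim for.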