13,223 research outputs found

    Improving regression testing efficiency and reliability via test-suite transformations

    Get PDF
    As software becomes more important and ubiquitous, high-quality software also becomes crucial. Developers constantly make changes to improve software, and they rely on regression testing—the process of running tests after every change—to ensure that changes do not break existing functionality. Regression testing is widely used both in industry and in open source, but it suffers from two main challenges. (1) Regression testing is costly. Developers run a large number of tests in the test suite after every change, and changes happen very frequently. The cost lies both in the time developers spend waiting for the tests to finish so that they know whether the changes break existing functionality, and in the monetary cost of running the tests on machines. (2) Regression test suites contain flaky tests, which nondeterministically pass or fail when run on the same version of code, regardless of any changes. Flaky test failures can mislead developers into believing that their changes break existing functionality, even though those tests can fail without any changes. Developers therefore waste time trying to debug nonexistent faults in their changes.
    This dissertation proposes three lines of work that address these challenges of regression testing through test-suite transformations that modify test suites to make them more efficient or more reliable. Specifically, two lines of work explore how to reduce the cost of regression testing, and one line of work explores how to fix existing flaky tests.
    First, this dissertation investigates the effectiveness of test-suite reduction (TSR), a traditional test-suite transformation that removes tests deemed redundant with respect to other tests in the test suite based on heuristics. TSR outputs a smaller, reduced test suite to be run in the future. However, TSR risks removing tests that could detect faults in future changes. Although TSR was proposed over two decades ago, it has always been evaluated using program versions with seeded faults. Such evaluations do not precisely predict the effectiveness of the reduced test suite on future changes. This dissertation evaluates TSR in a real-world setting using real software evolution with real test failures. The results show that TSR techniques proposed in the past are not as effective as suggested by traditional TSR metrics, and those same metrics do not predict how effective a reduced test suite is in the future. Researchers need to either propose new TSR techniques that produce more effective reduced test suites or develop better metrics for predicting the effectiveness of reduced test suites.
    Second, this dissertation proposes a new transformation, implemented in a technique called TestOptimizer, that reduces the cost of regression testing under a modern build system by optimizing the placement of tests. Modern build systems treat a software project as a group of inter-dependent modules, including test modules that contain only tests. When developers make a change, the build system can use a developer-specified dependency graph among modules to determine which test modules are affected by the changed modules and run only the tests in the affected test modules. However, using build systems this way can waste test executions: suboptimal placement of tests, where developers place a test in a module that has more dependencies than the test actually needs, leads to running more tests than necessary after a change. TestOptimizer analyzes a project and proposes moving tests to reduce the number of test executions triggered over time by developer changes. Evaluation of TestOptimizer on five large proprietary projects at Microsoft shows that the suggested test movements can reduce test executions by 21.7 million (17.1%) across all evaluation projects. Developers accepted and intend to implement 84.4% of the reported suggestions.
    Third, to make regression testing more reliable, this dissertation proposes iFixFlakies, a framework for fixing a prominent kind of flaky tests: order-dependent tests. Order-dependent tests pass or fail depending on the order in which the tests are run. Intuitively, order-dependent tests fail either because they need another test to set up the state for them to pass, or because some other test pollutes the state before they are run, and the polluted state makes them fail. The key insight behind iFixFlakies is that test suites often already have tests, which we call helpers, that contain the logic for setting or resetting the state needed for order-dependent tests to pass. iFixFlakies searches a test suite for these helpers and then recommends patches for order-dependent tests using code from the helpers. Evaluation of iFixFlakies on 137 truly order-dependent tests from a public dataset shows that 81 of them have helpers, and iFixFlakies can fix all 81. Furthermore, among our GitHub pull requests for 78 of these order-dependent tests (3 of the 81 had already been fixed), developers accepted 38; the remaining ones are still pending, and none have been rejected so far.
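    Test-suite reduction techniques of the kind evaluated above are typically built on a greedy coverage heuristic. The sketch below is a minimal, illustrative version of that heuristic only; the function name, coverage sets, and test names are made up and are not taken from the dissertation's tooling.

    # Minimal sketch of greedy coverage-based test-suite reduction (TSR).
    # Coverage data and names are illustrative only.
    def greedy_reduce(coverage):
        """coverage: dict mapping test name -> set of covered requirements
        (e.g., statements or branches). Returns a reduced suite that still
        covers every requirement covered by the full suite."""
        remaining = set().union(*coverage.values())   # still-uncovered requirements
        reduced = []
        while remaining:
            # Pick the test covering the most still-uncovered requirements.
            best = max(coverage, key=lambda t: len(coverage[t] & remaining))
            gained = coverage[best] & remaining
            if not gained:        # no test adds coverage; stop
                break
            reduced.append(best)
            remaining -= gained
        return reduced

    # Example: testB is redundant with respect to testA and testC and is dropped.
    suite = {"testA": {"s1", "s2", "s3"}, "testB": {"s2", "s3"}, "testC": {"s4"}}
    print(greedy_reduce(suite))   # ['testA', 'testC']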

    Optimization of Sensor Location in Data Center

    Get PDF
    The demand for data centers has increased significantly due to the rapid growth of ICT technology. As a result, “green” issues arise in data centers, such as energy consumption, heat generation, and cooling requirements. These issues can be addressed by “Green of/by IT” in the context of both operating costs and environmental impacts. Deploying temperature monitoring in every corner of a data center is cost inefficient, so optimized sensor locations need to be determined to reduce the monitoring cost: it must be decided which locations to observe in order to obtain the most effective results at minimum cost. Furthermore, it is argued that in-depth knowledge of the historical data of the data center’s highly dynamic operating conditions will lead to better management of data center resources. Thus, this project aims to create a wireless temperature monitoring system with a location optimization algorithm to optimize the deployment locations of temperature sensors. Furthermore, real-time temperature data collection and monitoring can be used to predict the next state of the temperature and detect potential anomalies in heat generation in the data center, so that a quick cooling response can be invoked – Green by IT.
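    One common way to choose sensor locations under such a cost budget is a greedy coverage heuristic: repeatedly place the next sensor where it monitors the most still-unmonitored spots. The sketch below is a hypothetical illustration of that general idea only; the grid, coverage radius, and sensor count are invented values, not parameters or the algorithm from this project.

    # Hypothetical greedy sensor-placement sketch; all values are illustrative.
    from itertools import product

    def covered(sensor, spot, radius=1.5):
        # Assume a sensor monitors every spot within a fixed Euclidean radius.
        return ((sensor[0] - spot[0]) ** 2 + (sensor[1] - spot[1]) ** 2) ** 0.5 <= radius

    def place_sensors(candidates, spots, k):
        chosen, uncovered = [], set(spots)
        for _ in range(k):
            # Greedily take the candidate covering the most unmonitored spots.
            best = max(candidates, key=lambda c: sum(covered(c, s) for s in uncovered))
            chosen.append(best)
            uncovered = {s for s in uncovered if not covered(best, s)}
        return chosen

    # 6 x 4 grid of rack positions; place 3 sensors.
    grid = list(product(range(6), range(4)))
    print(place_sensors(grid, grid, k=3))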

    Simultaneous optimization of decisions using a linear utility function

    Get PDF
    The purpose of this paper is to simultaneously optimize decision rules for combinations of elementary decisions. As a result of this approach, rules are found that make more efficient use of the data than does optimizing those decisions separately. The framework for the approach is derived from empirical Bayesian theory. To illustrate the approach, two elementary decisions--selection and mastery decisions--are combined into a simple decision network. A linear utility structure is assumed. Decision rules are derived for both quota-free and quota-restricted selection-mastery decisions for several subpopulations. An empirical example of instructional decision making in an individual study system concludes the paper. The example involves 43 freshman medical students (27 disadvantaged and 16 advantaged with respect to elementary medical knowledge). Both the selection and mastery tests consisted of 17 free-response items on elementary medical knowledge, with test scores ranging from 0 to 100. The treatment consisted of a computer-aided instructional program.

    Simultaneous optimization of decisions using a linear utility function

    Get PDF
    The purpose of this article is to simultaneously optimize decision rules for combinations of elementary decisions. With this approach, rules are found that make more efficient use of the data than could be achieved by optimizing these decisions separately. The framework for the approach is derived from Bayesian decision theory. To illustrate the approach, two elementary decisions (selection and mastery decisions) are combined into a simple decision network. A linear utility structure is assumed. Decision rules are derived for both quota-free and quota-restricted selection-mastery decisions in the case of several subpopulations. An empirical example of instructional decision making in an individual study system concludes the article.
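    Under a linear utility structure, the optimal rule for a single elementary decision reduces to comparing posterior expected utilities that are linear in the examinee's posterior expected true score. The toy sketch below illustrates only that general reduction; the utility coefficients and the posterior mean are invented for illustration and are not the article's empirical values.

    # Toy sketch: with u_d(t) = a_d * t + b_d, the expected utility of decision d
    # is a_d * E[t | data] + b_d, so the rule picks the maximizing decision.
    def optimal_decision(posterior_mean, utilities):
        """utilities: dict mapping decision name -> (a, b) of u(t) = a*t + b."""
        expected = {d: a * posterior_mean + b for d, (a, b) in utilities.items()}
        return max(expected, key=expected.get), expected

    # Example: a mastery decision ('advance' vs. 'retain') for one student.
    utilities = {"advance": (1.0, -60.0), "retain": (0.0, 0.0)}
    decision, eu = optimal_decision(posterior_mean=72.0, utilities=utilities)
    print(decision, eu)   # 'advance' once the posterior mean exceeds 60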

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
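    As a generic illustration of the kind of ML task surveyed here (not a method from the paper), one recurring pattern is to train a fast learned surrogate that predicts a design metric from design features, replacing a slow analysis run during design iteration. All features, values, and the "path delay" target below are synthetic assumptions.

    # Hypothetical surrogate model for a made-up design metric; data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Synthetic training data: [gate_count, fanout, wire_length] per path.
    X = rng.uniform([100, 1, 10], [10000, 16, 500], size=(1000, 3))
    # Made-up ground truth: delay grows with all three features, plus noise.
    y = 0.002 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 1, 1000)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(model.predict([[5000, 8, 250]]))   # fast estimate instead of a slow analysis run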

    ASC: A stream compiler for computing with FPGAs

    No full text
    Published version