    Proceedings of the Eighth Annual Software Engineering Workshop

    The four major topics of discussion were the NASA Software Engineering Laboratory, software testing, human factors in software engineering, and software quality assessment. As in past years, 12 position papers were presented (3 for each topic), each followed by questions and very active participation from the general audience.

    Quality-Aware Learning to Prioritize Test Cases

    Software applications evolve at a rapid rate because of continuous functionality extensions, changes in requirements, optimization of code, and fixes of faults. Moreover, modern software is often composed of components engineered with different programming languages by different internal or external teams. During this evolution, it is crucial to continuously detect unintentionally injected faults and continuously release new features. Software testing aims at reducing this risk by running a suite of test cases regularly or at each change of the source code. However, the large number of test cases makes it infeasible to run all of them. Automated test case prioritization and selection techniques have been studied in order to reduce the cost and improve the efficiency of testing tasks. However, the current state-of-the-art techniques remain limited in some aspects. First, existing test prioritization and selection techniques often assume that faults are equally distributed across the software components, which can lead to spending most of the testing budget on components less likely to fail rather than on the ones most likely to contain faults. Second, existing techniques share a scalability problem, not only in terms of the size of the selected test suite but also in terms of the round-trip time between code commits and engineer feedback on test case failures in the context of Continuous Integration (CI) development environments. Finally, it is hard to algorithmically capture the domain knowledge of the human testers, which is crucial in testing and release cycles. This thesis is a new take on the old problem of reducing the cost of software testing in these regards, presenting a data-driven, lightweight approach for test case prioritization and execution scheduling that is used (i) during CI cycles for quick and resource-optimal feedback to engineers, and (ii) during release planning by capturing the testers' domain knowledge and release requirements. Our approach combines software quality metrics with code churn metrics to build a regression model that predicts the fault density of each component and a classification model that discriminates faulty from non-faulty components. Both models are used to guide the testing effort to the components likely to contain the largest number of faults. The predictive models have been validated on eight industrial automotive software applications at Daimler, showing a classification accuracy of 89% and an accuracy of 85.7% for the regression model. The thesis develops a test case prioritization model based on features of the code change, the test execution history, and the component development history. The model reduces the cost of CI by predicting whether a particular code change should trigger the individual test suites and their corresponding test cases. To algorithmically capture the domain knowledge and preferences of the tester, we developed a test case execution scheduling model that consumes the tester's preferences in the form of a probabilistic graph and solves the optimal test budget allocation problem both online in the context of CI cycles and offline when planning a release. Finally, the thesis presents a theoretical cost model that describes when our prioritization and scheduling approach is worthwhile. The overall approach is validated on two industrial analytical applications in the area of energy management and predictive maintenance, showing that over 95% of the test failures are still reported back to the engineers while only 43% of the total available test cases are executed.
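    To make the described pipeline concrete, the following is a minimal illustrative sketch, not the thesis implementation: a regression model predicts per-component fault density from quality and churn metrics, a classifier flags likely-faulty components, and the test budget is spent on the highest-ranked components. The feature names, the toy data, and the scikit-learn models are assumptions made for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# One row per component: [complexity, coupling, lines_changed, commits, authors]
# (hypothetical metrics; the thesis does not list its exact features)
X_train = np.array([[12, 4, 150, 9, 2],
                    [45, 11, 900, 30, 5],
                    [7, 2, 40, 3, 1]], dtype=float)
fault_density = np.array([0.8, 3.2, 0.1])       # historical faults per KLOC (toy data)
is_faulty = (fault_density > 0.5).astype(int)   # labels for the classifier

# Regression model for fault density, classifier for faulty vs. non-faulty components
regressor = RandomForestRegressor(random_state=0).fit(X_train, fault_density)
classifier = RandomForestClassifier(random_state=0).fit(X_train, is_faulty)

def prioritize(components, features, budget):
    """Rank components by predicted fault density, keep those the classifier
    flags as likely faulty, and cut off at the available test budget."""
    density = regressor.predict(features)
    faulty = classifier.predict(features).astype(bool)
    order = np.argsort(-density)          # highest predicted fault density first
    return [components[i] for i in order if faulty[i]][:budget]

# Example: pick the 2 most fault-prone components to test for a new change
print(prioritize(["ecu_ctrl", "diag_svc", "ui_shell"], X_train, budget=2))
```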

    Real-time Prediction of Cascading Failures in Power Systems

    Blackouts in power systems cause major financial and societal losses, which necessitate devising better prediction techniques specifically tailored to detecting and preventing them. Since blackouts begin as a cascading failure (CF), early detection of these CFs gives the operators ample time to stop the cascade from propagating into a large-scale blackout. In this thesis, a real-time load-based prediction model for CFs using phasor measurement units (PMUs) is proposed. The proposed model provides load-based predictions; therefore, it has the advantages of being applicable as a controller input and providing the operators with better information about the affected regions. In addition, it can aid in visualizing the effects of the CF on the grid. To extend the functionality and robustness of the proposed model, prediction intervals are incorporated based on the convergence width criterion (CWC) to allow the model to account for the uncertainties of the network, which was not available in previous works. Although this model addresses many issues in previous works, it has limitations in both scalability and the capture of transient behaviours. Hence, a second model based on a recurrent neural network (RNN) long short-term memory (LSTM) ensemble is proposed. The RNN-LSTM is added to better capture the dynamics of the power system while also giving faster responses. To address the scalability of the model, a novel selection criterion for inputs is introduced to minimize the number of inputs while maintaining high information entropy. The criteria include the graph-theoretic distance between buses, the centrality of the buses with respect to the fault location, and the information entropy of the bus. These criteria are merged using higher statistical moments to reflect the importance of each bus and generate indices that describe the grid with a smaller set of inputs. The results indicate that this model has the potential to provide more meaningful and accurate results than those available in the previous literature and can be used as part of the integrated remedial action scheme (RAS) system, either as a warning tool or as a controller input, as the accuracy of detecting affected regions reached 99.9% with a maximum delay of 400 ms. Finally, a validation loop extension is introduced to allow the model to self-update in real time using importance sampling and case-based reasoning, extending the practicality of the model by allowing it to learn from historical data as time progresses.
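    As a rough illustration of the input-selection idea described above (not the thesis code), the sketch below scores each bus by combining its graph distance to the fault location, its centrality, and the information entropy of its PMU measurements, then keeps only the top-scoring buses as model inputs. The networkx/scipy calls, the simple multiplicative combination (the thesis merges the criteria using higher statistical moments), and all parameter values are assumptions.

```python
import networkx as nx
import numpy as np
from scipy.stats import entropy

def select_buses(grid: nx.Graph, fault_bus, pmu_history: dict, n_keep: int):
    """pmu_history maps bus -> 1-D array of recent PMU load/voltage samples."""
    dist = nx.single_source_shortest_path_length(grid, fault_bus)
    centrality = nx.betweenness_centrality(grid)
    scores = {}
    for bus, samples in pmu_history.items():
        hist, _ = np.histogram(samples, bins=16, density=True)
        h = entropy(hist + 1e-12)                    # information entropy of the bus signal
        d = 1.0 / (1.0 + dist.get(bus, len(grid)))   # closer to the fault -> larger weight
        c = centrality.get(bus, 0.0)                 # importance of the bus in the grid
        scores[bus] = h * d * (1.0 + c)              # simple merge; illustrative only
    return sorted(scores, key=scores.get, reverse=True)[:n_keep]

# Example on a small toy grid with synthetic PMU histories
g = nx.path_graph(6)  # buses 0..5 connected in a line
history = {b: np.random.default_rng(b).normal(1.0, 0.1, 200) for b in g.nodes}
print(select_buses(g, fault_bus=2, pmu_history=history, n_keep=3))
```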

    EA-BJ-03


    Availability estimation and management for complex processing systems

    “Availability” is the terminology used in asset-intensive industries such as petrochemical and hydrocarbon processing to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on one performance indicator. These benefits make availability a highly sought-after area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms are explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry and is also explored in this research. The importance of availability has encouraged companies to invest in this domain, dedicating effort and resources to developing novel techniques for system availability enhancement. Most of the work in this area focuses on individual equipment rather than facility- or system-level availability assessment and management. This research focuses on developing new systematic methods to estimate system availability. The main focus areas are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
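    The abstract does not disclose its models, so the following is background only: a small sketch of the textbook steady-state availability A = MTBF / (MTBF + MTTR) and of how component availabilities compose for series and parallel (redundant) configurations, which underlies most system-level availability estimation. The pump/compressor example and all numbers are hypothetical.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the asset is ready to operate."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(avails):
    """All components must work: availabilities multiply."""
    out = 1.0
    for a in avails:
        out *= a
    return out

def parallel(avails):
    """Redundant components: the system is down only if every one fails."""
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical example: two redundant pumps feeding a single compressor
pumps = parallel([availability(2000, 24), availability(2000, 24)])
plant = series([pumps, availability(8000, 48)])
print(f"estimated plant availability: {plant:.4f}")
```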

    Nuclear Power

    The world of the twenty-first century is an energy-consuming society. Due to increasing population and living standards, each year the world requires more energy and new, efficient systems for delivering it. Furthermore, the new systems must be inherently safe and environmentally benign. These realities of today's world are among the reasons that have led to serious interest in deploying nuclear power as a sustainable energy source. Today's nuclear reactors are safe and highly efficient energy systems that offer electricity and a multitude of co-generation energy products ranging from potable water to heat for industrial applications. The goal of the book is to show the current state of the art in the covered technical areas as well as to demonstrate how general engineering principles and methods can be applied to nuclear power systems.

    Modeling and Simulation in Engineering

    The general aim of this book is to present selected chapters of two types: chapters with more focus on modeling and the necessary simulation details, and chapters with less focus on modeling but more detail on simulation. This book contains eleven chapters divided into two sections: Modeling in Continuum Mechanics and Modeling in Electronics and Engineering. We hope our book, entitled "Modeling and Simulation in Engineering - Selected Problems", will serve as a useful reference for students, scientists, and engineers.