
    A Framework for Software Reliability Management Based on the Software Development Profile Model

    Recent empirical studies of software have shown a strong correlation between the change history of files and their fault-proneness. Statistical data analysis techniques, such as regression analysis, have been applied to validate this finding. While these regression-based models show a correlation between selected software attributes and defect-proneness, in most cases they are inadequate for demonstrating causality. For this reason, we introduce the Software Development Profile Model (SDPM) as a causal model for identifying defect-prone software artifacts based on their change history and software development activities. The SDPM is based on the assumption that human error during software development is the sole cause of defects leading to software failures. The SDPM assumes that whenever a software construct is touched, it has a chance to become defective; software development activities such as inspection, testing, and rework further affect the remaining number of software defects. Under this assumption, the SDPM estimates the defect content of software artifacts from their change history and the development activities applied to them. The SDPM improves on existing defect estimation models because it not only uses evidence from the current project to estimate defect content, but also allows software managers to manage projects quantitatively by making risk-informed decisions early in the software development life cycle. We apply the SDPM in several real-life software development projects, showing how it is used, analyzing its accuracy in predicting defect-prone files, and comparing the results with a Poisson regression model.
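    The Poisson regression baseline mentioned above is a standard technique; the following minimal sketch shows how such a regression-based defect predictor can be fitted to change-history features. The feature names and synthetic data are illustrative assumptions, not the paper's actual predictor set.

```python
# A hedged sketch of a Poisson regression baseline for defect prediction:
# defect counts per file regressed on change-history features. Feature
# names (n_changes, n_authors, churn) are illustrative only.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Synthetic change-history features for 200 files:
# number of changes, number of distinct authors, lines churned.
X = rng.poisson(lam=[5, 2, 40], size=(200, 3)).astype(float)

# Synthetic defect counts loosely tied to change activity.
y = rng.poisson(lam=0.2 * X[:, 0] + 0.005 * X[:, 2] + 0.5)

model = PoissonRegressor(alpha=1e-3, max_iter=300)
model.fit(X, y)

# Predicted defect content per file; rank to flag the most defect-prone.
predicted = model.predict(X)
most_defect_prone = np.argsort(predicted)[::-1][:10]
print("Top-10 predicted defect-prone file indices:", most_defect_prone)
```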

    Fault-Proneness Estimation and Java Migration: A Preliminary Case Study

    The paper presents and discusses an industrial case study in which a software project that has been running for eight years was analyzed. We collected about 1000 daily versions, together with data from the file version control system and the bug tracker. The project was migrated from Java 1.4 to Java 1.5, and the visible effects of this migration on the bytecode are presented and discussed. From this case study, we expect to observe the effects of the Java technology migration on code size and to improve the performance of existing fault-proneness estimation models. Preliminary results on fault-proneness estimation are shown.

    Prognostics and health management for maintenance practitioners - Review, implementation and tools evaluation.

    In the literature, prognostics and health management (PHM) systems have been studied by researchers from many different engineering fields to increase system reliability, availability and safety, and to reduce the maintenance cost of engineering assets. Much of the work in PHM research concentrates on designing robust and accurate models to assess the health state of components in particular applications to support decision making. Models that involve mathematical interpretations, assumptions and approximations make PHM hard to understand and implement in real-world applications, especially for maintenance practitioners in industry. Prior knowledge is crucial for implementing PHM in complex systems and building highly reliable systems. To fill this gap and motivate industry practitioners, this paper attempts to provide a comprehensive review of the PHM domain and discusses important issues of uncertainty quantification and implementation, alongside prognostics feature and tool evaluation. The PHM implementation steps discussed here consist of: (1) critical component analysis, (2) appropriate sensor selection for condition monitoring (CM), (3) prognostics feature evaluation under data analysis, and (4) prognostics methodology and tool evaluation matrices derived from the PHM literature. Beyond implementation aspects, the paper also reviews previous and ongoing research on high-speed train bogies to highlight problems faced in the train industry and to emphasize the significance of PHM for further investigation.
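    As a small illustration of step (3), prognostics feature evaluation, the sketch below computes one common feature-quality criterion, monotonicity, for two candidate condition-monitoring signals. The metric variant and the synthetic signals are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: evaluating candidate prognostic features with a
# monotonicity score (one standard variant from the PHM literature).
import numpy as np

def monotonicity(feature: np.ndarray) -> float:
    """Score in [0, 1]; 1 means the signal moves in one direction only."""
    diffs = np.sign(np.diff(feature))
    return abs(diffs.sum()) / len(diffs)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)

# Synthetic run-to-failure histories for two candidate features:
bearing_temp = 40 + 15 * t**2 + rng.normal(0, 0.1, t.size)  # degrading trend
ambient_noise = rng.normal(0, 1.0, t.size)                  # no trend

for name, feat in (("bearing_temp", bearing_temp), ("ambient_noise", ambient_noise)):
    print(f"{name}: monotonicity = {monotonicity(feat):.2f}")
# The trending feature scores markedly higher, so it would be preferred
# as a prognostic feature under this criterion.
```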

    Software quality and reliability prediction using Dempster-Shafer theory

    As software systems are increasingly deployed in mission-critical applications, accurate quality and reliability predictions are becoming a necessity. Most accurate prediction models require extensive testing effort, implying increased cost and slowing down the development life cycle. We developed two novel statistical models based on Dempster-Shafer theory, which provide accurate predictions from relatively small data sets of direct and indirect software reliability and quality predictors. The models are flexible enough to incorporate information generated throughout the development life cycle to improve prediction accuracy.

    Our first contribution is an original algorithm for building Dempster-Shafer Belief Networks using prediction logic. This model has been applied to software quality prediction. We demonstrated that the prediction accuracy of Dempster-Shafer Belief Networks is higher than that achieved by logistic regression, discriminant analysis, random forests, and the algorithms in two machine learning software packages, See5 and WEKA. The difference in performance between the Dempster-Shafer Belief Networks and the other methods is statistically significant.

    Our second contribution is also a practical extension of Dempster-Shafer theory. The major limitation of Dempster's rule and other known rules of evidence combination is their inability to handle information coming from correlated sources. Motivated by the inherently high correlations between early life-cycle predictors of software reliability, we extended Murphy's rule of combination to account for these correlations. When used as part of a methodology that fuses various software reliability prediction systems, this rule provided more accurate predictions than previously reported methods. In addition, we proposed an algorithm that defines the upper and lower bounds of the belief function of the combination results. To demonstrate its generality, we successfully applied it in the design of an Online Safety Monitor, which fuses multiple correlated time-varying estimates of the convergence of neural network learning in an intelligent flight control system.
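    To make the limitation discussed above concrete, the following self-contained sketch implements Dempster's classical rule of combination, which assumes independent evidence sources. It illustrates the rule being extended, not the correlation-aware extension itself; the example masses are invented.

```python
# Dempster's classical rule of combination for two mass functions over a
# small frame of discernment. Masses on non-intersecting focal elements
# are treated as conflict and normalised away.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions; keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    # Normalise by the non-conflicting mass (Dempster's normalisation).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two pieces of evidence about whether a module is Fault-prone (F) or Not (N).
F, N = frozenset({"F"}), frozenset({"N"})
FN = F | N  # ignorance: mass on the whole frame
m1 = {F: 0.6, FN: 0.4}
m2 = {F: 0.5, N: 0.2, FN: 0.3}
for hypothesis, mass in dempster_combine(m1, m2).items():
    print(set(hypothesis), round(mass, 3))
```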

    Addressing Complexity and Intelligence in Systems Dependability Evaluation

    Engineering and computing systems are increasingly complex, intelligent, and open and adaptive. When it comes to the dependability evaluation of such systems, the characteristics of “complexity” and “intelligence” pose certain challenges. The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing and repairs. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the modelling of environmental conditions that may impact dependability. For instance, weather and logistics can influence maintenance actions, and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model called the “Butterfly Maintenance Model (BMM)” to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, such as swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime.

    The challenge of “intelligence” arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. Traditional dependability analysis, however, assumes that systems are programmed or designed. When a system has learned from data, a distributional shift of operational data from training data may cause the ML component to behave incorrectly, e.g., to misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures to monitor the performance of ML against such distributional shifts. The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, limitations and future work.
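    As a rough illustration of the SafeML idea, the sketch below flags a distributional shift by comparing operational data against training data with a two-sample Kolmogorov-Smirnov distance. SafeML considers several statistical distance measures; the choice of KS and the 0.1 threshold here are illustrative assumptions.

```python
# Hedged sketch of distribution-shift monitoring: compare operational
# inputs against the training distribution and flag large shifts.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

def shifted(operational: np.ndarray, threshold: float = 0.1) -> bool:
    """Flag a distributional shift if the KS distance exceeds the threshold."""
    stat, _ = ks_2samp(training_feature, operational)
    return stat > threshold

in_dist = rng.normal(0.0, 1.0, 500)   # matches training conditions
drifted = rng.normal(0.8, 1.3, 500)   # shifted operational conditions
print("in-distribution flagged:", shifted(in_dist))  # expected: False
print("drifted flagged:", shifted(drifted))          # expected: True
```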

    Improved wind turbine monitoring using operational data

    With wind energy becoming a major source of energy, there is a pressing need to reduce all associated costs to remain competitive in a market that might be fully subsidy-free in the near future. Before thousands of wind turbines were installed all over the world, research into, for example, understanding aerodynamics, developing new materials, designing better gearboxes and improving power electronics helped to cut down wind turbine manufacturing costs. It might be assumed that this would be sufficient to reduce the costs of wind energy, as the resource, the wind itself, is free. However, it has become clear that the operation and maintenance of wind turbines contributes significantly to the overall cost of energy. Harsh environmental conditions and the frequently remote locations of the turbines make maintenance of wind turbines challenging. Only recently has the industry realised that a move from reactive and scheduled maintenance towards preventative or condition-based maintenance will be crucial to further reduce costs. Knowing the condition of the wind turbine is key for any optimisation of operation and maintenance.

    Various advanced sensors and monitoring systems have been developed in recent years. However, these inevitably incur new costs that need to be worthwhile, and retrofits to existing turbines might not always be feasible. In contrast, this work focuses on ways to use the operational data recorded by the turbine's Supervisory Control And Data Acquisition (SCADA) system, which is installed in all modern wind turbines for operating purposes, at no additional cost. SCADA data usually contain information about the environmental conditions (e.g. wind speed, ambient temperature), the operation of the turbine (power production, rotational speed, pitch angle) and potentially the system's health status (temperatures, vibration). These measurements are commonly recorded as ten-minute averages and can be seen as indirect, top-level information about the turbine's condition.

    Firstly, this thesis discusses the use of operational data to monitor power performance, to assess the overall efficiency of wind turbines and to analyse and optimise maintenance. In a sensitivity study, the financial consequences of imperfect maintenance are evaluated based on case study data and compared with environmental effects such as blade icing. It is shown how the decision-making of wind farm operators could be supported with detailed 'what-if' scenario analyses. Secondly, model-based monitoring of SCADA temperatures is investigated. This approach tries to identify hidden changes in the load-dependent fluctuations of drivetrain temperatures that can potentially reveal increased degradation and possible imminent failure. A detailed comparison of machine learning regression techniques and model configurations is conducted based on data from four wind farms with varying properties. The results indicate that the detailed setup of the model is very important, while the selection of the modelling technique might be less relevant than expected. Ways to establish reliable failure detection are discussed, and a condition index is developed based on an ensemble of different models and anomaly measures. However, the findings also highlight that better documentation of maintenance is required to further improve data-driven condition monitoring approaches.

    In the next part, the capabilities of operational data are explored in a study with data from both the SCADA system and a Condition Monitoring System (CMS) based on drivetrain vibrations. Analyses of signal similarity and data clusters reveal signal relationships and potential synergies between the different data sources. An application of machine learning techniques demonstrates that the alarms of the commercial CMS can, in certain cases, be predicted with SCADA data alone. Finally, the benefits of having wind turbines in farms are investigated in the context of condition monitoring. Several approaches are developed to improve failure detection based on operational statistics, CMS vibrations or SCADA temperatures. It is demonstrated that comparisons with neighbouring turbines can yield earlier and more reliable warnings of imminent failures.

    This work has been part of the Advanced Wind Energy Systems Operation and Maintenance Expertise (AWESOME) project, a European consortium of companies, universities and research centres in the wind energy sector from Spain, Italy, Germany, Denmark, Norway and the UK. Parts of this work were developed in collaboration with other fellows in the project (as marked and explained in footnotes).
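    As an illustration of the model-based temperature monitoring described above, the sketch below trains a normal-behaviour regression model on healthy SCADA data and flags new ten-minute averages whose residuals are anomalous. The variable names, the random-forest choice and the 3-sigma limit are illustrative assumptions, not the thesis's configuration.

```python
# Hedged sketch of normal-behaviour modelling: learn the load-dependent
# drivetrain temperature from healthy SCADA data, then monitor residuals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def scada(n, offset=0.0):
    """Synthetic ten-minute SCADA averages: (power, ambient) -> temperature."""
    power = rng.uniform(0, 2000, n)    # kW
    ambient = rng.uniform(-5, 25, n)   # deg C
    temp = 30 + 0.01 * power + 0.5 * ambient + rng.normal(0, 1, n) + offset
    return np.column_stack([power, ambient]), temp

X_train, y_train = scada(5000)  # healthy training period
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Residual threshold from healthy data (simple 3-sigma rule).
resid = y_train - model.predict(X_train)
limit = 3 * resid.std()

# One day of new data with a hidden 5 deg C overheating offset.
X_new, y_new = scada(144, offset=5.0)
alarms = np.abs(y_new - model.predict(X_new)) > limit
print(f"{alarms.sum()} of {alarms.size} samples flagged as anomalous")
```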

    A fault detection strategy for software projects

    The existing software fault prediction models require metrics and fault data belonging to previous software versions or similar software projects. However, there are cases when previous fault data are not available, such as a software company's transition to a new project domain. In such situations, supervised learning methods using fault labels cannot be applied, leading to the need for new techniques. We propose a software fault prediction strategy that uses method-level metric thresholds to predict the fault-proneness of unlabelled program modules. This technique was experimentally evaluated on the NASA datasets KC2 and JM1. Some existing approaches apply clustering techniques to group modules, a process followed by an evaluation phase in which a software quality expert analyses a representative of each cluster and then labels the modules as fault-prone or not fault-prone. Our approach does not require a human expert during the prediction process. It is a fault prediction strategy that combines method-level metric thresholds as the filtering mechanism and an OR operator as the composition mechanism.
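    A minimal sketch of the proposed composition follows: a module is flagged as fault-prone if any method-level metric exceeds its threshold (the OR operator). The metric names and threshold values below are illustrative placeholders, not the ones derived in the paper.

```python
# Hedged sketch: threshold filters per metric, composed with OR.
from typing import Dict

# Illustrative thresholds; the paper derives its own from the datasets.
THRESHOLDS: Dict[str, float] = {
    "cyclomatic_complexity": 10,
    "lines_of_code": 100,
    "halstead_effort": 1000,
}

def fault_prone(metrics: Dict[str, float]) -> bool:
    """OR-composition over per-metric threshold filters."""
    return any(metrics[name] > limit for name, limit in THRESHOLDS.items())

modules = {
    "parser.c": {"cyclomatic_complexity": 14, "lines_of_code": 80, "halstead_effort": 700},
    "logger.c": {"cyclomatic_complexity": 4, "lines_of_code": 35, "halstead_effort": 150},
}
for name, m in modules.items():
    print(name, "-> fault-prone" if fault_prone(m) else "-> not fault-prone")
```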