
    Enhance Rule Based Detection for Software Fault Prone Modules

    Software quality assurance is necessary to increase the level of confidence in developed software and to reduce the overall cost of software projects. The problem addressed in this research is the prediction of fault-prone modules using data mining techniques. Predicting fault-prone modules allows software managers to allocate additional testing effort and resources to those modules, and it can also justify investment in better designs for future systems to avoid building error-prone modules. Software quality models based on data mined from previous projects can identify fault-prone modules in a similar current development project, once similarity between the projects is established. In this paper, we applied different rule-based data mining classification techniques to several publicly available datasets from the NASA software repository (e.g. PC1, PC2, etc.). The goal was to classify software modules as either fault-prone or not fault-prone. The paper proposes a modification of the RIDOR algorithm; the results show that the enhanced RIDOR algorithm outperforms other classification techniques in terms of both the number of extracted rules and accuracy. The implemented algorithm learns defect prediction by mining static code attributes, which are then used to build a new defect predictor with high accuracy and a low error rate.
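
    The abstract names the workflow (rule-based learners over static code attributes from NASA datasets) without giving code. Below is a minimal sketch of that workflow, assuming a CSV export of a dataset such as PC1 with a boolean defects column; scikit-learn's DecisionTreeClassifier stands in for RIDOR, which ships with Weka rather than any Python library, and the file and column names are hypothetical.

```python
# Sketch: classify NASA MDP-style modules as fault-prone or not, using a
# shallow decision tree as a stand-in for a rule learner such as RIDOR.
# (RIDOR itself is a Weka algorithm; file/column names here are hypothetical.)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.read_csv("pc1.csv")                  # static code attributes per module
X = data.drop(columns=["defects"])             # e.g. LOC, complexity, Halstead metrics
y = data["defects"]                            # True = fault-prone, False = not

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A shallow tree keeps the extracted rule set small, mirroring the paper's
# twin criteria of accuracy and number of extracted rules.
clf = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", clf.score(X_test, y_test))
print(export_text(clf, feature_names=list(X.columns)))  # tree rendered as if-then rules
```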

    Run-time risk management in adaptive ICT systems

    We present results of the SERSCIS project related to risk management and mitigation strategies in adaptive multi-stakeholder ICT systems. The SERSCIS approach uses semantic threat models to support automated design-time threat identification and mitigation analysis. The focus of this paper is the use of these models at run-time for automated threat detection and diagnosis, based on a combination of semantic reasoning and Bayesian inference applied to run-time system monitoring data. The resulting dynamic risk management approach is compared to a conventional ISO 27000-type approach, and validation test results are presented from an Airport Collaborative Decision Making (A-CDM) scenario involving data exchange between multiple airport service providers.
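
    The abstract does not spell out the inference model, but the Bayesian step can be illustrated with a minimal posterior update over monitoring alerts; all priors, likelihoods, and names below are illustrative assumptions rather than SERSCIS values.

```python
# Minimal Bayesian update sketch: probability that a threat is active given an
# alert from run-time monitoring. Priors and likelihoods are illustrative, not
# taken from the SERSCIS paper.
def posterior_threat(prior, p_alert_given_threat, p_alert_given_ok):
    """P(threat | alert) via Bayes' rule."""
    p_alert = p_alert_given_threat * prior + p_alert_given_ok * (1.0 - prior)
    return p_alert_given_threat * prior / p_alert

p = 0.02                                   # prior: threat active 2% of the time
for _ in range(3):                         # three consecutive alerts observed
    p = posterior_threat(p, p_alert_given_threat=0.9, p_alert_given_ok=0.1)
    print(f"updated P(threat) = {p:.3f}")
```

    Repeated alerts compound quickly: after three observations the posterior rises from 2% to well over 90%, which is the kind of evidence accumulation a run-time diagnosis component relies on.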

    Predicting Fault-prone Software Module Using Data Mining Technique and Fuzzy Logic

    This paper discusses a new model for improving the reliability and quality of software systems by predicting fault-prone modules before testing. The model utilizes the classification capability of data mining techniques and the knowledge stored in software metrics to classify a software module as fault-prone or not fault-prone. A decision tree is constructed using the ID3 algorithm on existing project data in order to gain information for deciding whether a particular module is fault-prone. The gained information is converted into fuzzy rules and integrated with a fuzzy inference system to predict whether a module in the target data is fault-prone. The model is also able to predict the fault-proneness degree of a faulty module. The goal is to help software managers concentrate their testing efforts on fault-prone modules in order to improve the reliability and quality of the software system. We used NASA project data sets from the PROMISE repository to validate the predictive accuracy of the model.
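
    As a minimal sketch of the fuzzy end of the pipeline (the ID3 tree and its conversion to rules are omitted), the snippet below turns two module metrics into a fault-proneness degree with triangular membership functions; the breakpoints and the single rule are assumptions, not the paper's.

```python
# Sketch: a tiny fuzzy inference step turning metric values into a
# fault-proneness degree. Membership breakpoints and the rule are illustrative.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fault_proneness(loc, complexity):
    # Fuzzify the two metrics (breakpoints are assumptions).
    loc_high = tri(loc, 100, 300, 10_000)
    cx_high = tri(complexity, 10, 25, 100)
    # One rule, as an ID3 branch might yield after conversion:
    # IF loc IS high AND complexity IS high THEN module IS fault-prone.
    # min() is the usual t-norm for AND; the result is a degree in [0, 1].
    return min(loc_high, cx_high)

print(fault_proneness(loc=250, complexity=22))  # ~0.75 fault-proneness degree
```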

    Combined fault detection and classification of internal combustion engine using neural network

    Different faults in internal combustion engines lead to excessive fuel consumption, pollution, acoustic emission, and wear of engine components. Fault detection is also difficult for maintenance technicians due to the broad range of possible faults and their combinations. In this research, faults due to malfunction of the manifold absolute pressure sensor, the knock sensor, and misfire are detected and classified by analyzing vibration signals. The vibration signals acquired from the engine block were preprocessed by wavelet analysis, and signal energy was used as the distinguishing feature to classify these faults with a Multi-Layer Perceptron Neural Network (MLPNN). The designed MLPNN can classify these faults with almost 100% accuracy.
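
    A minimal sketch of the described pipeline, wavelet sub-band energies as features for an MLP classifier, is shown below; the wavelet family, decomposition level, network size, and the placeholder signals are assumptions.

```python
# Sketch: wavelet-energy features from vibration signals feeding an MLP
# classifier. Wavelet family, decomposition level, and network size are
# assumptions; `signals` / `labels` stand in for the real engine dataset.
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def wavelet_energy(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band: one feature per decomposition band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 1024))         # placeholder vibration windows
labels = rng.integers(0, 4, size=200)          # 4 classes: MAP, knock, misfire, healthy

X = np.array([wavelet_energy(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```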

    Predictive Maintenance Support System in Industry 4.0 Scenario

    The fourth industrial revolution being witnessed today, also known as Industry 4.0, is heavily related to the digitization of manufacturing systems and the integration of different technologies to optimize manufacturing. By combining data acquisition with specific sensors and machine learning algorithms that analyze this data to predict a failure before it happens, predictive maintenance is a critical tool for reducing downtime due to unpredicted stoppages caused by malfunctions. Based on the reality of the Commercial Specialty Tires factory at Continental Mabor - Indústria de Pneus, S.A., the present work describes several problems faced regarding equipment maintenance. Drawing on information gathered from studying the processes incorporated in the factory, a solution model for applying predictive maintenance to these processes is designed. The model is divided into two primary layers: hardware and software. On the hardware side, sensors and their respective applications are delineated. On the software side, data analysis techniques, namely machine learning algorithms, are described so that the collected data can be studied to detect possible failures.
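
    The abstract names no specific algorithm for the software layer, so as one illustrative possibility, the sketch below flags anomalous sensor readings with an IsolationForest trained on healthy history; the sensor features and values are hypothetical.

```python
# Sketch of the software layer: unsupervised anomaly detection on sensor
# readings as a simple predictive-maintenance trigger. The abstract names no
# specific algorithm; IsolationForest is an illustrative choice, and the
# sensor features below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: temperature, vibration RMS, motor current (placeholder history).
healthy_history = rng.normal(loc=[60.0, 0.5, 12.0], scale=[2.0, 0.05, 0.5], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=1).fit(healthy_history)

new_reading = np.array([[78.0, 0.9, 15.5]])    # drifting machine state
if model.predict(new_reading)[0] == -1:        # -1 = flagged as anomaly
    print("anomaly score:", model.score_samples(new_reading)[0])
    print("-> schedule maintenance inspection")
```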

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and fielded NASA applications, particularly those enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Empirical analysis of software reliability

    This thesis presents an empirical study of architecture-based software reliability based on large real case studies. It undoubtedly demonstrates the value of using open source software to empirically study software reliability. The major goal is to empirically analyze the applicability, adequacy and accuracy of architecture-based software reliability models. In both our studies we found evidence that the number of failures due to faults in more than one component is not insignificant. Consequently, existing models that make such simplifying assumptions must be improved to account for this phenomenon. This thesis' contributions include developing automatic methods for efficient extraction of necessary data from the available repositories, and using this data to test how and when architecture-based software reliability models work. We study their limitations and ways to improve them. Our results show the importance of knowledge gained from the interaction between theoretical and empirical research.

    Identifying Common Patterns and Unusual Dependencies in Faults, Failures and Fixes for Large-scale Safety-critical Software

    As software evolves, becoming a more integral part of complex systems, modern society becomes more reliant on the proper functioning of such systems. However, the field of software quality assurance lacks detailed empirical studies from which best practices can be determined. The fundamental factors that contribute to software quality are faults, failures, and fixes, and although some studies have considered specific aspects of each, comprehensive studies have been quite rare. Thus, the fact that we establish the cause-effect relationship between the fault(s) that caused individual failures, as well as the link to the fixes made to prevent the failures from (re)occurring, appears to be a unique characteristic of our work. In particular, we analyze fault types, verification activities, severity levels, investigation effort, artifacts fixed, components fixed, and the effort required to implement fixes for a large industrial case study. The analysis includes descriptive statistics, statistical inference through formal hypothesis testing, and data mining. Some of the most interesting empirical results include: (1) contrary to popular belief, later life-cycle faults dominate as causes of failures, and over 50% of high-priority failures (e.g., post-release failures and safety-critical failures) were caused by coding faults; (2) 15% of failures led to fixes spread across multiple components, and the spread was largely affected by the software architecture; (3) the amount of effort spent fixing faults associated with each failure was not uniformly distributed across failures; fixes with a greater spread across components and artifacts required more effort. Overall, the work indicates that fault prevention and elimination efforts focused on later life-cycle faults are essential, as coding faults were the dominant cause of safety-critical and post-release failures. Further, statistical correlation and/or traditional data mining techniques show potential for assessment and prediction of the locations of fixes and the associated effort. By providing quantitative results and including statistical hypothesis testing, which is not yet a standard practice in software engineering, our work enriches the empirical knowledge needed to improve the state of the art and practice in software quality assurance.
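
    The abstract mentions formal hypothesis testing without naming the tests. One plausible instance, sketched below on made-up counts, is a chi-square test of independence between fault type and failure priority.

```python
# Sketch: chi-square test of independence between fault type and failure
# priority, the kind of formal hypothesis test the study describes. The
# contingency counts below are made up for illustration.
from scipy.stats import chi2_contingency

#                 high-priority   low-priority
table = [[52, 48],    # coding faults
         [20, 80],    # requirements faults
         [11, 89]]    # design faults

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("reject independence: fault type and failure priority are associated")
```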

    Localizing State-Dependent Faults Using Associated Sequence Mining

    In this thesis we developed a new fault localization process to localize faults in object-oriented software. The process is built upon the "Encapsulation" principle and aims to locate state-dependent discrepancies in the software's behavior. We experimented with the proposed process on 50 seeded faults in 8 subject programs, and were able to locate the faulty class in 100% of the cases when objects with constant states were taken into consideration, while we missed 24% of the faults when these objects were not considered. We also developed a customized data mining technique, "Associated sequence mining", to be used in the localization process; experiments showed that it provided only a slight enhancement to the results of the process. The customization provided at least a 17% improvement in time performance and is generic enough to be applicable in other domains. In addition, we developed an extensive taxonomy of object-oriented software faults based on UML models. We used the taxonomy to make decisions regarding the localization process; it provides an aid for understanding the nature of software faults and will help enhance the different tasks related to software quality assurance. The main contributions of the thesis were based on preliminary experimentation on the usability of the classification algorithms implemented in WEKA for software fault localization, which led to the conclusion that both the fault type and the mechanism implemented in the analysis algorithm significantly affect the results of the localization.
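
    The abstract does not detail the "Associated sequence mining" algorithm itself, so the sketch below shows only a generic flavor of sequence-based fault localization: scoring call-sequence bigrams by how strongly they associate with failing runs. All traces and scores are hypothetical.

```python
# Generic illustration of sequence-based fault localization (not the thesis's
# "Associated sequence mining" algorithm, which the abstract does not detail):
# score call-sequence bigrams by how strongly they associate with failing runs.
from collections import Counter

passing_traces = [["open", "read", "close"], ["open", "read", "read", "close"]]
failing_traces = [["open", "seek", "read"], ["open", "seek", "close"]]

def bigrams(trace):
    return zip(trace, trace[1:])

def counts(traces):
    c = Counter()
    for t in traces:
        c.update(set(bigrams(t)))          # count each bigram once per trace
    return c

fail, ok = counts(failing_traces), counts(passing_traces)
for seq in fail:
    # Suspiciousness: fraction of failing runs containing the sequence minus
    # the fraction of passing runs containing it.
    score = fail[seq] / len(failing_traces) - ok.get(seq, 0) / len(passing_traces)
    print(seq, f"suspiciousness = {score:.2f}")
```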