9 research outputs found

    Using Bayesian optimization algorithm for model-based integration testing


    Geometric Approaches to Statistical Defect Prediction and Learning

    Software quality is closely tied to the number of defects in software systems. As the complexity of software increases, manual inspection becomes prohibitively expensive. Defect prediction is therefore of paramount importance to project managers: it helps allocate limited resources effectively and supports accurate estimation of project costs and schedules. This thesis addresses defect prediction and learning in a geometric framework using statistical quality control and genetic algorithms. A software defect prediction model based on the geometric concept of operating characteristic curves is proposed. The main idea behind this predictor is to use geometric insight to construct an efficient method for reliably predicting the cumulative number of defects during the software development process. The approach is validated on real data from actual software projects, and the experimental results demonstrate much improved performance of the proposed statistical method in predicting defects. In the same vein, two defect learning predictors based on evolutionary algorithms are also proposed. These predictors use genetic programming as a feature construction method. The first predictor constructs new features based primarily on the geometric characteristics of the original data; an independent classifier is then applied and the performance of the constructed features is measured. The second predictor uses a built-in classifier that is automatically tuned for the constructed features. Experimental results on a NASA static-metric dataset demonstrate the feasibility of the proposed genetic programming based approaches.
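    The feature-construction-then-classify pipeline described above can be illustrated with a deliberately simplified sketch: candidate features are small arithmetic expressions over the original static metrics, scored by how well a single threshold separates defective from non-defective modules. The metric values, candidate expressions, and scoring rule here are all invented for illustration; the thesis evolves such expressions with genetic programming rather than enumerating a fixed set.

```python
# Toy static-metric data (hypothetical values): each row is
# (lines_of_code, cyclomatic_complexity, num_operands); label 1 = defective.
X = [(120, 3, 40), (300, 10, 150), (80, 2, 25), (500, 15, 310),
     (200, 4, 90), (450, 12, 260), (60, 1, 15), (350, 9, 180)]
y = [0, 1, 0, 1, 0, 1, 0, 1]

# Candidate constructed features: small expressions over the original
# metrics, enumerated by hand here rather than evolved by GP.
CANDIDATES = {
    "loc": lambda m: m[0],
    "loc*cc": lambda m: m[0] * m[1],
    "ops/loc": lambda m: m[2] / m[0],
    "cc+ops": lambda m: m[1] + m[2],
}

def threshold_accuracy(values, labels):
    """Best accuracy of a single-threshold classifier on one feature."""
    best = 0.0
    for t in values:
        acc = sum((v >= t) == bool(l)
                  for v, l in zip(values, labels)) / len(labels)
        best = max(best, acc)
    return best

def best_feature(X, y):
    """Score every candidate feature and return the best one."""
    scored = {name: threshold_accuracy([f(m) for m in X], y)
              for name, f in CANDIDATES.items()}
    return max(scored, key=scored.get), scored

name, scores = best_feature(X, y)
print(name, scores[name])
```

    In the thesis's first predictor this constructed feature would be handed to an independent classifier; the single-threshold score above stands in for that step.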

    Applications of Bayesian networks and Petri nets in safety, reliability, and risk assessments: A review

    System safety, reliability and risk analysis are important tasks performed throughout the system lifecycle to ensure the dependability of safety-critical systems. Probabilistic risk assessment (PRA) approaches are comprehensive, structured and logical methods widely used for this purpose. PRA approaches include, but are not limited to, Fault Tree Analysis (FTA), Failure Mode and Effects Analysis (FMEA), and Event Tree Analysis (ETA). The growing complexity of modern systems and their capability to behave dynamically make it challenging for classical PRA techniques to analyse such systems accurately. A comprehensive and accurate analysis of complex systems must consider characteristics such as functional dependencies among components, temporal behaviour of systems, multiple failure modes/states for components/systems, and uncertainty in system behaviour and failure data. Unfortunately, classical approaches cannot account for these aspects. Bayesian networks (BNs) have gained popularity in risk assessment applications due to their flexible structure and their ability to incorporate most of the above-mentioned aspects during analysis. Furthermore, BNs can perform diagnostic analysis. Petri nets (PNs) are another formal graphical and mathematical tool capable of modelling and analysing the dynamic behaviour of systems, and they are also increasingly used for system safety, reliability and risk evaluation. This paper presents a review of the applications of Bayesian networks and Petri nets in system safety, reliability and risk assessments. The review highlights the potential usefulness of BN- and PN-based approaches over classical approaches, and their relative strengths and weaknesses in different practical application scenarios. This work was funded by the DEIS H2020 project (Grant Agreement 732242).
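    To make the contrast with classical fault trees concrete, here is a minimal sketch of exact inference in a three-node Bayesian network: a power-supply failure influences component A (a functional dependency a static fault tree cannot express), component B fails independently, and the system fails if either component fails. All probabilities are illustrative and not taken from the review. The same enumeration also answers the diagnostic query P(PSU failed | system failed).

```python
from itertools import product

# CPTs for a three-node BN (illustrative numbers):
# PSU failure -> component A failure; component B fails independently;
# the system fails if A or B fails (an OR relationship).
P_PSU = 0.01
P_A_GIVEN_PSU = {True: 0.9, False: 0.05}  # A's failure prob. given PSU state
P_B = 0.02

def joint(psu, a, b):
    """Joint probability of one complete assignment of the network."""
    p = P_PSU if psu else 1 - P_PSU
    p *= P_A_GIVEN_PSU[psu] if a else 1 - P_A_GIVEN_PSU[psu]
    p *= P_B if b else 1 - P_B
    return p

def query(event, evidence=lambda s: True):
    """P(event | evidence) by exhaustive enumeration of the joint."""
    num = den = 0.0
    for psu, a, b in product([True, False], repeat=3):
        state = {"psu": psu, "a": a, "b": b, "system": a or b}
        p = joint(psu, a, b)
        if evidence(state):
            den += p
            if event(state):
                num += p
    return num / den

p_fail = query(lambda s: s["system"])                       # predictive query
p_psu_diag = query(lambda s: s["psu"],
                   evidence=lambda s: s["system"])          # diagnostic query
print(p_fail, p_psu_diag)
```

    The diagnostic posterior is well above the 1% prior on PSU failure, which is the kind of backward (effect-to-cause) reasoning the review credits BNs with and classical FTA lacks.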

    Risk analysis of autonomous vehicle and its safety impact on mixed traffic stream

    In 2016, more than 35,000 people died in traffic crashes, and human error was the reason for 94% of these deaths. Researchers and automobile companies are testing autonomous vehicles in mixed traffic streams to eliminate human error by removing the human driver from behind the steering wheel. However, recent crashes of autonomous vehicles during testing indicate the necessity of a more thorough risk analysis. The objectives of this study were (1) to perform a risk analysis of autonomous vehicles and (2) to evaluate the safety impact of these vehicles in a mixed traffic stream. The overall research was divided into two phases: (1) risk analysis and (2) simulation of autonomous vehicles. Risk analysis of autonomous vehicles was conducted using the fault tree method. Based on failure probabilities of system components, two fault tree models were developed and combined to predict overall system reliability. It was found that an autonomous vehicle system could fail 158 times per million miles of travel due to either malfunction in vehicular components or disruption from infrastructure components. The second phase of this research was the simulation of an autonomous vehicle, in which the change in crash frequency after autonomous vehicle deployment in a mixed traffic stream was assessed. It was found that average travel time could be reduced by about 50%, and 74% of conflicts, i.e., traffic crashes, could be avoided by replacing 90% of the human drivers with autonomous vehicles.
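    The fault-tree arithmetic behind the first phase can be sketched as a top-level OR gate over the two sub-trees (vehicular and infrastructure components). The individual basic-event probabilities below are hypothetical, chosen only so that the combined rate matches the abstract's figure of roughly 158 failures per million miles; the study's actual trees contain many more events.

```python
def or_gate(probs):
    """P(top event) for independent basic events under an OR gate:
    1 - product of the survival probabilities."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Hypothetical per-mile basic-event failure probabilities.
vehicle = [40e-6, 30e-6, 20e-6]      # e.g. sensing, software, actuation
infrastructure = [50e-6, 18e-6]      # e.g. lane markings, connectivity

p_vehicle = or_gate(vehicle)
p_infra = or_gate(infrastructure)
# Either sub-tree's top event failing fails the whole system.
p_system = or_gate([p_vehicle, p_infra])

print(round(p_system * 1e6), "failures per million miles")
```

    Because the per-mile probabilities are tiny, the OR gate is nearly additive here (cross terms are on the order of 1e-9), which is why the result lands essentially on the sum of the basic-event probabilities.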

    Statistical modeling of product purchases and new measures for brand loyalty

    Master's thesis (Master of Science).

    A Bayesian Network Approach for Product Safety Risk Management

    A new method for safety risk management and assessment using Bayesian networks is proposed to resolve limitations of existing methods and to ensure that products and systems available on the market are acceptably safe for use. The method is applicable to a wide range of products and systems, from consumer goods through medical devices to complex systems such as aircraft. While methods such as Fault Tree Analysis (FTA) and Failure Mode and Effects Analysis (FMEA) have been used quite effectively in safety assessment for certain classes of critical systems, they have several limitations that are addressed by the proposed Bayesian network (BN) method. In particular, the BN approach enables us to combine multiple sources of knowledge and data to provide quantified, auditable risk estimates at all stages of a product’s life cycle, especially when there are limited or no testing or operational safety data available. The BN approach also enables us to incorporate different perceptions of risk, including personal differences in the perceived benefits of the product under assessment. The proposed BN approach provides a means for safety regulators, manufacturers, risk professionals, and even individuals to better assess safety and risk. It is powerful and flexible, can complement traditional safety and risk assessment methods, and is applicable to a far greater range of products and systems. The method can also be used to validate the results of traditional safety and risk assessment methods when relevant data become available. It is demonstrated and validated using case studies from consumer product safety risk assessment and medical device risk management.
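    One core idea in the abstract, combining prior expert knowledge with limited operational safety data, can be illustrated independently of the paper's BN structure by a conjugate Beta-Binomial update: an expert prior on a per-use incident probability is revised by sparse trial data. The prior and trial numbers below are hypothetical.

```python
def beta_update(alpha, beta, incidents, trials):
    """Conjugate posterior of a Beta(alpha, beta) prior on an incident
    probability after observing `incidents` out of `trials` uses."""
    return alpha + incidents, beta + (trials - incidents)

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Expert prior: roughly 1 incident per 1000 uses (hypothetical numbers).
a, b = 1.0, 999.0
prior_estimate = beta_mean(a, b)

# A small safety trial: 0 incidents observed in 200 uses.
a, b = beta_update(a, b, incidents=0, trials=200)
posterior_estimate = beta_mean(a, b)

print(prior_estimate, posterior_estimate)
```

    Even this tiny trial shifts the estimate downward without discarding the expert prior, which is the behaviour the paper seeks when operational data are scarce; the full BN method additionally structures many such quantities and their dependencies.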

    Bayesian statistical models for predicting software development effort

    Constructing an accurate effort prediction model is a challenge in software engineering. This paper presents new Bayesian statistical models for predicting the development effort of software systems in the International Software Benchmarking Standards Group (ISBSG) dataset. The first is a Bayesian linear regression (BR) model and the second is a Bayesian multivariate normal distribution (BMVN) model. Both models are calibrated using subsets randomly sampled from the dataset, and their predictive accuracy is evaluated on other subsets consisting only of cases unknown to the models. Predictive accuracy is measured in terms of absolute residuals and magnitude of relative error, and compared with the corresponding linear regression models. The results show that, in general, the Bayesian models have predictive accuracy equivalent to the linear regression models. However, the Bayesian models do not require a calibration subset as large as their regression counterparts. For the ISBSG dataset, the predictive accuracy of the Bayesian models, in particular the BMVN model, is significantly better than that of the linear regression model when the calibration subset consists of five or fewer software systems. This finding justifies the use of Bayesian statistical models in software effort prediction, particularly when the system of interest has only a very small amount of historical data.
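    The BR model's central claim, that a Bayesian approach remains usable with very small calibration subsets, can be sketched with a one-coefficient Bayesian linear regression under a conjugate Normal prior and known noise variance. The sizes, efforts, and prior values below are invented and not from the ISBSG dataset; the paper's actual models are richer.

```python
def posterior(x, y, prior_mean, prior_var, noise_var):
    """Posterior N(mean, var) of w in the model y = w*x + N(0, noise_var),
    given a conjugate prior w ~ N(prior_mean, prior_var)."""
    prec = 1.0 / prior_var + sum(xi * xi for xi in x) / noise_var
    mean = (prior_mean / prior_var
            + sum(xi * yi for xi, yi in zip(x, y)) / noise_var) / prec
    return mean, 1.0 / prec

# Tiny calibration set: (size in function points, effort in person-hours).
sizes = [100.0, 250.0, 400.0]
efforts = [820.0, 1900.0, 3300.0]

# Prior belief: about 8 hours per function point, fairly uncertain.
w_mean, w_var = posterior(sizes, efforts, prior_mean=8.0, prior_var=4.0,
                          noise_var=200.0 ** 2)

print(w_mean)          # posterior productivity estimate (hours per FP)
print(w_mean * 300.0)  # predicted effort for a 300-FP project
```

    With only three calibration points the prior still contributes, but the data dominate; as the calibration subset shrinks further the prior carries more weight, which is exactly why the Bayesian models degrade more gracefully than plain regression on tiny subsets.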
