
    Software reliability prediction using neural network

    Software reliability prediction is an essential part of software engineering. Software reliability assessment is the most important factor in quantitatively characterising the quality of a software product during the testing phase. Many analytical models have been proposed over the years for assessing the reliability of a software system and for modelling software reliability growth trends, with different prediction capabilities at different testing phases. What is still needed is a single model that gives comparatively better predictions in all conditions and situations; the neural network (NN) approach is introduced for this purpose. This thesis describes the applicability of NN-based models to reliability prediction in a real environment and presents a method for assessing software reliability growth with an NN model. Two types of NN are used: a feed-forward neural network and a recurrent neural network. Both networks are trained with the back-propagation learning algorithm, and the related network architecture issues, data representation methods, and some unrealistic assumptions associated with software reliability models are discussed. The proposed models are applied to several software-failure datasets obtained from different software projects. The results indicate a significant improvement in performance of the neural network models over conventional statistical models based on the non-homogeneous Poisson process.
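The core training loop of the abstract above — a small feed-forward network fitted with back-propagation to a cumulative-failure series — can be sketched as follows. The failure counts, network size, and learning rate here are illustrative assumptions, not values from the thesis:

```python
import math, random

# Toy cumulative failure counts, scaled to [0, 1] (hypothetical data).
failures = [0.05, 0.12, 0.20, 0.31, 0.40, 0.52, 0.61, 0.70, 0.78, 0.85]

# Single-hidden-layer feed-forward network: 1 input (normalised test-time
# index), H tanh hidden units, 1 linear output (cumulative failures).
H = 4
random.seed(0)
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

lr = 0.1
for epoch in range(5000):
    for t, target in enumerate(failures):
        x = t / len(failures)
        h, y = forward(x)
        err = y - target                 # dE/dy for E = 0.5 * (y - target)^2
        for i in range(H):
            dh = err * w2[i] * (1 - h[i] ** 2)   # back-prop through tanh
            w2[i] -= lr * err * h[i]
            b1[i] -= lr * dh
            w1[i] -= lr * dh * x
        b2 -= lr * err

# After training, the in-sample fit at t = 5 should be near the target 0.52.
_, y_mid = forward(5 / len(failures))
print(round(y_mid, 2))
```

A recurrent variant would additionally feed the previous hidden state back into the input, giving the internal memory the thesis exploits.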

    Software quality and reliability prediction using Dempster-Shafer theory

    As software systems are increasingly deployed in mission-critical applications, accurate quality and reliability predictions are becoming a necessity. The most accurate prediction models require extensive testing effort, implying increased cost and a slower development life cycle. We developed two novel statistical models based on Dempster-Shafer theory, which provide accurate predictions from relatively small data sets of direct and indirect software reliability and quality predictors. The models are flexible enough to incorporate information generated throughout the development life cycle to improve prediction accuracy.

    Our first contribution is an original algorithm for building Dempster-Shafer belief networks using prediction logic. This model has been applied to software quality prediction. We demonstrated that the prediction accuracy of Dempster-Shafer belief networks is higher than that achieved by logistic regression, discriminant analysis, random forests, and the algorithms in two machine learning software packages, See5 and WEKA. The performance advantage of the Dempster-Shafer belief networks over the other methods is statistically significant.

    Our second contribution is also a practical extension of Dempster-Shafer theory. The major limitation of Dempster's rule and other known rules of evidence combination is their inability to handle information coming from correlated sources. Motivated by the inherently high correlations between early life-cycle predictors of software reliability, we extended Murphy's rule of combination to account for these correlations. When used as part of a methodology that fuses various software reliability prediction systems, this rule provided more accurate predictions than previously reported methods. In addition, we proposed an algorithm that defines the upper and lower bounds of the belief function of the combination results. To demonstrate its generality, we successfully applied it in the design of the Online Safety Monitor, which fuses multiple correlated, time-varying estimates of the convergence of neural network learning in an intelligent flight control system.
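For readers unfamiliar with evidence combination, the standard Dempster's rule the abstract extends can be sketched for a two-hypothesis frame ("defective" vs. "clean"). The mass values below are illustrative, not from the dissertation:

```python
from itertools import product

# Frame of discernment: a software module is defective (D) or clean (C).
# A mass function maps hypotheses (sets) to belief mass summing to 1.
D, C = frozenset({'d'}), frozenset({'c'})
BOTH = D | C  # total ignorance

m1 = {D: 0.6, C: 0.1, BOTH: 0.3}   # e.g. evidence from code metrics
m2 = {D: 0.5, C: 0.2, BOTH: 0.3}   # e.g. evidence from testing history

def dempster_combine(ma_fn, mb_fn):
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(ma_fn.items(), mb_fn.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    # Normalise by 1 - K, redistributing the conflicting mass K.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

m = dempster_combine(m1, m2)
print({tuple(sorted(h)): round(v, 3) for h, v in m.items()})
```

Note how the rule silently renormalises away the conflict mass K; it is exactly this step (and the independence assumption behind the product) that breaks down for correlated sources, which is what motivates the extended rule in the abstract.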

    Dynamic learning with neural networks and support vector machines

    The neural network approach has been proven to be a universal approximator of nonlinear continuous functions to arbitrary accuracy, and it has been very successful for various learning and prediction tasks. However, supervised learning with neural networks has limitations: the black-box nature of the solutions, trial-and-error network parameter selection, the danger of overfitting, and convergence to local rather than global minima. In certain applications, fixed neural network structures also fail to address the effect on prediction performance as the amount of available data increases. Three new approaches are proposed to address these limitations of supervised learning with neural networks and improve prediction accuracy.

    Dynamic learning model using an evolutionary connectionist approach. In certain applications, the amount of available data increases over time. An optimization process determines the number of input neurons and the number of neurons in the hidden layer. The resulting globally optimized neural network structure is iteratively and dynamically reconfigured and updated as new data arrive, improving prediction accuracy.

    Improving generalization capability using a recurrent neural network and Bayesian regularization. A recurrent neural network has the inherent capability of developing an internal memory, which may naturally extend beyond the externally provided lag space. Moreover, by adding a penalty term on the sum of connection weights, the Bayesian regularization approach is applied to the network training scheme to improve generalization performance and reduce the susceptibility to overfitting.

    Adaptive prediction model using support vector machines. The learning process of support vector machines minimizes an upper bound on the generalization error, comprising the empirical training error plus a regularized confidence interval, which results in better generalization performance. This learning process is iteratively and dynamically updated after every arrival of new data in order to capture the most current features hidden inside the data sequence.

    All the proposed approaches have been successfully applied and validated on applications in software reliability prediction and electric power load forecasting. Quantitative results show that the proposed approaches achieve better prediction accuracy than existing approaches.
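The "iteratively and dynamically updated" scheme in the abstract — refit on the most recent data each time a new observation arrives, then forecast one step ahead — can be illustrated with a sliding window. A plain least-squares line stands in here for the support vector machine of the dissertation, and the load trace is made up:

```python
# Sliding-window retraining: as each new observation arrives, refit a
# simple least-squares line on the W most recent points and predict the
# next value. (A linear model is used in place of the SVM of the text.)

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

load = [50, 52, 55, 54, 58, 60, 63, 62, 66, 69]   # hypothetical hourly load
W = 5
preds = []
for t in range(W, len(load)):
    xs = list(range(t - W, t))
    slope, icpt = fit_line(xs, load[t - W:t])     # retrain on latest window
    preds.append(slope * t + icpt)                # one-step-ahead forecast

print([round(p, 1) for p in preds])
```

The window bounds how far back the model looks, so the fit always reflects the most recent regime of the sequence, which is the point of the adaptive formulation.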

    Software defect prediction with an ensemble of neural networks

    Defect prediction is one of the key challenges in software development and programming language research for improving software quality and reliability. The problem in this area is to identify defective source code with high accuracy. This study describes the process of improving the accuracy of defect prediction in software modules and components using an ensemble of artificial neural networks. A combined data set obtained from the open PROMISE Software Engineering repository was used as input. The data set contained 15,123 records, each consisting of 21 code metrics and a target binary variable indicating defects in the given software module or project. This study describes the use and implementation of the four most common neural networks, namely the Multilayer Perceptron, a neural network based on radial-basis functions, a Recurrent Neural Network, and Long Short-Term Memory, for software defect prediction using the Python programming language and the open-source Keras library. The performance of the models depends significantly on the hyperparameter values, so the GridSearch function was used to determine the optimal architecture parameters of each neural network. The best prediction accuracy was achieved with the recurrent and Long Short-Term Memory neural networks. The use of a previously developed static software reliability model by the authors, which reduces the feature set to the seven most important metrics, increases the prediction accuracy of these methods to 89.97 % (for the RNN) and 91.16 % (for the LSTM). The study describes the integration of the developed neural networks into a stacked ensemble, which uses logistic regression as a meta-model (supervisor) to improve the prediction results. This ensemble increased the accuracy of software defect prediction to 93.97 % when the seven most important code metrics were used as features, and to 81.84 % when the whole feature set was used.

    Software defect prediction, in particular cross-project prediction, is a relevant and important applied research problem whose solution aims to improve the quality and reliability of software products and to reduce the cost of their development and maintenance. A promising approach to this problem is the use of artificial neural networks, in particular deep learning and ensembles of networks. Ensembling can often improve the prediction accuracy of models and parallelise the resulting model, increasing computation speed. In this study, a deep neural network architecture was built that achieves higher software defect prediction accuracy than traditional machine learning models. A combined data set obtained from the PROMISE Software Engineering repository was used as input; it contains testing data for the software modules of five programs (KC1, KC2, PC1, CM1, JM1) and twenty-one code metrics. The neural networks were implemented in the Python programming language with the open-source Keras library, and automated tuning of the network hyperparameters was carried out with the GridSearchCV function. A software reliability prediction model based on deep learning methods was developed, and it was shown that a defect prediction accuracy of up to 93.97 % can be achieved by an appropriate choice of the feature set (code metrics), followed by a stacked ensemble of neural networks comprising a multilayer perceptron (MLP), a radial-basis-function neural network (RBFNN), a recurrent neural network (RNN), and long short-term memory (LSTM), with logistic regression as the meta-model. The implemented stacked ensemble will make it possible to build a software tool that helps identify the software components most likely to contain defects.
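The stacking scheme the abstract describes — base learners score each sample, and a logistic-regression meta-model learns to combine the scores — can be sketched in miniature. The data and the two base learners below are synthetic stand-ins for the code-metric features and trained neural networks of the paper:

```python
import math, random

# Synthetic two-metric samples; label 1 = defective (illustrative only).
random.seed(1)
data = [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(50)] + \
       [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)]
random.shuffle(data)

# Two weak base learners, each looking at only one metric.
base1 = lambda x: 1 / (1 + math.exp(-(x[0] - 1)))
base2 = lambda x: 1 / (1 + math.exp(-(x[1] - 1)))

# Meta-features: base-learner outputs for every sample.
meta_X = [(base1(x), base2(x)) for x, _ in data]
labels = [y for _, y in data]

# Train the logistic-regression meta-model with plain gradient descent.
w = [0.0, 0.0]; b = 0.0; lr = 0.5
for _ in range(2000):
    for (f1, f2), y in zip(meta_X, labels):
        p = 1 / (1 + math.exp(-(w[0] * f1 + w[1] * f2 + b)))
        g = p - y                       # gradient of the log loss
        w[0] -= lr * g * f1; w[1] -= lr * g * f2; b -= lr * g

correct = sum(
    (1 / (1 + math.exp(-(w[0] * f1 + w[1] * f2 + b))) > 0.5) == (y == 1)
    for (f1, f2), y in zip(meta_X, labels))
acc = correct / len(labels)
print(acc)
```

In the paper's setting the base learners are the trained MLP, RBFNN, RNN, and LSTM models; a proper evaluation would also hold out a test split rather than scoring in-sample as this sketch does.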

    An estimate of necessary effort in the development of software projects

    International Workshop on Intelligent Technologies for Software Engineering (WITSE'04), held at the 19th IEEE International Conference on Automated Software Engineering (Linz, Austria, September 20-25, 2004). Estimating the effort required to develop software projects has long been studied in software engineering. Measures such as lines of code and function points have generally been used to relate software size to project cost (effort). In this work we present a research project in this field that uses machine learning techniques to predict software project cost. Several public data sets are used. The analysed data sets relate only the effort invested in developing a software project to the size of the resulting code, so the data can be considered poor. Despite that, the results obtained are good, as they improve on those obtained in previous analyses. To get results closer to reality, we should find larger data sets that take more variables into account, offering more possibilities to obtain solutions more efficiently.
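The size-to-effort relationship the abstract works with is classically modelled as a power law, effort = a * size^b, fitted by least squares in log-log space. The (KLOC, person-month) pairs below are made-up illustrative data, not from the paper's data sets:

```python
import math

# Hypothetical projects: (size in KLOC, effort in person-months).
projects = [(2, 5.0), (5, 14.0), (10, 31.0), (20, 68.0), (50, 185.0)]

# Fit effort = a * size^b by ordinary least squares on the logs:
# log(effort) = log(a) + b * log(size).
lx = [math.log(s) for s, _ in projects]
ly = [math.log(e) for _, e in projects]
n = len(projects)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
    sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)

predict = lambda kloc: a * kloc ** b   # effort estimate for a new project
print(round(b, 2), round(predict(30), 1))
```

An exponent b > 1, as this toy data yields, encodes the diseconomy of scale that size-based cost models typically assume; the machine learning techniques of the paper replace this fixed functional form with a learned one.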

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior, which can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we present a discussion of the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
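The proactive DVFS pattern mentioned in the abstract — predict the next interval's load, then pick a frequency mode before the interval starts — can be sketched with the simplest possible predictor, an exponentially weighted moving average. The thresholds, mode names, and utilisation trace are illustrative assumptions, not from any surveyed system:

```python
# Proactive DVFS sketch: an exponentially weighted moving average (EWMA)
# predicts the next interval's CPU utilisation, and a threshold policy
# selects a frequency mode ahead of time.

ALPHA = 0.5                                   # weight on the newest sample
MODES = [(0.3, 'low'), (0.7, 'mid'), (1.01, 'high')]   # (upper bound, mode)

def pick_mode(predicted_util):
    for threshold, mode in MODES:
        if predicted_util < threshold:
            return mode

utilisation = [0.2, 0.25, 0.6, 0.65, 0.9, 0.85, 0.4]   # observed per interval
pred = utilisation[0]
decisions = []
for u in utilisation[1:]:
    pred = ALPHA * u + (1 - ALPHA) * pred     # update predictor with sample
    decisions.append(pick_mode(pred))         # choose mode for next interval

print(decisions)
```

The survey's point is precisely that such simple forecasters are increasingly replaced by learned predictors and classifiers, but the control structure — predict, then act before the event — stays the same.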