
    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on coding errors alone to guarantee analytic tractability, which causes these models to produce overly optimistic estimates. To obtain reliable estimates, however, it is important to take into account non-software factors, such as human error and hardware failure, in addition to software faults. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criteria values in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conducted a case study of software failure data from four actual projects and compared the results using an estimation accuracy function. Particle swarm optimization was shown to be the most effective optimization method for the hybrid model constructed from large-scale fault data.
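
    The abstract does not reproduce the hybrid model's functional form or its estimation accuracy function, so the sketch below only illustrates the general workflow it describes: estimating a reliability model's parameters from fault data by minimizing an accuracy criterion with particle swarm optimization. The Goel-Okumoto mean value function, the mean-squared-error criterion, the parameter bounds, and the synthetic fault data are stand-in assumptions.

```python
# Minimal sketch: fitting a software reliability growth model to cumulative
# fault data with particle swarm optimization (PSO). The Goel-Okumoto mean
# value function m(t) = a*(1 - exp(-b*t)) and the MSE criterion are stand-ins
# for the paper's hybrid model and estimation accuracy function.
import numpy as np

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t (Goel-Okumoto)."""
    return a * (1.0 - np.exp(-b * t))

def accuracy_objective(params, t, observed):
    """Estimation accuracy criterion: here, mean squared error."""
    a, b = params
    return np.mean((mean_value(t, a, b) - observed) ** 2)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """A compact global-best PSO; returns the best position and its cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([objective(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, objective(g)

# Hypothetical fault data: cumulative faults observed at the end of each week.
t = np.arange(1, 21, dtype=float)
observed = 120 * (1 - np.exp(-0.15 * t)) + np.random.default_rng(1).normal(0, 2, t.size)

best, cost = pso(lambda p: accuracy_objective(p, t, observed), bounds=[(1, 500), (1e-3, 1.0)])
print(f"estimated a={best[0]:.1f}, b={best[1]:.3f}, MSE={cost:.2f}")
```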

    Dynamic learning with neural networks and support vector machines

    The neural network approach has proven to be a universal approximator of nonlinear continuous functions to arbitrary accuracy, and it has been very successful for various learning and prediction tasks. However, supervised learning with neural networks has limitations: the black-box nature of their solutions, experimental selection of network parameters, the danger of overfitting, and convergence to local rather than global minima. In certain applications, fixed network structures also fail to address how prediction performance changes as the amount of available data grows. Three new approaches are proposed to address these limitations and improve prediction accuracy. (1) Dynamic learning model using an evolutionary connectionist approach: in applications where the amount of available data increases over time, an optimization process determines the number of input neurons and the number of neurons in the hidden layer, and the globally optimized network structure is iteratively and dynamically reconfigured and updated as new data arrive. (2) Improving generalization capability using a recurrent neural network and Bayesian regularization: a recurrent neural network has the inherent capability of developing an internal memory, which may naturally extend beyond the externally provided lag spaces; in addition, by adding a penalty term on the sum of connection weights, Bayesian regularization is applied to the training scheme to improve generalization performance and lower the susceptibility to overfitting. (3) Adaptive prediction model using support vector machines: the learning process of support vector machines minimizes an upper bound on the generalization error, consisting of the empirical training error plus a regularized confidence interval, which eventually results in better generalization performance; this learning process is iteratively and dynamically updated after every arrival of new data in order to capture the most recent features hidden in the data sequence. All the proposed approaches have been successfully applied and validated on software reliability prediction and electric power load forecasting. Quantitative results show that they achieve better prediction accuracy than existing approaches.
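
    As an illustration of the third idea, the adaptive prediction model, the sketch below refits a support vector regressor on lagged observations every time a new data point arrives, so the predictor tracks the most recent behaviour of the series. The lag length, kernel, and SVR hyperparameters are assumptions rather than values from the thesis.

```python
# Illustrative "adaptive prediction" loop: an SVR over lagged observations is
# refit after each new data point so the model reflects all data seen so far.
import numpy as np
from sklearn.svm import SVR

def make_lagged(series, lags):
    """Build (X, y) pairs where X holds the previous `lags` values of the series."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = np.array(series[lags:])
    return X, y

lags = 4
stream = list(np.cumsum(np.random.default_rng(0).poisson(3, 60)))  # e.g. cumulative failures

predictions = []
for t in range(30, len(stream)):            # start once enough history is available
    X, y = make_lagged(stream[:t], lags)    # refit on all data seen so far
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    next_input = np.array(stream[t - lags:t]).reshape(1, -1)
    predictions.append(model.predict(next_input)[0])

errors = np.array(stream[30:]) - np.array(predictions)
print("one-step-ahead RMSE:", float(np.sqrt(np.mean(errors ** 2))))
```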

    Analysis of an inflection s-shaped software reliability model considering log-logistic testing-effort and imperfect debugging

    Gokhale and Trivedi (1998) proposed the log-logistic software reliability growth model, which can capture the increasing/decreasing nature of the failure occurrence rate per fault. In this paper, we first show that a log-logistic testing-effort function (TEF) can be expressed as a software development/testing-effort expenditure curve. We then investigate how to incorporate the log-logistic TEF into inflection S-shaped software reliability growth models based on a non-homogeneous Poisson process (NHPP). The model parameters are estimated by least squares estimation (LSE) and maximum likelihood estimation (MLE), and the methods of data analysis and comparison criteria are presented. Experimental results on actual failure data show a good fit, and a comparative analysis evaluating the effectiveness of the proposed model against other existing models is also performed. The results show that the proposed models give better predictions; the log-logistic TEF is therefore suitable for incorporation into inflection S-shaped NHPP growth models. In addition, the proposed models are discussed under an imperfect debugging environment.
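
    A minimal sketch of the model family described above, using parameterizations commonly found in the SRGM literature; the paper's exact functional forms and estimated parameter values may differ, and the values below are purely illustrative.

```python
# Assumed standard forms (the paper's exact parameterization may differ):
#   Log-logistic testing-effort function: W(t) = N * (lam*t)**k / (1 + (lam*t)**k)
#   Inflection S-shaped NHPP mean value function driven by that effort:
#       m(t) = a * (1 - exp(-r*(W(t) - W(0)))) / (1 + psi*exp(-r*(W(t) - W(0))))
# In practice the parameters would be estimated from failure data by LSE or MLE.
import numpy as np

def log_logistic_effort(t, N, lam, k):
    """Cumulative testing effort consumed by time t (log-logistic TEF)."""
    x = (lam * t) ** k
    return N * x / (1.0 + x)

def inflection_s_shaped_mvf(t, a, r, psi, N, lam, k):
    """Expected cumulative number of faults detected by time t."""
    w = log_logistic_effort(t, N, lam, k) - log_logistic_effort(0.0, N, lam, k)
    e = np.exp(-r * w)
    return a * (1.0 - e) / (1.0 + psi * e)

t = np.linspace(0.0, 40.0, 5)
print(inflection_s_shaped_mvf(t, a=150, r=0.08, psi=2.0, N=60, lam=0.1, k=2.0))
```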

    Reliability improvement and assessment of safety critical software

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering; and (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (leaves 95-101). In order to allow the introduction of safety-related Digital Instrumentation and Control (DI&C) systems in nuclear power plants, the software used by these systems must be demonstrated to be highly reliable. The most widely used and most powerful method for ensuring high software quality and reliability is testing. This thesis develops an integrated methodology for reliability assessment and improvement of safety-critical software through testing, based on input-domain reliability modeling and structural testing. The purpose of the methodology is twofold. First, it can be used to control the testing process: it provides path selection criteria and stopping criteria aimed at achieving maximum reliability improvement with the available testing resources. Second, it can be used to assess and quantify the reliability of the software after testing: it provides a systematic mechanism to quantify the reliability and estimate its uncertainty once testing is complete. By Yu Sui. S.M.
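
    The thesis's actual input-domain model and path-selection criteria are not given in this record, so the sketch below only illustrates a generic input-domain reliability calculation of the kind described: subdomains weighted by an operational profile, per-subdomain failure probabilities estimated from test outcomes, and an overall reliability figure. All names and numbers are hypothetical.

```python
# Generic input-domain reliability estimate: split the input domain into
# subdomains, estimate each subdomain's failure probability from its test
# results (uniform Beta(1, 1) prior), and weight by the operational profile.
from dataclasses import dataclass

@dataclass
class Subdomain:
    name: str
    profile_prob: float   # probability an operational input falls in this subdomain
    tests: int            # number of test cases executed in this subdomain
    failures: int         # number of those tests that failed

def failure_estimate(sub: Subdomain) -> float:
    """Posterior mean failure probability under a uniform Beta(1, 1) prior."""
    return (sub.failures + 1) / (sub.tests + 2)

def reliability(subdomains: list[Subdomain]) -> float:
    """Single-demand reliability: 1 minus the profile-weighted failure probability."""
    return 1.0 - sum(s.profile_prob * failure_estimate(s) for s in subdomains)

# Hypothetical subdomains for a safety-critical control function.
paths = [
    Subdomain("nominal trip logic", 0.90, tests=400, failures=0),
    Subdomain("sensor out-of-range", 0.08, tests=120, failures=1),
    Subdomain("simultaneous setpoint change", 0.02, tests=40, failures=0),
]
print(f"estimated per-demand reliability: {reliability(paths):.5f}")
```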

    Identification and fault detection in an actuator using NN-NARX

    In this paper, the use of a nonlinear autoregressive with exogenous inputs neural network (NN-NARX) model for identification and fault detection in the actuator of an industrial thermal process is presented. Initially, fault detection and diagnosis techniques are reviewed, with emphasis on artificial neural network models for identification and fault detection. The control system of the thermal process used as a case study is then described. A monitoring system allows data to be recorded under normal operating conditions for identification with the NN-NARX model, which is then used for online generation of residuals caused by faults that are introduced randomly. Finally, the results of residual generation and evaluation are presented. The designed system is suitable for implementation on a hardware device that can be incorporated into the process equipment and support the operator in the presence of faults.
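
    A rough sketch of the residual-generation scheme described above, with a toy first-order process standing in for the industrial thermal plant; the lag orders, network size, simulated fault, and alarm threshold are all assumptions.

```python
# Illustrative NN-NARX residual generator: a neural network predicts the process
# output from lagged outputs and lagged actuator commands, and the residual
# (measured minus predicted) is compared against a threshold set from healthy data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(n, fault_at=None):
    """Toy first-order thermal response to a random actuator signal."""
    u = rng.uniform(0.0, 1.0, n)
    y = np.zeros(n)
    for k in range(1, n):
        gain = 2.0 if (fault_at is None or k < fault_at) else 1.2   # actuator fault: loss of gain
        y[k] = 0.9 * y[k - 1] + gain * u[k - 1] + rng.normal(0, 0.02)
    return u, y

def narx_matrix(u, y, ny=2, nu=2):
    """Regressors [y(k-1..k-ny), u(k-1..k-nu)] and targets y(k)."""
    start = max(ny, nu)
    X = [np.r_[y[k - ny:k], u[k - nu:k]] for k in range(start, len(y))]
    return np.array(X), y[start:]

# Identify the model on healthy data, then monitor a run with an injected fault.
u0, y0 = simulate(2000)
X0, t0 = narx_matrix(u0, y0)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X0, t0)

u1, y1 = simulate(600, fault_at=300)
X1, t1 = narx_matrix(u1, y1)
residual = t1 - model.predict(X1)
threshold = 4 * np.std(t0 - model.predict(X0))      # set from healthy-data residuals
alarm = np.argmax(np.abs(residual) > threshold)
print("fault flagged at sample:", alarm)
```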

    Fault detection and correction modeling of software systems

    Ph.D. (Doctor of Philosophy)

    Fishing for Errors in an Ocean Rather than a Pond

    In the internet age, a proliferation of services has appeared on the web. Errors in using an internet service or app are introduced dynamically as new devices, interfaces, and software are produced and found to be incompatible with an app that works perfectly well on other devices. The number of users who can detect various errors also changes dynamically: there may be new adopters of the software over time, and an old user might upgrade and thus run into new incompatibility errors. Allowing new users and errors to enter dynamically poses considerable modeling and estimation difficulties, and in the era of Big Data, methods for updating dynamically as new observations arrive are important. Traditional models for detecting errors have generally assumed a finite number of errors. We provide a general model, together with a procedure for finding maximum likelihood estimators of its key parameters, in which the number of errors and the number of users can change. Wilson, J.; Te'eni, D. (2018). Fishing for Errors in an Ocean Rather than a Pond. In: 2nd International Conference on Advanced Research Methods and Analytics (CARMA 2018). Editorial Universitat Politècnica de València, 125-132. https://doi.org/10.4995/CARMA2018.2018.8331
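
    The paper's general model, in which the number of users and errors grows over time, is not reproduced in this record. The sketch below only illustrates the kind of procedure the abstract refers to, numerically maximizing the likelihood of an error-detection process, using a simple NHPP with per-interval Poisson counts as a stand-in; the data and starting values are hypothetical.

```python
# Stand-in MLE illustration: NHPP with mean function m(t) = a*(1 - exp(-b*t)),
# fitted to per-interval error counts by numerically maximizing the Poisson
# likelihood. This is not the paper's model, only an example of the procedure.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

t = np.arange(1.0, 13.0)                                        # end of each observation interval
counts = np.array([30, 22, 18, 14, 10, 8, 6, 5, 4, 3, 2, 2])    # errors found per interval (hypothetical)

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    m = a * (1.0 - np.exp(-b * t))             # expected cumulative errors
    mu = np.diff(np.r_[0.0, m])                # expected count in each interval
    return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))

fit = minimize(neg_log_likelihood, x0=[150.0, 0.1], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"MLE: total errors a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
```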

    VERDICTS: Visual Exploratory Requirements Discovery and Injection for Comprehension and Testing of Software

    We introduce a methodology and research tools for visual exploratory software analysis. VERDICTS combines exploratory testing, tracing, visualization, and dynamic discovery and injection of requirements specifications into a live quick-feedback cycle, without recompilation or restart of the system under test. This supports discovery and verification of dynamic software behavior, software comprehension, testing, and locating the origin of defects. At its core, VERDICTS allows dynamic evolution and testing of hypotheses about requirements and behavior by using contracts as automated component verifiers. We introduce Semantic Mutation Testing as an approach to evaluate the concordance of automated verifiers, and of the functional specifications they represent, with an existing implementation. Mutation testing has promise, but also many known issues. In our tests, both the black-box and white-box variants of our Semantic Mutation Testing approach performed better than traditional mutation testing as a measure of the quality of automated verifiers.
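
    VERDICTS itself (live specification injection, tracing, and visualization) is not reproduced here; the sketch below only illustrates its core idea of using contracts as automated component verifiers: a requirement expressed as a postcondition is checked on every call, so a violation surfaces as soon as the component misbehaves. The sorting example and its contract are hypothetical.

```python
# Generic contract-as-verifier illustration: attach a postcondition to a function
# and raise as soon as any call violates the stated requirement.
import functools

def contract(postcondition, description):
    """Attach a requirement check to a function; raise if the result violates it."""
    def wrap(func):
        @functools.wraps(func)
        def checked(*args, **kwargs):
            result = func(*args, **kwargs)
            if not postcondition(result, *args, **kwargs):
                raise AssertionError(f"contract violated: {description}")
            return result
        return checked
    return wrap

@contract(lambda result, xs: sorted(xs) == result and len(result) == len(xs),
          "output is a sorted permutation of the input")
def dedup_sort(xs):
    return sorted(set(xs))      # defect: silently drops duplicates

try:
    dedup_sort([3, 1, 3, 2])
except AssertionError as e:
    print(e)                    # the contract flags the dropped duplicate
```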