
    Software Reliability Prediction using Fuzzy Min-Max Algorithm and Recurrent Neural Network Approach

    Fuzzy Logic (FL) is combined with a Recurrent Neural Network (RNN) to predict software reliability. A Fuzzy Min-Max algorithm optimizes the number of Gaussian nodes in the hidden layer and the delayed input neurons. The optimized recurrent neural network reconfigures dynamically in real time as actual software failures occur. In this work, an enhanced Fuzzy Min-Max algorithm combined with a recurrent-neural-network-based machine learning technique is explored, and a comparative analysis is performed for modeling reliability prediction in software systems. The model has been applied to data sets collected from several standard software projects during the system testing phase with fault removal. The performance of the proposed approach has been tested on a distributed-system application failure data set.
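    As a concrete illustration of the ingredients this abstract names, the sketch below builds a network with Gaussian hidden nodes fed by delayed inputs and uses it to predict the next inter-failure time. It is a minimal stand-in, not the paper's model: the centre count, Gaussian width, and toy failure data are all illustrative assumptions.

```python
# Minimal sketch (not the paper's model): Gaussian hidden nodes over
# delayed inputs, predicting the next inter-failure time from the
# previous `lags` observations.
import numpy as np

def make_delayed(series, lags):
    """Build (X, y): each row of X holds `lags` past values, y the next one."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    return X, y

def train_rbf(X, y, n_centres=5, width=3.0):
    """Fit a Gaussian-node hidden layer plus linear output by least squares."""
    rng = np.random.default_rng(0)
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    # Hidden activations: Gaussian of the distance to each centre.
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    H = np.exp(-(d / width) ** 2)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centres, width, w

def predict_rbf(model, x):
    centres, width, w = model
    d = np.linalg.norm(centres - x, axis=1)
    return np.exp(-(d / width) ** 2) @ w

# Toy inter-failure times (hours); real studies use project failure logs.
times = [2.1, 2.5, 3.0, 3.8, 4.1, 5.0, 5.6, 6.3, 7.1, 8.0, 8.8, 9.9]
X, y = make_delayed(times, lags=3)
model = train_rbf(X, y)
print("next inter-failure time ~", predict_rbf(model, np.array(times[-3:])))
```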

    Software reliability prediction using neural network

    Software engineering is incomplete without software reliability prediction. Software reliability assessment is the most important factor in quantitatively characterizing the quality of any software product during the testing phase. Many analytical models have been proposed over the years for assessing the reliability of a software system and for modeling software reliability growth trends, with different prediction capabilities at different testing phases. What is needed, however, is a single model that gives relatively better predictions across all conditions and situations; to this end, the Neural Network (NN) approach is introduced. This thesis report describes the applicability of NN-based models for better reliability prediction in a real environment and presents a method for assessing software reliability growth using an NN model. Two types of NNs are used: a feed-forward neural network and a recurrent neural network. Both networks are trained with the back-propagation learning algorithm, and the related network architecture issues, data representation methods, and some unrealistic assumptions associated with software reliability models are discussed. Several data sets of software failures, obtained from several software projects, are applied to the proposed models. The results indicate a significant performance improvement of the neural network models over conventional statistical models based on the non-homogeneous Poisson process.
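    For readers unfamiliar with the feed-forward variant, here is a minimal sketch of what such a reliability-growth predictor looks like: one tanh hidden layer trained by plain back-propagation to map normalized testing time to a normalized cumulative failure count. The hidden size, learning rate, epoch count, and toy growth curve (shaped like the mean-value function an NHPP model produces) are illustrative assumptions, not the thesis settings.

```python
# Minimal feed-forward reliability-growth sketch trained by plain
# back-propagation; all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 1.0, 10).reshape(-1, 1)   # normalized testing time
f = 1 - np.exp(-3 * t)                         # toy cumulative failure curve

W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(t @ W1 + b1)            # forward pass
    out = h @ W2 + b2
    err = out - f                       # squared-error gradient
    gW2 = h.T @ err / len(t); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

x = np.array([[1.1]])                   # extrapolate one step past the data
print("predicted failures at t=1.1:", np.tanh(x @ W1 + b1) @ W2 + b2)
```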

    Energy Management System Modeling of DC Data Center with Hybrid Energy Sources Using Neural Network

    As data centers continue to grow rapidly, engineers face a growing challenge in minimizing the cost of powering data centers while improving their reliability. The continuing growth of renewable energy sources such as photovoltaic (PV) systems presents an opportunity to reduce the long-term energy cost of data centers and to enhance reliability when used alongside utility AC power and energy storage. However, the time-varying, intermittent nature of solar energy makes proper coordination and management of these energy sources necessary. This thesis proposes an energy management system for a DC data center that uses a neural network to coordinate AC power, energy storage, and the PV system into a reliable electrical power distribution for the data center. A software model of the DC data center was first developed for the proposed system, followed by the construction of a lab-scale model to simulate it. Five scenarios were tested on the hardware model, and the results demonstrate the effectiveness and accuracy of the neural network approach. The results further prove the feasibility of utilizing a renewable energy source and energy storage in DC data centers. The analysis and performance of the proposed system are discussed in this thesis, and future work toward improved energy-system reliability is also presented.
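    The thesis does not publish its controller, but the dispatch mapping such a neural energy manager must learn can be sketched directly. The rule-based function below (all names and limits are hypothetical) splits the data-center load across PV, battery, and utility AC and recharges the battery from surplus PV; a neural controller would be trained to approximate a mapping of this kind from measured scenarios.

```python
# Hypothetical dispatch rule for a DC data center with PV, battery, and
# utility AC; a neural energy manager would learn a mapping like this.
def dispatch(pv_kw, soc, load_kw, batt_max_kw=5.0, soc_min=0.2, soc_max=0.95):
    from_pv = min(pv_kw, load_kw)              # serve load from PV first
    residual = load_kw - from_pv
    # Discharge the battery only above its reserve state of charge.
    from_batt = min(batt_max_kw, residual) if soc > soc_min else 0.0
    from_grid = residual - from_batt           # utility AC covers the rest
    # Surplus PV recharges the battery until it is full.
    to_batt = (pv_kw - from_pv) if soc < soc_max else 0.0
    return {"pv": from_pv, "battery": from_batt,
            "grid": from_grid, "charge": to_batt}

# Midday scenario: strong PV, half-charged battery, 8 kW load.
print(dispatch(pv_kw=6.0, soc=0.5, load_kw=8.0))
# -> PV covers 6 kW, battery 2 kW, grid 0 kW, no surplus left to charge.
```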

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML for improving RA performance, and finally give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning by analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems presented in this thesis are designed mainly for maintaining plant safety and are supported by two functions: a risk analysis function, i.e., failure mode and effects analysis (FMEA), and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that the best current design is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). Further improvements can still be made through the application of machine learning: by implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
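    As a purely illustrative sketch of the 'learning element' idea (the thesis gives no code, and every name below is hypothetical), a learning RA can be reduced to a rule store that acquires symptom-to-failure-mode rules from examples and rejects additions that contradict existing knowledge:

```python
# Hypothetical sketch of a learning RA: knowledge acquisition from
# examples plus a simple inconsistency check, as the abstract describes.
class LearningRA:
    def __init__(self):
        self.rules = {}          # symptom -> failure mode

    def learn(self, symptom, failure_mode):
        """Acquire a rule from an example, rejecting contradictions."""
        known = self.rules.get(symptom)
        if known is not None and known != failure_mode:
            raise ValueError(
                f"inconsistent rule: {symptom!r} already maps to {known!r}")
        self.rules[symptom] = failure_mode

    def diagnose(self, symptom):
        """Real-time fault location by rule lookup."""
        return self.rules.get(symptom, "unknown: escalate to FMEA review")

ra = LearningRA()
ra.learn("coolant pressure drop", "pump seal failure")
print(ra.diagnose("coolant pressure drop"))
```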

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, the ability to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Predictions enable systems to become proactive rather than continuing to operate reactively. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior, which can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we discuss the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help readers interested in employing prediction in the optimization of multicore processor systems.
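    As a small worked example of the proactive approach the survey describes, the sketch below predicts next-interval core utilization with an exponentially weighted moving average and picks a DVFS mode ahead of time; the smoothing factor and thresholds are illustrative assumptions, not values from the paper.

```python
# Illustrative proactive DVFS selection: predict utilization, then pick
# a frequency mode before the load arrives instead of reacting after.
def pick_dvfs(util_history, alpha=0.5, levels=((0.3, "low"),
                                               (0.7, "mid"),
                                               (1.01, "high"))):
    pred = util_history[0]
    for u in util_history[1:]:          # EWMA prediction of utilization
        pred = alpha * u + (1 - alpha) * pred
    for threshold, level in levels:     # map the prediction to a mode
        if pred < threshold:
            return pred, level

print(pick_dvfs([0.2, 0.4, 0.8, 0.9]))  # rising load -> a higher mode
```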