
    Multiversion software reliability through fault-avoidance and fault-tolerance

    In this project we proposed to investigate a number of experimental and theoretical issues associated with the practical use of multi-version software in providing dependable software through fault-avoidance and fault-elimination, as well as run-time tolerance of software faults. In the period reported here we have been working on the following. We have continued collecting data on the relationships between software faults and reliability, and on the coverage provided by the testing process as measured by different metrics (including data-flow metrics). We continued work on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. We have continued studying back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. We have also been studying back-to-back testing as a tool for improving the software change process, including regression testing. We continued investigating existing fault-tolerance models and worked on the formulation of new ones. In particular, we have partly finished an evaluation of Consensus Voting in the presence of correlated failures, and are in the process of finishing an evaluation of the Consensus Recovery Block (CRB) under failure correlation. We find both approaches far superior to the commonly employed fixed-agreement-number voting (usually majority voting). We have also finished a cost analysis of the CRB approach.
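The distinction the abstract draws between fixed-agreement-number voting and Consensus Voting can be illustrated with a minimal sketch. This is not the project's actual implementation; the function names and example outputs are illustrative only.

```python
import random
from collections import Counter

def majority_vote(outputs, agreement):
    """Fixed-agreement-number voting: accept an answer only if at least
    `agreement` versions produce it (majority voting on n versions uses
    agreement = n // 2 + 1); otherwise signal no decision."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= agreement else None

def consensus_vote(outputs):
    """Consensus Voting: always return the most frequent answer, breaking
    ties at random, so a result is produced even when no majority exists."""
    counts = Counter(outputs)
    top = max(counts.values())
    candidates = [v for v, c in counts.items() if c == top]
    return random.choice(candidates)

# Five versions disagree: no value reaches the majority threshold of 3,
# so fixed-agreement voting yields no decision, while Consensus Voting
# still selects a plurality answer.
outputs = [42, 42, 7, 7, 13]
print(majority_vote(outputs, agreement=3))  # None: no value has 3 votes
print(consensus_vote(outputs))              # 42 or 7 (random tie-break)
```

The example shows why consensus-style schemes can outperform fixed-agreement voting when versions fragment across several wrong (or differently formatted) answers: a plurality decision is still available where a strict majority rule abstains.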

    Software Reliability Prediction using Correlation Constrained Multi-Objective Evolutionary Optimization Algorithm

    Software reliability frameworks are effective for estimating the probability of software failure over time. Numerous approaches for predicting software dependability have been presented, but none of them has proven consistently effective. Predicting the number of software faults throughout the research and testing phases is a serious problem. There are several software metrics, such as object-oriented design metrics, public and private attributes, methods, previous bug metrics, and software change metrics, and many researchers have used these metrics to predict software reliability. But none of them contributed to identifying the relations among these metrics or to exploring the most effective subset of metrics. Therefore, this paper proposes a correlation-constrained multi-objective evolutionary optimization algorithm (CCMOEO) for software reliability prediction. CCMOEO is an effective optimization approach for estimating the parameters of popular reliability growth models. To obtain the highest classification effectiveness, the suggested CCMOEO approach overcomes modeling uncertainties by integrating various metrics with multiple objective functions. The hypothesized models were formulated using evaluation results on five distinct datasets. The prediction was evaluated with seven different machine learning algorithms: linear support vector machine (LSVM), radial support vector machine (RSVM), decision tree, random forest, gradient boosting, k-nearest neighbor, and linear regression. The result analysis shows that random forest achieved the best performance.
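The evaluation step described above — comparing several regressors on software-metric features to predict fault counts — can be sketched as follows. The dataset here is synthetic and the metric names are illustrative; this is not the paper's actual experiment or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical metric columns: e.g. size, coupling, code churn, prior bugs.
X = rng.normal(size=(n, 4))
# Synthetic fault count with a nonlinear term, so linear models underfit.
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear": LinearRegression(),
    "rbf svm": SVR(kernel="rbf"),
    "random forest": RandomForestRegressor(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, model.predict(X_te))
    print(f"{name}: R^2 = {scores[name]:.2f}")
```

On data with a nonlinear metric-to-faults relationship, the tree ensemble typically scores highest, which is consistent with the paper's finding that random forest performed best.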

    Simplified approach for the reliability estimation of large transmission and sub-transmission systems

    Various specialised power system reliability modelling software packages are commercially available to analyse the expected performance of a utility’s transmission and sub-transmission network. Such software requires a physical network model to be constructed, representing all network components. A high level of accuracy is obtained with these tools, but significant effort is required to create the models, especially when large utility-scale networks are modelled. Another limitation is that specific design strategies can only be modelled by physically changing the network model, which again requires significant effort. A simplified approach is therefore required to enable utility engineers to analyse the reliability of different network configurations, reliability improvement strategies and planning criteria. The aim of this research is to provide a simplified reliability approach that will assist engineers in managing the reliability of their transmission and sub-transmission networks. The approach should require minimal user input and should be capable of quantifying the impact of different substation and line configurations at a system level. It is not expected that this approach will match the accuracy of the detailed software models, but it should enable engineers to calculate system indices with much less effort while still maintaining an acceptable level of accuracy. The scope of this research is limited to the transmission and sub-transmission networks (lines and substations). Power stations and MV distribution feeders are excluded from the analysis. Only technical, customer-based performance indicators are modelled; no load-based or economic performance indicators are calculated. An analytical approach is used for the simplified reliability modelling, starting with a failure mode and effect analysis.
    The contribution of substation and sub-transmission events is decoupled, and a detailed model of each substation is created, including all internal components. A reliability analysis is performed for each substation to determine the unavailability experienced by customers connected to each busbar. An equivalent system model is then generated by replacing all substations with busbars whose outage frequency and outage duration equal those of the substation equivalent. The simplified substation reliability estimation is compared with detailed substation modelling using specialised software. The results obtained with the simplified reliability estimation show a good correlation with the detailed software models. The simplified reliability methodology was programmed into MS Excel and used to model the expected availability of the Ghana transmission network. Different scenarios were then modelled, analysing the impact of design and operational changes on the expected reliability of the network. The simplified reliability model developed through this research is capable of calculating system-level technical performance indices for utility-scale networks, requiring much less effort than detailed software models while still providing an acceptable level of accuracy. The technical system indices (SAIDI and SAIFI), calculated by means of the simplified reliability approach, provide an indication of the technical performance of the network, but they do not provide information on the economic impact of network outages. These technical indices have the potential to result in funding decisions that are not closely linked to economic interest. For this purpose economic indices are required, and it is recommended that the approach be extended to include the calculation of economic indices.

    Design diversity: an update from research on reliability modelling

    Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions in applying diversity and in assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can help conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.

    Choosing effective methods for design diversity - How to progress from intuition to science

    Design diversity is a popular defence against design faults in safety-critical systems. Design diversity is at times pursued by simply isolating the development teams of the different versions, but it is presumably better to "force" diversity through appropriate prescriptions to the teams. There are many ways of forcing diversity, yet managers who have to choose a cost-effective combination of them have little guidance except their own intuition. We argue the need for more scientifically based recommendations and outline the problems with producing them. We focus on what we think is the standard basis for most recommendations: the belief that, in order to produce failure diversity among versions, project decisions should aim at causing "diversity" among the faults in the versions. We attempt to clarify what these beliefs mean, in which cases they may be justified, and how they can be checked or disproved experimentally.

    What is the Connection Between Issues, Bugs, and Enhancements? (Lessons Learned from 800+ Software Projects)

    Agile teams juggle multiple tasks, so professionals are often assigned to multiple projects, especially in service organizations that monitor and maintain a large suite of software for a large user base. If we could predict changes in project conditions, then managers could better adjust the staff allocated to those projects. This paper builds such a predictor using data from 832 open source and proprietary applications. Using a time series analysis of the last 4 months of issues, we can forecast how many bug reports and enhancement requests will be generated next month. The forecasts made in this way only require a frequency count of these issue reports (and do not require a historical record of bugs found in the project). That is, this kind of predictive model is very easy to deploy within a project. We hence strongly recommend this method for forecasting future issues, enhancements, and bugs in a project. Comment: Accepted to the 2018 International Conference on Software Engineering, software engineering in practice track. 10 pages, 10 figures.
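The abstract says next month's issue counts are forecast from a time-series analysis of the last 4 months, but does not specify the model. The sketch below uses a simple linear extrapolation over that 4-month window as an illustrative stand-in; the function name and sample counts are hypothetical.

```python
import numpy as np

def forecast_next_month(monthly_counts, window=4):
    """Fit a straight line to the last `window` monthly issue counts
    and extrapolate one month ahead, floored at zero."""
    recent = np.asarray(monthly_counts[-window:], dtype=float)
    x = np.arange(len(recent))
    slope, intercept = np.polyfit(x, recent, 1)   # least-squares line
    return max(0.0, slope * len(recent) + intercept)

# Hypothetical monthly counts of issue reports for one project.
issues = [30, 35, 42, 40, 47, 55]
print(forecast_next_month(issues))   # extrapolated count for next month
```

Note that, as the abstract emphasises, the only input is a frequency count of issue reports per month, with no per-bug historical record required, which is what makes this style of predictor cheap to deploy.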

    The impact of diversity upon common mode failures

    Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common-mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect the overall system reliability to be less than we could have expected had the components been assumed to fail independently. We generalise a model of hardware redundancy due to Hughes [Hughes 1987] and show that, with forced diversity, this unwelcome result no longer applies: in fact, it becomes theoretically possible to do better than would be the case under independence of failures.
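The variability argument above has a simple probabilistic core: if two versions share a demand-dependent failure probability p, the chance that both fail on a random demand is E[p²] = (E[p])² + Var(p), which exceeds the naive independence prediction (E[p])² whenever p varies. A numerical illustration follows; the Beta distribution for p is illustrative only, not from the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Per-demand failure probability, varying over the operational environment.
p = rng.beta(2, 50, size=100_000)

independent = p.mean() ** 2   # what naive independence would predict
actual = (p ** 2).mean()      # E[p^2] = (E[p])^2 + Var(p)

print(f"independence assumption: {independent:.6f}")
print(f"with variability:        {actual:.6f}")
assert actual >= independent  # holds because Var(p) >= 0
```

Forced diversity changes the picture because the two versions no longer share the same p on each demand; with negatively correlated failure probabilities, the joint failure probability can drop below the independence value, which is the paper's "better than independence" result.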