
    Application of Stochastic Reliability Modeling to Waterfall and Feature Driven Development Software Development Lifecycles

    There are many techniques for performing software reliability modeling. In the software development environment, some models use the stochastic nature of fault introduction and fault removal to predict reliability. This thesis research analyzes a stochastic approach to software reliability modeling and its performance on two distinct software development lifecycles. The derivation of the model is applied to each lifecycle, and contrasts between the lifecycles are shown. Actual data collected from industry projects illustrate the performance of the model against each lifecycle. Actual software development fault data are used in selected phases of each lifecycle for comparison with the fault data predicted by the model. Various enhancements to the model are presented and evaluated, including optimization of the parameters based on partial observations.
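    The abstract does not name the specific model used. As an illustrative sketch only, one widely cited stochastic fault-detection model of this kind is the Goel-Okumoto NHPP, whose mean value function predicts the expected cumulative number of faults detected by test time t; the parameter values below are hypothetical:

    ```python
    import math

    def go_expected_faults(a: float, b: float, t: float) -> float:
        """Goel-Okumoto NHPP mean value function: expected cumulative
        faults detected by time t, given a total fault content of a
        and a per-fault detection rate b."""
        return a * (1.0 - math.exp(-b * t))

    # Predicted cumulative fault counts at successive points in testing
    a, b = 100.0, 0.05  # hypothetical parameter values
    for t in (10, 20, 40, 80):
        print(f"t={t:3d}  expected faults = {go_expected_faults(a, b, t):.1f}")
    ```

    The expected count saturates at a as t grows, which is what lets such models forecast residual faults at a given lifecycle phase.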

    Issues about the Adoption of Formal Methods for Dependable Composition of Web Services

    Web Services provide interoperable mechanisms for describing, locating and invoking services over the Internet; composition further enables building complex services out of simpler ones for complex B2B applications. While current studies on these topics are mostly focused - from the technical viewpoint - on standards and protocols, this paper investigates the adoption of formal methods, especially for composition. We logically classify and analyze three different (but interconnected) kinds of important issues towards this goal, namely foundations, verification and extensions. The aim of this work is to identify the proper questions about the adoption of formal methods for dependable composition of Web Services, not necessarily to find the optimal answers. Nevertheless, we still try to propose some tentative answers based on our proposal for a composition calculus, which we hope can animate a proper discussion.

    The safety case and the lessons learned for the reliability and maintainability case

    This paper examines the safety case and the lessons learned for the reliability and maintainability case.

    Towards operational measures of computer security

    Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of ‘the ability of the system to resist attack’. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit ‘more secure behaviour’ in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of ‘operational security’ similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf. rate of occurrence of failures in reliability), or the probability that a specified ‘mission’ can be accomplished without a security breach (cf. reliability function). This new approach is based on the analogy between system failure and security breach. A number of other analogies to support this view are introduced. We examine this duality critically, and identify a number of important open questions that need to be answered before this quantitative approach can be taken further. The work described here is therefore somewhat tentative, and one of our major intentions is to invite discussion about the plausibility and feasibility of this new approach.
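    The security analogue of the reliability function that the abstract alludes to can be sketched concretely. Assuming (purely for illustration; the paper proposes no specific distribution) that breaches arrive as a Poisson process with constant rate, the probability of completing a mission without a breach is the exponential survival function:

    ```python
    import math

    def mission_survival(breach_rate: float, mission_time: float) -> float:
        """Probability that a mission of the given duration completes with
        no security breach, assuming breaches occur as a Poisson process
        with constant rate -- the security analogue of the reliability
        function R(t) = exp(-lambda * t)."""
        return math.exp(-breach_rate * mission_time)

    lam = 0.02  # hypothetical: 0.02 breaches per hour of operational exposure
    print(f"P(no breach in 24 h) = {mission_survival(lam, 24):.3f}")
    ```

    In practice the breach rate would itself have to be estimated from operational data, which is exactly the measurement problem the paper raises.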

    Review of recent research towards power cable life cycle management

    Power cables are integral to modern urban power transmission and distribution systems. For power cable asset managers worldwide, a major challenge is how to manage effectively the expensive and vast network of cables, many of which are approaching, or have passed, their design life. This study provides an in-depth review of recent research and development in cable failure analysis, condition monitoring and diagnosis, life assessment methods, fault location, and optimisation of maintenance and replacement strategies. These topics are essential to cable life cycle management (LCM), which aims to maximise the operational value of cable assets and is now being implemented in many power utility companies. The review expands on material presented at the 2015 JiCable conference and incorporates other recent publications. The review concludes that the full potential of cable condition monitoring, condition and life assessment has not yet been realised. It is proposed that a combination of physics-based life modelling and statistical approaches, giving consideration to practical condition monitoring results and insulation response to in-service stress factors and short-term stresses, such as water ingress, mechanical damage and imperfections left from manufacturing and installation processes, will be key to success in improved LCM of the vast number of cable assets around the world.

    Confidence intervals for reliability growth models with small sample sizes

    Fully Bayesian approaches to analysis can be overly ambitious where there exist realistic limitations on the ability of experts to provide prior distributions for all relevant parameters. This research was motivated by situations where expert judgement exists to support the development of prior distributions describing the number of faults potentially inherent within a design but could not support useful descriptions of the rate at which they would be detected during a reliability-growth test. This paper develops inference properties for a reliability-growth model. The approach assumes a prior distribution for the ultimate number of faults that would be exposed if testing were to continue ad infinitum, but estimates the parameters of the intensity function empirically. A fixed-point iteration procedure to obtain the maximum likelihood estimate is investigated for bias and conditions of existence. The main purpose of this model is to support inference in situations where failure data are few. A procedure for providing statistical confidence intervals is investigated and shown to be suitable for small sample sizes. An application of these techniques is illustrated by an example.
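    The abstract mentions a fixed-point iteration for the maximum likelihood estimate but does not reproduce the model, so the following is only a stand-in sketch: a fixed-point iteration for the detection-rate MLE of the Goel-Okumoto model, which follows the same pattern (iterate the likelihood equation for the rate, then recover the implied total fault count). The fault-detection times are hypothetical:

    ```python
    import math

    def go_rate_mle(times, T, b0=0.01, tol=1e-10, max_iter=500):
        """Fixed-point iteration for the MLE of the detection rate b in the
        Goel-Okumoto model, given fault-detection times observed in [0, T].
        The likelihood equation dL/db = 0 rearranges into b = g(b), which is
        iterated to convergence. Illustrative only; the paper's model and
        iteration scheme may differ."""
        n, s = len(times), sum(times)
        b = b0
        for _ in range(max_iter):
            e = math.exp(-b * T)
            b_new = n / (s + n * T * e / (1.0 - e))  # g(b)
            if abs(b_new - b) < tol:
                b = b_new
                break
            b = b_new
        a_hat = n / (1.0 - math.exp(-b * T))  # implied total fault content
        return b, a_hat

    times = [2, 5, 9, 14, 20, 28, 40, 55, 75, 98]  # hypothetical fault times
    b_hat, a_hat = go_rate_mle(times, T=100.0)
    print(f"b_hat = {b_hat:.4f}, implied total faults a_hat = {a_hat:.1f}")
    ```

    Since a_hat exceeds the ten faults actually observed, the gap is an estimate of faults remaining undetected, which is the quantity of interest when deciding whether to stop testing.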
