
    Reliability training

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost, reliable products. A review of reliability for the years 1940 to 2000 is given, followed by a review of the relevant mathematics and a description of the elements that contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet the improvement is not sufficient to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
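    To make the scale of the validation problem concrete, the sketch below states the standard statistical argument behind "testing with stable reliability", under simplifying assumptions that are mine rather than the abstract's (a constant unknown failure rate and independent, operationally representative failure-free test hours); the 10^-9 target and 95% confidence level are purely illustrative.

        % Assumptions (illustrative): constant failure rate \lambda, independent
        % test hours. Zero failures in t hours rules out rates at or above
        % \lambda at confidence level 1 - \alpha only if
        \[
          P(\text{no failure in } t \mid \lambda) \;=\; e^{-\lambda t} \;\le\; \alpha
          \quad\Longrightarrow\quad
          t \;\ge\; \frac{-\ln \alpha}{\lambda}.
        \]
        % E.g. \lambda = 10^{-9} per hour and \alpha = 0.05 require
        % t \gtrsim 3 \times 10^{9} failure-free hours of testing.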

    Software acquisition by start-up companies

    Software acquisition is important for any type of company nowadays, including start-up companies. This study examined which software applications are acquired by start-ups, in what ways, with what motivations, and for what purposes they are used. The study consisted of a survey of 50 start-up companies in the Netherlands and Sweden, followed by four interviews with companies that had also participated in the survey. Results showed that start-ups mostly acquire software for communication purposes, and that they mainly use freeware and single-licensed software. Most of the time, decisions about software acquisition are made by the CEO, sometimes with the help of colleagues, friends, or other informal contacts. Popular applications include, amongst others, software packages such as Google Apps and Microsoft Office. The main reasons for choosing a specific software application were ease of use, familiarity, requirement fit, reliability, flexibility, and gradual scaling. Reasons to use free software options were mainly budget-related; however, reliability and quality were perceived to be very important, especially for customer-serving applications. Start-up companies therefore said they would be willing to pay for such applications if that meant higher reliability.

    On a method for mending time to failure distributions

    Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all finite-failures non-homogeneous Poisson process (NHPP) models share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite-failures and infinite-failures NHPP models. Keywords: software reliability growth model, non-homogeneous Poisson process, defective distribution, (mean) time to failure, model unification.
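    The defectiveness alluded to above can be made explicit with a standard textbook example; the sketch below uses the Goel-Okumoto mean value function purely as an illustration and does not reproduce the paper's mending transformation. The notation (m(t), a, b, N(t), T_1) is standard but not taken from the abstract.

        % Finite-failures NHPP: mean value function m(t) with m(t) -> a < infinity,
        % e.g. Goel-Okumoto m(t) = a(1 - e^{-bt}). For the time to first failure T_1:
        \[
          P(T_1 > t) \;=\; P(N(t) = 0) \;=\; e^{-m(t)}
          \quad\Longrightarrow\quad
          P(T_1 = \infty) \;=\; \lim_{t \to \infty} e^{-m(t)} \;=\; e^{-a} \;>\; 0.
        \]
        % Hence F_{T_1}(t) = 1 - e^{-m(t)} is defective and the mean time to
        % first failure does not exist, as stated in the abstract.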

    A tutorial on Bayesian single-test reliability analysis with JASP

    The current practice of reliability analysis is both uniform and troublesome: most reports consider only Cronbach’s α, and almost all reports focus exclusively on a point estimate, disregarding the impact of sampling error. In an attempt to improve the status quo, we have implemented Bayesian estimation routines for five popular single-test reliability coefficients in the open-source statistical software program JASP. Using JASP, researchers can easily obtain Bayesian credible intervals to indicate a range of plausible values and thereby quantify the precision of the point estimate. In addition, researchers may use the posterior distribution of the reliability coefficients to address practically relevant questions such as “What is the probability that the reliability of my test is larger than a threshold value of .80?”. In this tutorial article, we outline how to conduct a Bayesian reliability analysis in JASP and correctly interpret the results. By making available a computationally complex procedure in an easy-to-use software package, we hope to motivate researchers to include uncertainty estimates whenever reporting the results of a single-test reliability analysis.
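    As a rough illustration of the posterior question quoted above, the sketch below draws Cronbach's α values from an inverse-Wishart posterior over the item covariance matrix and reports a 95% credible interval together with P(α > .80). It is a minimal sketch under assumed priors and simulated data, not the estimation routine implemented in JASP; the function names and prior settings are mine.

        # Minimal sketch, NOT the JASP implementation: posterior draws of
        # Cronbach's alpha via an inverse-Wishart posterior on the item
        # covariance matrix (prior scale I_k and prior df k + 2 are assumptions).
        import numpy as np
        from scipy.stats import invwishart

        def cronbach_alpha(cov):
            k = cov.shape[0]
            return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

        def posterior_alpha(data, n_draws=5000, seed=1):
            # data: respondents x items; centred so that only the covariance
            # matrix needs a prior (a deliberate simplification).
            n, k = data.shape
            x = data - data.mean(axis=0)
            post = invwishart(df=k + 2 + n, scale=np.eye(k) + x.T @ x)
            draws = post.rvs(size=n_draws, random_state=seed)
            return np.array([cronbach_alpha(c) for c in draws])

        # Example on simulated, positively correlated item scores.
        rng = np.random.default_rng(0)
        scores = rng.normal(size=(200, 5)) + rng.normal(size=(200, 1))
        alphas = posterior_alpha(scores)
        print(np.percentile(alphas, [2.5, 97.5]))   # 95% credible interval
        print((alphas > 0.80).mean())               # P(alpha > .80)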

    Software safety

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude is required: looking at what you do NOT want the software to do as well as what you want it to do, and assuming that things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially those that eliminate complexity, can be very helpful.

    Advanced analytics for transformer asset management

    Power transformers are one of the most crucial components of any power system network. A new asset management software package called APM Edge is now available; it is based on the reliability-centred maintenance (RCM) methodology for the fleet-wide assessment of power transformers and utilises the principle of fault tree analysis. This analytical software is an expert system that incorporates a probabilistic model which assigns a risk factor to any given transformer, both for long-term reliability and short-term functionality. This paper presents a case study on the utilisation of this expert system and analytical software on a 25 MVA transformer, which helped in the following (an illustrative sketch of this kind of trend extrapolation follows the list):
    • DGA data quality identification
    • Predicting future dissolved gas trends
    • Predicting when abnormal DGA levels would be reached
    • Estimating the time available before the shutdown
    • Determining what investigations are required
    • …
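    As a hedged illustration of the kind of trend prediction listed above (not the APM Edge algorithm, whose probabilistic model is not described in the abstract), the sketch below fits a simple linear trend to a dissolved-gas series and estimates when an assumed alarm threshold would be crossed; the gas readings, threshold, and function name are made up for the example.

        # Illustrative only: a linear trend is an assumption, not the APM Edge model.
        import numpy as np

        def days_to_threshold(days, ppm, threshold_ppm):
            """Days from the last sample until the fitted linear trend reaches
            threshold_ppm, or None if the gas level is not rising."""
            slope, intercept = np.polyfit(days, ppm, 1)   # least-squares fit
            if slope <= 0:
                return None
            return max((threshold_ppm - intercept) / slope - days[-1], 0.0)

        # Example: hypothetical acetylene readings (ppm) over one year,
        # checked against an assumed 35 ppm alarm limit.
        days = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
        ppm = np.array([4.0, 7.5, 12.0, 17.0, 22.5])
        print(days_to_threshold(days, ppm, threshold_ppm=35.0))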