
    Software reliability: Repetitive run experimentation and modeling

    A software experiment conducted with repetitive run sampling is reported. Independently generated input data were used to verify that interfailure times are very nearly exponentially distributed, to obtain good estimates of the failure rates of individual errors, and to demonstrate how widely those rates vary. This variation invalidates many of the popular software reliability models now in use. The logarithm of the failure rate was nearly a linear function of the number of errors corrected. A new model of software reliability is proposed that incorporates these observations.
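    As a rough illustration of the log-linear trend the abstract describes, the following Python sketch simulates interfailure times whose failure rate decays geometrically with the number of errors corrected and recovers the trend by regression; the parameters and run length are invented, not taken from the experiment.

```python
# Illustrative sketch (not the paper's code): simulate interfailure times
# whose failure rate decays geometrically with each corrected error, then
# recover the log-linear trend by regression. Parameters a, b and the run
# length are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.0, 0.05           # assumed trend: log(rate_i) = a - b * i
n_errors = 30              # errors corrected over the run

i = np.arange(n_errors)
rate = np.exp(a - b * i)               # failure rate after i corrections
t = rng.exponential(1.0 / rate)        # one interfailure time per stage

# For exponential data, 1/t is a crude one-observation rate estimate; a
# replicated-run experiment would instead average over many runs per stage.
slope, intercept = np.polyfit(i, np.log(1.0 / t), 1)
print(f"recovered slope {slope:.3f} (true {-b})")
```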

    Statistical modelling of software reliability

    During the six-month period from 1 April 1991 to 30 September 1991, the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

    Software reliability perspectives

    Software used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate reliability, since neither formal methods, software engineering practice, nor fault-tolerance techniques can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to estimate reliability from the debugging data. However, the existing models are poorly validated and often perform badly. This paper emphasizes that part of that poor performance can be attributed to the random nature of the debugging data these models receive as input, and it poses the correction of this defect as an area for future research.
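    To make the point about random debugging data concrete, here is a minimal sketch, under the hypothetical assumption of a perfectly specified constant-rate model, showing how widely a failure-rate estimate varies across replicated debugging histories; all numbers are illustrative.

```python
# Minimal sketch with invented numbers: even when the generating model is
# exactly correct (constant failure rate, exponential interfailure times),
# the rate estimated from a single debugging history varies widely.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.1
n_failures = 20        # failures observed in one debugging period
n_histories = 1000     # independent replicated debugging histories

# MLE of the rate from each history: n / sum(interfailure times).
samples = rng.exponential(1.0 / true_rate, size=(n_histories, n_failures))
estimates = n_failures / samples.sum(axis=1)

lo, hi = np.percentile(estimates, [5, 95])
print(f"true rate {true_rate}, 5th-95th percentile of estimates: {lo:.3f}-{hi:.3f}")
```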

    Development of confidence limits by pivotal functions for estimating software reliability

    The utility of pivotal functions for assessing software reliability is established. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal-function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate when only a few bugs are found in the software. Departures from the model's assumed exponentially distributed interfailure times are also investigated, and the effect of these departures on restricting the use of the Moranda model is discussed.
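    For readers unfamiliar with the Moranda geometric de-eutrophication model, in which the i-th interfailure time is exponential with rate D·k^(i-1) for 0 < k < 1, the sketch below fits D and k by maximum likelihood on made-up data. It is only a plain MLE illustration; the paper itself works with pivotal-function confidence and prediction limits, which are not reproduced here.

```python
# Sketch of plain maximum-likelihood fitting for the Moranda geometric
# de-eutrophication model: the i-th interfailure time is exponential with
# rate D * k**(i-1), 0 < k < 1. The interfailure times below are made up.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.array([3.0, 5.2, 4.1, 9.8, 7.3, 15.0, 12.4, 30.1])  # hypothetical data
n = len(t)
i = np.arange(n)                # stage indices 0, 1, ..., n-1

def neg_profile_loglik(k):
    # For fixed k the MLE of D is n / sum(k**i * t_i); substituting it back
    # leaves a profile log-likelihood in k alone.
    w = k ** i
    D_hat = n / np.sum(w * t)
    return -(n * np.log(D_hat) + np.sum(np.log(w)) - D_hat * np.sum(w * t))

res = minimize_scalar(neg_profile_loglik, bounds=(0.01, 0.999), method="bounded")
k_hat = res.x
D_hat = n / np.sum(k_hat ** i * t)
print(f"k_hat = {k_hat:.3f}, D_hat = {D_hat:.4f}")
```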

    Research reports: 1991 NASA/ASEE Summer Faculty Fellowship Program

    The basic objectives of the programs, now in their 28th year of operation nationally, are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. The faculty fellows spent 10 weeks at MSFC engaged in research projects compatible with their interests and backgrounds, working in collaboration with NASA/MSFC colleagues. This is a compilation of their research reports for summer 1991.

    Measurement, estimation, and prediction of software reliability

    Quantitative indices of software reliability are defined, and the application of three important procedures is indicated: (1) reliability measurement, (2) reliability estimation, and (3) reliability prediction. State-of-the-art techniques for each of these procedures are presented, together with considerations of data acquisition. Failure classifications and other documentation for comprehensive software reliability evaluation are described.
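    A minimal sketch, on invented data, of how the three procedures differ in practice: measurement summarizes observed behavior, estimation fits a model to it, and prediction extrapolates the fitted model forward.

```python
# Sketch with invented interfailure times distinguishing the three indices:
# measurement summarizes what was observed, estimation fits a model, and
# prediction extrapolates that model forward.
import numpy as np

t = np.array([12.0, 8.5, 20.1, 15.3, 31.0, 27.2])  # observed interfailure times

# (1) Measurement: observed mean time between failures.
mtbf = t.mean()

# (2) Estimation: fit a constant-rate (exponential) model; the MLE of the
# rate is 1 / MTBF.
rate = 1.0 / mtbf

# (3) Prediction: probability of surviving a future mission of length m
# under the fitted model, R(m) = exp(-rate * m).
m = 24.0
print(f"MTBF {mtbf:.1f}, rate {rate:.4f}, R({m:.0f}) = {np.exp(-rate * m):.3f}")
```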

    Software reliability: Additional investigations into modeling with replicated experiments

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All of these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the rate of the next error is proposed and was validated empirically by comparing forecasts with actual data: in all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared with observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid everywhere but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates, to a degree, what is observed.
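    The kind of distributional check the abstract alludes to might look like the following sketch, which tests hypothetical stage times against a fitted exponential distribution; note that the nominal p-value is optimistic when the scale is estimated from the same data.

```python
# Sketch on hypothetical stage times: fit an exponential by MLE (the sample
# mean) and compare the empirical distribution against the fitted one.
# Caveat: the nominal p-value is optimistic because the scale is estimated
# from the same data (a Lilliefors-style correction would be needed).
import numpy as np
from scipy import stats

stage_times = np.array([4.2, 1.1, 7.9, 3.3, 2.8, 6.0, 0.9, 5.5, 2.2, 4.8])

scale = stage_times.mean()  # MLE of the exponential scale
ks_stat, p_value = stats.kstest(stage_times, "expon", args=(0, scale))
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```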

    Supermassive Black Holes and Their Host Galaxies - I. Bulge luminosities from dedicated near-infrared data

    In an effort to secure, refine, and supplement the relation between central supermassive black hole masses (Mbh) and the bulge luminosities of their host galaxies (Lbul), we obtained deep, high spatial resolution K-band images of 35 nearby galaxies with securely measured Mbh, using the wide-field WIRCam imager at the Canada-France-Hawaii Telescope (CFHT). A dedicated data reduction and sky subtraction strategy was adopted to estimate the brightness and structure of the sky, a critical step when tracing the light distribution of extended objects in the near-infrared. From the final image product, bulge and total magnitudes were extracted via two-dimensional profile fitting. As a first-order approximation, all galaxies were modeled using a simple Sersic-bulge + exponential-disk decomposition. However, we found that such models did not adequately describe the structure observed in a large fraction of our sample galaxies, which often include cores, bars, nuclei, inner disks, spiral arms, rings, and envelopes. In such cases, we adopted profile modifications and/or more complex models with additional components. The derived bulge magnitudes are very sensitive to the details and number of components used in the models, although total magnitudes remain almost unaffected. Usually, but not always, the luminosities and sizes of the bulges are overestimated when a simple bulge+disk decomposition is adopted in lieu of a more complex model. Furthermore, we found that some spheroids are not well fit when the ellipticity of the Sersic model is held fixed. This paper presents the details of the image processing and analysis, while in a companion paper we discuss how model-induced biases and systematics in bulge magnitudes impact the Mbh-Lbul relation.
    Comment: 48 pages, 40 figures, 5 tables; high-resolution figures and a corresponding version of the .pdf are available at https://www.dropbox.com/sh/lx0xqn89wa3y320/2hS-zZ12Y
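    For orientation, the sketch below evaluates a one-dimensional Sersic-bulge + exponential-disk profile of the kind the decomposition starts from; the actual analysis fits two-dimensional images, and every parameter value here is made up.

```python
# One-dimensional sketch of a Sersic-bulge + exponential-disk profile; the
# paper's fits are two-dimensional image fits, and all values here are
# invented for illustration.
import numpy as np

def b_n(n):
    # Common approximation to the Sersic b_n coefficient, adequate for
    # roughly 0.5 < n < 10.
    return 1.9992 * n - 0.3271

def sersic(R, Ie, Re, n):
    # Sersic bulge: surface brightness Ie at the effective radius Re.
    return Ie * np.exp(-b_n(n) * ((R / Re) ** (1.0 / n) - 1.0))

def exp_disk(R, I0, h):
    # Exponential disk with central intensity I0 and scale length h.
    return I0 * np.exp(-R / h)

R = np.linspace(0.1, 30.0, 300)         # radius, arbitrary units
bulge = sersic(R, Ie=50.0, Re=2.0, n=4.0)
disk = exp_disk(R, I0=20.0, h=8.0)

# Crude bulge-to-total light proxy, weighting each radius by its annulus.
bt = (bulge * R).sum() / ((bulge + disk) * R).sum()
print(f"approximate B/T over the sampled radii: {bt:.2f}")
```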

    Multistage Accelerated Reliability Growth Testing Model and Data Analysis

    Accelerated reliability growth testing has recently received renewed interest in reliability engineering. The concepts of accelerated testing and reliability growth have individually been used in a variety of applications, for both hardware and software systems. The advantage of a combined strategy is that it can shorten testing time while maximizing reliability. The literature contains many references on optimal test design for reliability at either the component level or the system level. In this research, we suggest an approach that conducts accelerated testing at the component level while supporting estimates of reliability at the system level. Our approach helps one decide where, and at what level, to conduct accelerated tests during the system design and testing process. It is designed to reduce testing cost while still demonstrating that system-level requirements are met: testing is done at lower levels in an accelerated environment, where costs are lower, minimizing the amount of testing at the higher, integrated system level, where it tends to be more expensive.
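    A hedged sketch of the general idea, not the paper's specific model: accelerate component-level tests with an Arrhenius temperature factor, convert stress time to equivalent use time, estimate component failure rates, and roll them up to a series-system reliability. All numbers below are invented.

```python
# Sketch of the general idea, not the paper's model: accelerate component
# tests with an Arrhenius temperature factor, convert stress time to
# equivalent use time, estimate component failure rates, and roll them up
# to a series-system reliability. All numbers are invented.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_k, t_stress_k):
    # Acceleration factor between stress and use temperatures.
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

def use_rate(failures, stress_hours, af):
    # Failure rate at use conditions per equivalent use-hour.
    return failures / (stress_hours * af)

af = arrhenius_af(ea_ev=0.7, t_use_k=313.0, t_stress_k=358.0)
rates = [
    use_rate(failures=2, stress_hours=1000.0, af=af),
    use_rate(failures=1, stress_hours=1500.0, af=af),
]

mission_hours = 5000.0
# In a series system every component must survive, so the rates add.
r_system = math.exp(-sum(rates) * mission_hours)
print(f"AF = {af:.1f}, system reliability over {mission_hours:.0f} h: {r_system:.3f}")
```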