
    Detection of faults and software reliability analysis

    Multiversion, or N-version, programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults and those caused by real faults.
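    The voting scheme at the heart of N-version programming can be sketched as follows. This is a minimal illustration, not code from the experiments above; the three toy "versions" and the seeded fault are hypothetical stand-ins for independently developed implementations.

```python
from collections import Counter

def n_version_vote(versions, x):
    """Run each independently developed version on the same input and
    accept the majority result (hypothetical illustration)."""
    results = [v(x) for v in versions]
    winner, count = Counter(results).most_common(1)[0]
    if count > len(versions) // 2:
        return winner
    raise RuntimeError("no majority: versions disagree")

# Three toy 'versions' of a square-root-rounding routine; one has a seeded fault.
v1 = lambda x: round(x ** 0.5)
v2 = lambda x: round(x ** 0.5)
v3 = lambda x: round(x ** 0.5) + 1  # seeded fault

print(n_version_vote([v1, v2, v3], 16))  # majority vote masks the faulty version: 4
```

    Note that the consistent-comparison problem mentioned above arises precisely because correct versions computing with finite-precision arithmetic can legitimately disagree, so exact-equality voting like this sketch is not always sound in practice.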

    Documentation, editing, and translation: a Spanish-English thesaurus of typography and editing

    A study of the construction of a typography thesaurus and its translation into English is presented. A work of this type is needed because no controlled vocabulary covers the existing Spanish-language terminology in this field. A specialized, multidisciplinary, bilingual thesaurus with both systematic and alphabetical structures has been developed, conforming to the UNE 50-106-90 standard. The translation was carried out using a three-step method, drawing on reference works in both the source and target languages as well as bilingual Spanish-English dictionaries.

    Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    Many of the most challenging reliability problems of the present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find it too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs, beginning with CARE 3 and including a group of new programs such as HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and how well users can model systems with it is investigated. One important objective was to study how user friendly the program is, e.g., how easy it is to model a system, provide the input information, and interpret the results. The experiences of the author and his graduate students, who used HARP in two graduate courses, are described, along with brief comparisons to the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Since no answer can be more accurate than the fidelity of the model, an Appendix discussing modeling accuracy is included. A broad viewpoint is taken, and all problems that occurred in the use of HARP are discussed, including computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.
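    The kind of analytic reliability model that tools such as HARP solve can be illustrated with the textbook closed-form reliability of a triple-modular-redundancy (TMR) system under an exponential failure law. The failure rate and mission time below are illustrative assumptions only; real HARP models also account for imperfect coverage, repair, and fault-handling behavior that this sketch omits.

```python
import math

def component_reliability(lam, t):
    # Exponential failure law: R(t) = exp(-lambda * t)
    return math.exp(-lam * t)

def tmr_reliability(r):
    # Triple modular redundancy with a perfect voter: the system
    # survives if at least 2 of the 3 components survive.
    return 3 * r**2 - 2 * r**3

lam = 1e-4   # assumed failure rate per hour (illustrative, not from the study)
t = 1000.0   # mission time in hours
r = component_reliability(lam, t)
print(f"single component: {r:.4f}, TMR system: {tmr_reliability(r):.4f}")
```

    For component reliabilities above 0.5 the TMR expression exceeds the single-component value, which is the analytic payoff redundancy is meant to buy; below 0.5 the redundancy actually hurts, one reason such models need careful computer-aided evaluation.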

    Software fault tolerance using data diversity

    Research on data diversity is discussed. Data diversity relies on a different form of redundancy from existing approaches to software fault tolerance and is substantially less expensive to implement. It can also be applied to software testing, where it greatly facilitates automation. To date it has been explored both theoretically and in a pilot study, and it has been shown to be a promising technique. The effectiveness of data diversity as an error detection mechanism and its application to differential equation solvers are discussed.
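    The core idea of data diversity, retrying a failed computation on a mathematically equivalent re-expression of the input, can be sketched as a retry block. The fragile routine and its narrow fault region below are hypothetical illustrations, not drawn from the pilot study.

```python
import math

def fragile_cos(x):
    # Stand-in for a routine with an input-dependent fault: it fails
    # only on a narrow region of the input space (hypothetical fault).
    if 1.0 <= x <= 1.0001:
        raise ValueError("fault triggered")
    return math.cos(x)

def cos_with_data_diversity(x):
    """Retry block using data diversity: if the original input fails,
    re-express it as an equivalent input (cos(x) = cos(-x) and
    cos(x) = cos(x - 2*pi)) and retry."""
    for reexpressed in (x, -x, x - 2 * math.pi):
        try:
            return fragile_cos(reexpressed)
        except ValueError:
            continue
    raise RuntimeError("all re-expressions failed")

print(cos_with_data_diversity(1.0))  # succeeds via the re-expressed input -1.0
```

    The redundancy here is in the data, not in independently developed program versions, which is why the approach is so much cheaper: one implementation plus a re-expression algorithm replaces N separate development efforts.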

    Second generation experiments in fault tolerant software

    The purpose of the Multi-Version Software (MVS) experiment is to obtain empirical measurements of the performance of multi-version systems. Twenty versions of a program were prepared under reasonably realistic development conditions from the same specification. The overall structure of the testing environment for the MVS experiment, and its status, are described. A preliminary version of the control system implemented for the MVS experiment, which gives the experimenter control over the details of the testing, is described. The results of an empirical study of error detection using self-checks are also presented. Analysis of the checks revealed great differences in the ability of individual programmers to design effective checks.
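    Self-checks of the kind studied in the experiment are executable assertions embedded in a program to detect its own erroneous states at run time. The sketch below is a hypothetical illustration, not one of the checks the study's programmers wrote: a toy insertion sort guarded by two assertions on its output.

```python
from collections import Counter

def my_sort(xs):
    # Toy implementation under test (insertion sort).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_with_self_checks(xs):
    """Run the routine, then validate its result with executable
    assertions (self-checks) before returning it."""
    result = my_sort(xs)
    # Self-check 1: the output is ordered.
    assert all(a <= b for a, b in zip(result, result[1:])), "order check failed"
    # Self-check 2: the output is a permutation of the input.
    assert Counter(result) == Counter(xs), "permutation check failed"
    return result

print(sort_with_self_checks([3, 1, 2]))  # [1, 2, 3]
```

    The experimental finding quoted above is worth keeping in mind here: a check like "output is ordered" alone would miss a bug that drops or duplicates elements, and the ability to think of the second, completeness-style check is exactly where programmers were found to differ.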

    Software Reliability Issues: An Experimental Approach

    In this thesis, we present methodologies built around a data structure called the debugging graph, whereby the predictive performance of software reliability models can be analyzed and improved under laboratory conditions. The procedure substitutes the averages of large sample sets for the single-point samples normally used as inputs to these models, and thus supports scrutiny of their performance with less random variation in the input data. We first describe the construction of an extensive database of empirical reliability data, derived by testing each partially debugged version of the subject software represented by complete or partial debugging graphs. We demonstrate how these data can be used to assign relative sizes to known bugs and to simulate multiple debugging sessions. We then present the results of a series of proof-of-concept experiments, showing that controlling the fault recovery order represented in the data input to some well-known reliability models can enable them to produce more accurate predictions and can mitigate anomalous effects we attribute to the fault interaction phenomenon. Since limited testing resources are common in the real world, we demonstrate two approximation techniques, the surrogate oracle and path truncation, that render the application of our methodologies computationally feasible outside a laboratory setting. We report results supporting the assertion that reliability data collected from just a partial debugging graph, subject to these approximations, agrees qualitatively with data collected under ideal laboratory conditions, provided one accounts for the optimistic bias introduced by the surrogate in later prediction stages. We outline an algorithmic approach for using data derived from a partial debugging graph to improve software reliability predictions, and show its complexity to be no worse than O(n²). We summarize some outstanding questions as areas for future investigation of, and improvement to, the software reliability prediction process.
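    The idea of replacing single-point samples with averages over many simulated debugging sessions can be sketched with a toy Monte Carlo model. The per-fault "sizes" (the probability that a random test triggers each fault) and the session model below are assumptions for illustration, not the thesis's actual debugging-graph data or algorithm.

```python
import random

# Hypothetical per-fault 'sizes': probability a random test triggers the fault.
FAULT_SIZES = {"A": 0.05, "B": 0.02, "C": 0.01}

def simulate_debugging_session(sizes, rng):
    """Simulate one path through a debugging graph: test until some
    remaining fault fires, record the inter-failure time, remove the
    fault, and continue until all faults are fixed."""
    remaining = dict(sizes)
    times = []
    t = 0
    while remaining:
        t += 1
        if rng.random() < sum(remaining.values()):
            # A fault fired; pick which one in proportion to its size.
            fault = rng.choices(list(remaining), weights=list(remaining.values()))[0]
            times.append(t)
            del remaining[fault]
            t = 0
    return times

rng = random.Random(42)
sessions = [simulate_debugging_session(FAULT_SIZES, rng) for _ in range(1000)]
# Average the inter-failure times stage by stage: these averages, rather
# than one session's single-point samples, feed the reliability model.
avg = [sum(s[i] for s in sessions) / len(sessions) for i in range(3)]
print(avg)
```

    On average the inter-failure times grow from stage to stage, because larger faults tend to be removed first; averaging over many simulated sessions smooths out the randomness that a single observed debugging history would impose on a model's inputs.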