
    Reliability and maintainability assessment factors for reliable fault-tolerant systems

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Presented are the numerous factors that potentially have a degrading effect on system reliability, and the ways in which these factors, which are peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
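
    The combined analytic and simulative idea described above can be illustrated with a minimal sketch (this is not CARE III or GLOSS): the analytic reliability of a hypothetical triple modular redundancy (TMR) arrangement is compared against a Monte Carlo estimate, assuming independent, exponentially distributed module failures with an assumed failure rate and mission time.

    # Minimal sketch: analytic vs. simulative reliability estimate for a
    # hypothetical triple modular redundancy (TMR) system. The failure rate
    # and mission time are illustrative assumptions, not CARE III/GLOSS inputs.
    import math
    import random

    LAMBDA = 1e-4      # assumed per-module failure rate (failures per hour)
    MISSION = 1000.0   # assumed mission time (hours)

    def module_reliability(t):
        # Reliability of a single module with exponentially distributed failures.
        return math.exp(-LAMBDA * t)

    def tmr_reliability_analytic(t):
        # TMR survives if at least 2 of 3 modules survive: R = 3r^2 - 2r^3.
        r = module_reliability(t)
        return 3 * r**2 - 2 * r**3

    def tmr_reliability_simulated(t, trials=200000):
        # Monte Carlo estimate of the same quantity.
        survived = 0
        for _ in range(trials):
            alive = sum(random.expovariate(LAMBDA) > t for _ in range(3))
            survived += alive >= 2
        return survived / trials

    if __name__ == "__main__":
        print("analytic :", round(tmr_reliability_analytic(MISSION), 6))
        print("simulated:", round(tmr_reliability_simulated(MISSION), 6))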

    1991 NASA Life Support Systems Analysis Workshop

    The 1991 Life Support Systems Analysis Workshop was sponsored by NASA Headquarters' Office of Aeronautics and Space Technology (OAST) to foster communication among NASA, industrial, and academic specialists, and to integrate their inputs and disseminate information to them. The overall objective of systems analysis within the Life Support Technology Program of OAST is to identify, guide the development of, and verify designs which will increase the performance of life support systems on the component, subsystem, and system levels for future human space missions. The specific goals of this workshop were to report on the status of systems analysis capabilities, to integrate chemical processing industry technologies, and to integrate recommendations for future technology developments related to systems analysis for life support systems. The workshop included technical presentations, discussions, and interactive planning, with time allocated for discussion of both technology status and time-phased technology development recommendations. Key personnel from NASA, industry, and academia delivered inputs and presentations on the status and priorities of current and future systems analysis methods and requirements.

    Doctor of Philosophy

    Water resources are limited and disproportionately distributed in time and place. Moreover, complex interactions among different components of the water system, changes in population and urbanization growth rates, and climate change have increased the uncertainty influencing water resource planning. The ultimate question arising for water managers considering the complexity of water systems is how to determine if management strategies are effective and improve the performance of a water system. Generally, decision-makers assess the system's condition based on a univariate measure of reliability or vulnerability. However, these measures do not deliver sufficient information and present a limited view of the system's performance. There is a known need to study water resources in an integrated fashion to effectively manage for the present and the future. In this dissertation, a new comprehensive integrated modeling and performance assessment framework is offered. First, a new approach is designed to assess vulnerability of a water system based on important factors including exposure, sensitivity, severity, potential severity, social vulnerability, and adaptive capacity. Then, instead of an individual metric, the joint probability distribution of reliability and vulnerability based on a copula function is developed to estimate a new index, the Water System Performance Index (WSPI), to evaluate the reliability and vulnerability of a water system simultaneously. To test the effectiveness of the framework and demonstrate the advances of the new performance index, a practical application is conducted for the Salt Lake City Department of Public Utilities (SLCDPU) water system. For this purpose, an integrated water resource management (IWRM) model is developed using a system dynamics approach for the case study. Management alternatives are incorporated into the model using a decision support tool designed for use by water managers and stakeholders. Results of the study show an inconsistency in the degree of vulnerability between traditionally used and the new vulnerability assessment approaches. The use of the integrated model and new vulnerability approach is also shown to provide more informative guidance for decision makers evaluating alternative management strategies during failure events. Furthermore, results illustrate the effectiveness of the WSPI to identify critical conditions when there is a need for a combined measure of performance. In terms of water management decision making, the final results of this dissertation indicate that centralized water storage solutions improve water system performance better than rainwater harvesting for the Salt Lake City case study.
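
    A minimal sketch of the copula idea behind a joint reliability-vulnerability measure, assuming a Gaussian copula fitted to hypothetical annual reliability and vulnerability series; the exact WSPI formulation, data, and thresholds are specific to the dissertation and are not reproduced here.

    # Minimal sketch of a copula-based joint performance measure, assuming a
    # Gaussian copula over reliability and vulnerability. The data, thresholds,
    # and index definition are purely illustrative, not the dissertation's WSPI.
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    rng = np.random.default_rng(0)

    # Hypothetical annual reliability and vulnerability series (fractions).
    reliability = rng.uniform(0.7, 1.0, size=30)
    vulnerability = 1.0 - reliability + rng.normal(0.0, 0.05, size=30)

    def empirical_cdf(sample, x):
        # Empirical probability that the sample is <= x (kept strictly in (0, 1)).
        return (np.sum(sample <= x) + 0.5) / (len(sample) + 1)

    # Transform the observed pairs to normal scores and estimate the copula
    # correlation parameter.
    u = np.array([empirical_cdf(reliability, x) for x in reliability])
    v = np.array([empirical_cdf(vulnerability, x) for x in vulnerability])
    z = norm.ppf(np.column_stack([u, v]))
    rho = np.corrcoef(z[:, 0], z[:, 1])[0, 1]

    def joint_prob(rel_threshold, vul_threshold):
        # P(reliability <= rel_threshold AND vulnerability <= vul_threshold)
        # under the fitted Gaussian copula.
        a = norm.ppf(empirical_cdf(reliability, rel_threshold))
        b = norm.ppf(empirical_cdf(vulnerability, vul_threshold))
        cov = [[1.0, rho], [rho, 1.0]]
        return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b])

    # Example: probability of simultaneously low reliability and low vulnerability.
    print("rho =", round(rho, 3), "joint prob =", round(joint_prob(0.8, 0.2), 3))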

    Integration of software reliability into systems reliability optimization

    Reliability optimization, originally developed for hardware systems, is extended to incorporate software into an integrated system reliability optimization. This hardware-software reliability optimization problem is formulated as a mixed-integer programming problem in which the integer variables are the numbers of redundancies and the real variables are the component reliabilities. To find a common framework under which hardware systems and software systems can be combined, a review and classification of existing software reliability models is conducted. A software redundancy model with common-cause failure is developed to represent the objective function; this model includes hardware redundancy with independent failures as a special case. A software reliability-cost function is then derived from a binomial-type software reliability model to represent the constraint function. Two techniques, the combination of a heuristic redundancy method with a sequential search method, and the Lagrange multiplier method with the branch-and-bound method, are proposed to solve this mixed-integer reliability optimization problem. The relative merits of four major heuristic redundancy methods and two sequential search methods are investigated through a simulation study; the results indicate that the sequential search method is the dominating factor in the combined method. The two proposed mixed-integer programming techniques are also compared by solving two numerical problems, a series system with linear constraints and a bridge system with nonlinear constraints. The Lagrange multiplier method with the branch-and-bound method is shown to be superior to all other existing methods in obtaining the optimal solution. Finally, an illustration is given of integrating a software reliability model into systems reliability optimization.
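
    The integer side of this formulation can be illustrated with a minimal sketch: maximizing the reliability of a series system of parallel subsystems over integer redundancy levels under a cost budget, solved here by brute-force enumeration. The component reliabilities, costs, and budget are assumed for illustration; the common-cause software model and the Lagrange multiplier/branch-and-bound solution methods from the dissertation are not shown.

    # Minimal sketch of the redundancy-allocation side of hardware-software
    # reliability optimization: choose integer redundancy levels n_i to maximize
    # series-system reliability subject to a cost budget. All numbers assumed.
    from itertools import product

    component_reliability = [0.90, 0.85, 0.95]   # assumed per-unit reliabilities
    component_cost        = [2.0, 3.0, 1.5]      # assumed per-unit costs
    budget = 20.0
    max_redundancy = 5

    def system_reliability(redundancy):
        # Series system of parallel subsystems: R = prod(1 - (1 - r_i)^n_i).
        r = 1.0
        for r_i, n_i in zip(component_reliability, redundancy):
            r *= 1.0 - (1.0 - r_i) ** n_i
        return r

    best = None
    for n in product(range(1, max_redundancy + 1), repeat=len(component_reliability)):
        cost = sum(c * k for c, k in zip(component_cost, n))
        if cost <= budget:
            rel = system_reliability(n)
            if best is None or rel > best[1]:
                best = (n, rel, cost)

    n_opt, rel_opt, cost_opt = best
    print("redundancies:", n_opt, "reliability:", round(rel_opt, 5), "cost:", cost_opt)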

    Availability Modeling of Modular Software

    Dependability evaluation is a basic component in the assessment of the quality of repairable systems. We develop here a general model specifically designed for software systems that allows the evaluation of different dependability metrics, in particular of availability measures. The model is of the structural type, based on Markov process theory. In particular, it can be viewed as an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. We give both the mathematical results necessary for the transient analysis of this general model and the algorithms that allow it to be evaluated efficiently. More specifically, from the parameters describing the evolution of the execution process when there is no failure, the failure processes together with the way they affect the execution, and the recovery process, we obtain the distribution function of the number of failures over a fixed mission period. In fact, we obtain dependability metrics which are much more informative than the usual ones given in a white-box approach. We briefly discuss the estimation procedures for the parameters of the model. Through simple examples, we illustrate the interest of such a structural view and explain how to take into account reliability growth of part of the software with the transformation approach developed by Laprie et al. Finally, the complete transient analysis of our model allows us to discuss, in our context, the Poisson approximation reported by Littlewood for his model.
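
    The structural, Markov-based view can be illustrated with a minimal sketch: a small continuous-time Markov chain for a hypothetical two-module software system with exponential failure and recovery rates, whose transient analysis yields point and average availability. The states and rates below are illustrative assumptions, not the model or algorithms developed in the paper.

    # Minimal sketch of a structural Markov availability model: two software
    # modules with assumed failure and recovery rates. Transient analysis via
    # the matrix exponential gives point availability A(t).
    import numpy as np
    from scipy.linalg import expm

    # States: 0 = both modules up, 1 = module A recovering, 2 = module B recovering.
    lam_a, lam_b = 0.02, 0.01   # assumed failure rates (per hour)
    mu_a, mu_b   = 1.0, 0.5     # assumed recovery rates (per hour)

    # Infinitesimal generator of the continuous-time Markov chain.
    Q = np.array([
        [-(lam_a + lam_b), lam_a,  lam_b],
        [ mu_a,           -mu_a,   0.0 ],
        [ mu_b,            0.0,   -mu_b],
    ])

    p0 = np.array([1.0, 0.0, 0.0])   # start with both modules up
    up_states = [0]                  # service is delivered only in state 0

    def point_availability(t):
        # P(system is up at time t) from the transient distribution p0 * expm(Q t).
        pt = p0 @ expm(Q * t)
        return float(pt[up_states].sum())

    # Average availability over a mission via the trapezoid rule on a uniform grid.
    mission = 100.0
    grid = np.linspace(0.0, mission, 201)
    vals = np.array([point_availability(t) for t in grid])
    avg_avail = float(np.mean((vals[:-1] + vals[1:]) / 2.0))

    print("A(100h) =", round(point_availability(mission), 5),
          "mean availability =", round(avg_avail, 5))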

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
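
    A minimal sketch of the kind of property mentioned above: a small discrete-time Markov chain for two sensors and a bounded reachability check of the form "the probability that both sensors have failed within N steps is below a threshold". The chain and its probabilities are illustrative assumptions; dedicated probabilistic model checkers operate on much richer models and property languages.

    # Minimal sketch of a quantitative (probabilistic) verification check on a
    # hand-built discrete-time Markov chain. All transition probabilities,
    # the step bound, and the threshold are illustrative assumptions.
    import numpy as np

    # States: 0 = both sensors ok, 1 = one sensor failed, 2 = both failed (absorbing).
    P = np.array([
        [0.9990, 0.0009, 0.0001],
        [0.0000, 0.9950, 0.0050],
        [0.0000, 0.0000, 1.0000],
    ])

    def prob_reach_within(target, steps, start=0):
        # Probability of being in the absorbing target state within `steps` steps,
        # i.e. the probability of reaching it by then.
        dist = np.zeros(P.shape[0])
        dist[start] = 1.0
        for _ in range(steps):
            dist = dist @ P
        return float(dist[target])

    threshold = 0.001
    p_fail = prob_reach_within(target=2, steps=10, start=0)
    print("P(both sensors failed within 10 steps) =", round(p_fail, 6))
    print("property satisfied" if p_fail < threshold else "property violated")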

    NASA SBIR abstracts of 1990 phase 1 projects

    The research objectives of the 280 projects placed under contract in the National Aeronautics and Space Administration (NASA) 1990 Small Business Innovation Research (SBIR) Phase 1 program are described. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses in response to NASA's 1990 SBIR Phase 1 Program Solicitation. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 280, in order of its appearance in the body of the report. The document also includes appendixes that provide additional information about the SBIR program and permit cross-referencing of the 1990 Phase 1 projects by company name, location by state, principal investigator, NASA field center responsible for management of each project, and NASA contract number.

    Bridges Structural Health Monitoring and Deterioration Detection: Synthesis of Knowledge and Technology

    INE/AUTC 10.0