    Markov Chain Monte Carlo joint analysis of Chandra X-ray imaging spectroscopy and Sunyaev-Zeldovich Effect data

    X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and Sunyaev-Zeldovich Effect data were obtained from the BIMA and OVRO arrays. We introduce a Markov chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are its high computational efficiency and its ability to measure simultaneously the probability distributions of all parameters of interest, including the spatial and spectral properties of the cluster gas as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods demonstrate the robustness of the method. This method will be used in follow-up papers to determine the distances to a large sample of galaxy clusters.
    Comment: Accepted by ApJ, scheduled for the 10 October 2004 (v614) issue. The title was changed, more convergence diagnostic tests were added, Figure 7 was converted to lower resolution for easier download, and other minor changes were made.
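
    As an illustration of the core machinery, the sketch below implements a random-walk Metropolis-Hastings sampler for a toy two-parameter problem. The Gaussian pseudo-likelihood, parameter values, and step size are assumptions for illustration only; the paper's actual likelihood jointly models Chandra imaging spectroscopy and OVRO/BIMA SZE data.

```python
import numpy as np

# Toy joint log-likelihood: two "cluster" parameters (e.g. a central
# density and a core radius) constrained by synthetic data.  The paper's
# real likelihood combines X-ray and SZE data; this stand-in merely
# illustrates the Metropolis-Hastings machinery.
rng = np.random.default_rng(42)
true_theta = np.array([1.0, 0.3])

def log_likelihood(theta):
    # Gaussian pseudo-likelihood around the true parameters (hypothetical).
    return -0.5 * np.sum(((theta - true_theta) / np.array([0.1, 0.05])) ** 2)

def metropolis_hastings(log_like, theta0, n_steps=20000, step=0.02):
    """Random-walk Metropolis sampler returning the full chain."""
    chain = np.empty((n_steps, len(theta0)))
    theta, ll = np.asarray(theta0, float), log_like(theta0)
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        ll_prop = log_like(proposal)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
            theta, ll = proposal, ll_prop
        chain[i] = theta
    return chain

chain = metropolis_hastings(log_likelihood, [0.5, 0.5])
burn = chain[5000:]                      # discard burn-in samples
print("posterior means:", burn.mean(axis=0))
print("posterior stds: ", burn.std(axis=0))
# Distributions of derived quantities (e.g. a cluster distance) follow by
# evaluating the derived function on each retained chain sample.
```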

    Stress Rupture Life Reliability Measures for Composite Overwrapped Pressure Vessels

    Composite Overwrapped Pressure Vessels (COPVs) are often used for storing pressurant gases onboard spacecraft. Kevlar (DuPont), glass, carbon, and other more recent fibers have all been used as overwraps. Because the overwraps are subjected to sustained loads for an extended period during a mission, stress rupture failure is a major concern. It is therefore important to ascertain the reliability of these vessels by analysis, since the testing of each flight design cannot be completed on a practical time scale. The present paper examines a Weibull-statistics-based stress rupture model and considers the various uncertainties associated with the model parameters. The paper also examines several reliability estimate measures that would be of use for recertification and for qualifying the flight worthiness of these vessels. Specifically, deterministic values for a point estimate, a mean estimate, and 90/95 percent confidence estimates of the reliability are all examined for a typical flight-quality vessel under constant stress. The mean and 90/95 percent confidence estimates are computed using Monte Carlo simulation, assuming distributions of the model parameters based on simulation and on the available data, especially the sample sizes represented in the data. The data for the stress rupture model come from the Lawrence Livermore National Laboratory (LLNL) stress rupture testing program, carried out over the past 35 years. Deterministic as well as probabilistic sensitivities are examined.
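
    A minimal sketch of the uncertainty-propagation idea follows, assuming a two-parameter Weibull reliability model R(t) = exp(-(t/eta)^beta). The mission time, nominal parameters, and sampling distributions below are placeholders, not the LLNL-derived values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Weibull stress rupture model: R(t) = exp(-(t/eta)**beta).
# All numbers below are illustrative assumptions.
t_mission = 10.0 * 365 * 24             # assumed mission duration in hours

def reliability(t, beta, eta):
    return np.exp(-(t / eta) ** beta)

# Point estimate from nominal (assumed) parameters.
beta_hat, eta_hat = 1.2, 5.0e6
print("point estimate:", reliability(t_mission, beta_hat, eta_hat))

# Propagate parameter uncertainty by Monte Carlo: draw (beta, eta) pairs
# from assumed sampling distributions and examine the reliability spread.
n = 100_000
beta_s = rng.normal(beta_hat, 0.1, n)           # assumed scatter in shape
eta_s = rng.lognormal(np.log(eta_hat), 0.2, n)  # assumed scatter in scale
R = reliability(t_mission, beta_s, eta_s)

print("mean estimate:   ", R.mean())
print("5th percentile R:", np.percentile(R, 5))  # one-sided lower bound
# The paper's 90/95 estimate would tie these sampling distributions to the
# LLNL test sample sizes, nesting a second simulation loop.
```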

    Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    This Technical Publication (TP) addresses a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, with detailed derivations in the appendices. It first introduces Monte Carlo simulation and the major topics to be discussed, including the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do when results are desired for events that happen only rarely, and postprocessing, which covers analyzing any failed runs, examples of useful output products, and the statistical machinery for generating desired results from the output data. Topics in the appendices include tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models that include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
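
    One of the appendix topics, the number of runs required, has a standard zero-failure form that can be sketched directly. The reliability and confidence values below are illustrative, and the TP's own tables may use a more general formulation that admits failures and consumer risk.

```python
import math

# How many Monte Carlo runs are needed to verify a requirement?  A common
# zero-failure argument: if all N runs succeed, the requirement "success
# probability >= R at confidence C" is demonstrated when
#     R**N <= 1 - C   =>   N >= ln(1 - C) / ln(R).
# This is the standard binomial result, not necessarily the TP's exact tables.
def runs_required(R, C):
    return math.ceil(math.log(1.0 - C) / math.log(R))

print(runs_required(0.997, 0.90))  # ~766 runs: 0.997 at 90% confidence
print(runs_required(0.99, 0.95))   # ~298 runs: 0.99 at 95% confidence
```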

    A RISK-INFORMED DECISION-MAKING METHODOLOGY TO IMPROVE LIQUID ROCKET ENGINE PROGRAM TRADEOFFS

    This work provides a risk-informed decision-making methodology to improve liquid rocket engine program tradeoffs among the conflicting areas of concern of affordability, reliability, and initial operational capability (IOC) by taking into account psychological and economic theories in combination with reliability engineering. Technical program risks are associated with the number of predicted failures of the test-analyze-and-fix (TAAF) cycle, which is based on the maturity of the engine components. Financial and schedule program risks are associated with the epistemic uncertainty of the models that determine the measures of effectiveness in the three areas of concern. The inputs of the affordability and IOC models reflect non-technical and technical factors such as team experience, design scope, technology readiness level, and manufacturing readiness level. The reliability model introduces the Reliability-as-an-Independent-Variable (RAIV) strategy, which aggregates fictitious or actual hot-fire tests of testing profiles that differ from the actual mission profile to estimate the system reliability. The main RAIV strategy inputs are the physical or functional architecture of the system, the principal test plan strategy, a stated reliability-by-credibility requirement, and the failure mechanisms that define the reliable life of the system components. The results of the RAIV strategy, namely the number of hardware sets and the number of hot-fire tests, are used as inputs to the affordability and IOC models. Satisficing within each tradeoff is attained by maximizing the weighted sum of the normalized areas of concern, subject to constraints based on the decision-maker's targets and uncertainty about affordability, reliability, and IOC, using genetic algorithms. In the planning stage of an engine program, the decision variables of the genetic algorithm correspond to fictitious hot-fire tests that include TAAF cycle failures. In the program execution stage, the RAIV strategy is used as a reliability growth planning, tracking, and projection model. The main contributions of this work are the development of a comprehensible and consistent risk-informed tradeoff framework, the RAIV strategy that links affordability and reliability, a strategy for defining an industry or government standard or guideline for liquid rocket engine hot-fire test plans, and an alternative to the U.S. Crow/AMSAA reliability growth model applying the RAIV strategy.
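
    The satisficing step can be sketched as a weighted-sum maximization under a target constraint, solved with a minimal genetic algorithm. The decision variables, measure models, weights, and target below are hypothetical stand-ins for the work's affordability, reliability, and IOC models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the weighted-sum tradeoff: maximize
#   w_a * affordability + w_r * reliability + w_i * IOC
# over a decision vector x, subject to a decision-maker target.  The three
# measure functions are toy placeholders, not the dissertation's models.
w = np.array([0.4, 0.4, 0.2])

def measures(x):
    cost, tests = x                       # hypothetical decision variables
    affordability = 1.0 - cost            # all measures normalized to [0, 1]
    reliability = 1.0 - np.exp(-5.0 * tests)
    ioc = 1.0 - 0.5 * (cost + tests)      # earlier IOC if the program is lean
    return np.array([affordability, reliability, ioc])

def fitness(x):
    m = measures(x)
    penalty = 10.0 * max(0.0, 0.8 - m[1])  # assumed target: reliability >= 0.8
    return float(w @ m) - penalty

# Minimal genetic algorithm: tournament selection, blend crossover, mutation.
pop = rng.uniform(0, 1, size=(60, 2))
for gen in range(200):
    fit = np.array([fitness(x) for x in pop])
    new = [pop[fit.argmax()].copy()]                # elitism: keep the best
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] > fit[j] else pop[j]   # tournament pick 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit[i] > fit[j] else pop[j]   # tournament pick 2
        child = 0.5 * (a + b) + rng.normal(0, 0.05, 2)  # crossover + mutation
        new.append(np.clip(child, 0, 1))
    pop = np.array(new)

best = pop[np.array([fitness(x) for x in pop]).argmax()]
print("best decision vector:", best, "fitness:", fitness(best))
```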

    Identification of Causal Paths and Prediction of Runway Incursion Risk using Bayesian Belief Networks

    In the U.S. and worldwide, runway incursions are widely acknowledged as a critical concern for aviation safety. However, despite widespread attempts to reduce the frequency of runway incursions, the rate at which these events occur in the U.S. has steadily risen over the past several years. Attempts to analyze runway incursion causation have been made, but these methods are often limited to investigations of discrete events and do not address the dynamic interactions that lead to breaches of runway safety. While the generally static nature of runway incursion research is understandable given that data are often sparsely available, the unmitigated rate at which runway incursions take place indicates a need for more comprehensive risk models that extend currently available research. This dissertation summarizes the existing literature, emphasizing the need for cross-domain methods of causation analysis applied to runway incursions in the U.S. and reviewing probabilistic methodologies for reasoning under uncertainty. A holistic modeling technique using Bayesian Belief Networks as a means of interpreting causation even in the presence of sparse data is outlined in three phases: causal factor identification, model development, and expert elicitation, with intended application at the systems or regulatory agency level. Further, the importance of investigating runway incursions probabilistically and incorporating information from human factors, technological, and organizational perspectives is supported. A method for structuring a Bayesian network using quantitative and qualitative event analysis in conjunction with structured expert probability estimation is outlined, and results are presented for propagation of evidence through the model as well as for causal analysis. In this research, advances in the aggregation of runway incursion data are outlined, and a means of combining quantitative and qualitative information is developed. Building upon these data, a method for developing and validating a Bayesian network while maintaining operational transferability is also presented. Further, the body of knowledge is extended with respect to structured expert judgment, as operationalization is combined with elicitation of expert data to create a technique for gathering expert assessments of probability in a computationally compact manner while preserving mathematical accuracy in rank correlation and dependence structure. The model developed in this study is shown to produce accurate results within the U.S. aviation system and to provide a dynamic, inferential platform for future evaluation of runway incursion causation. These results in part confirm what is known about runway incursion causation, but more importantly they shed light on multifaceted causal interactions, and they do so in a modeling space that allows for causal inference and evaluation of changes to the system in a dynamic setting. Suggestions for future research are also discussed, most prominent of which is that this model allows for robust and flexible assessment of mitigation strategies within a holistic model of runway safety.
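
    A toy Bayesian belief network, sketched below, illustrates the kind of inference such a model supports: propagating evidence forward to predict incursion risk and backward for causal diagnosis. The network structure and all probabilities are invented for illustration and are not the dissertation's elicited model.

```python
from itertools import product

# Hypothetical structure: Visibility -> CommError -> PilotDeviation ->
# Incursion, with Visibility also a direct parent of Incursion.
P_vis_poor = 0.2
P_comm = {True: 0.15, False: 0.05}   # P(CommError | Visibility poor?)
P_dev = {True: 0.30, False: 0.02}    # P(PilotDeviation | CommError?)
P_inc = {(True, True): 0.40, (True, False): 0.10,
         (False, True): 0.25, (False, False): 0.01}
# keys: (PilotDeviation, Visibility poor)

def joint(vis, comm, dev, inc):
    """Joint probability of one full assignment of the four variables."""
    p = P_vis_poor if vis else 1 - P_vis_poor
    p *= P_comm[vis] if comm else 1 - P_comm[vis]
    p *= P_dev[comm] if dev else 1 - P_dev[comm]
    p *= P_inc[(dev, vis)] if inc else 1 - P_inc[(dev, vis)]
    return p

def posterior(query, evidence):
    """P(query=True | evidence) by brute-force enumeration."""
    names = ["vis", "comm", "dev", "inc"]
    num = den = 0.0
    for values in product([True, False], repeat=4):
        world = dict(zip(names, values))
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(**world)
        den += p
        if world[query]:
            num += p
    return num / den

print("P(incursion | comm error):", posterior("inc", {"comm": True}))
print("P(comm error | incursion):", posterior("comm", {"inc": True}))
```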

    Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    This paper describes a novel approach, based on the use of coherence functions and statistical theory, for sensor validation in a harsh environment. Using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. The advanced diagnostic approach and the novel data processing methodology discussed here provide a single number that conveys this information. This number, calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method for creating confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, demonstrating the robustness of the technique.
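
    The aligned/unaligned comparison can be sketched with synthetic signals and SciPy's coherence estimator. The sample rate, noise levels, and the simple averaged-difference health number below are assumptions for illustration, not the paper's exact processing chain.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs, n = 5000.0, 2**16                            # assumed sample rate/length
common = rng.standard_normal(n)                  # shared "combustor" signal
x = common + 0.5 * rng.standard_normal(n)        # healthy sensor 1
y = common + 0.5 * rng.standard_normal(n)        # healthy sensor 2

# "Aligned" coherence: simultaneous records from the two sensors.
f, cxy_aligned = signal.coherence(x, y, fs=fs, nperseg=1024)

# "Unaligned" coherence: shift one record so the shared signal decorrelates;
# this estimates the coherence floor expected from noise alone.
f, cxy_unaligned = signal.coherence(x, np.roll(y, n // 2), fs=fs, nperseg=1024)

# A degraded or failed sensor drags the aligned coherence down toward the
# unaligned floor; averaging the difference yields a single health number.
health = float(np.mean(cxy_aligned - cxy_unaligned))
print("mean aligned coherence:  ", float(np.mean(cxy_aligned)))
print("mean unaligned coherence:", float(np.mean(cxy_unaligned)))
print("health indicator:        ", health)
```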

    Evaluating instructional designs with mental workload assessments in university classrooms

    Cognitive load theory (CLT) was conceived to improve instructional design practices. Although it has been researched for many years, one open problem is a clear definition of its cognitive load types and their aggregation into an index of overall cognitive load. In Ergonomics the situation is different, with plenty of research devoted to the development of robust constructs of mental workload (MWL). By drawing a parallel between CLT and MWL, and by integrating relevant theories and measurement techniques from these two fields, this paper investigates the reliability, validity, and sensitivity of three existing self-reported mental workload measures when applied to long learning sessions, namely the NASA Task Load Index, the Workload Profile, and the Rating Scale Mental Effort, in a typical university classroom. These measures served to evaluate two instructional conditions. Evidence suggests the selected measures are reliable, and their moderate validity is in line with results obtained within Ergonomics. Additionally, an analysis of their sensitivity employing the descriptive Harrell-Davis estimator suggests that the Workload Profile is more sensitive than the NASA Task Load Index and the Rating Scale Mental Effort for long learning sessions.
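
    The Harrell-Davis estimator used in the sensitivity analysis is a Beta-weighted average of all order statistics rather than a single one. A minimal implementation follows, applied to hypothetical workload ratings rather than the study's data.

```python
import numpy as np
from scipy import stats

def harrell_davis(x, q):
    """Harrell-Davis estimate of the q-th quantile: a Beta(a, b)-weighted
    average of the sorted sample, with a = (n+1)q and b = (n+1)(1-q)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    edges = stats.beta.cdf(np.arange(n + 1) / n, a, b)
    weights = np.diff(edges)             # weight on each order statistic
    return float(weights @ x)

# Hypothetical workload ratings from two instructional conditions (the
# study itself compared NASA-TLX, Workload Profile, and RSME scores).
rng = np.random.default_rng(3)
cond_a = rng.normal(55, 12, 40)
cond_b = rng.normal(62, 12, 40)
for q in (0.25, 0.5, 0.75):
    print(q, harrell_davis(cond_a, q), harrell_davis(cond_b, q))
```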