
    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This gives the expert judgement problem in the design context features that are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
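    As a brief illustration of one common aggregation device behind such "rational consensus" analyses, the Python sketch below forms a weighted linear opinion pool over expert density estimates for a component failure rate. This is a minimal sketch, not the paper's method: the expert distributions and the performance weights are invented inputs (a full approach such as Cooke's classical model would derive the weights from calibration questions).

```python
# Minimal sketch: weighted linear opinion pool over expert densities.
# All numbers below are illustrative assumptions, not data from the paper.
import numpy as np

lam = np.linspace(1e-4, 1e-2, 500)  # candidate failure rates (per hour)

def lognorm_pdf(x, mu, sigma):
    # Lognormal density, a common shape for elicited failure rates.
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * np.sqrt(2 * np.pi))

# Three experts' assessed distributions for the failure rate (toy values).
experts = [lognorm_pdf(lam, np.log(2e-3), 0.5),
           lognorm_pdf(lam, np.log(1e-3), 0.8),
           lognorm_pdf(lam, np.log(4e-3), 0.4)]
weights = np.array([0.5, 0.3, 0.2])  # assumed performance weights

pooled = sum(w * p for w, p in zip(weights, experts))
pooled /= np.trapz(pooled, lam)      # renormalize the pooled density
print("pooled mean failure rate:", np.trapz(lam * pooled, lam))
```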

    An adaptive sampling method for global sensitivity analysis based on least-squares support vector regression

    In engineering, surrogate models are commonly used to approximate the behavior of a physical phenomenon in order to reduce computational costs. A surrogate model is generally created from a set of training data, for which a typical statistical design method is Latin hypercube sampling (LHS). Although LHS achieves a space-filling distribution of the training data, the sampling process takes no information about the underlying behavior of the physical phenomenon into account, and new data cannot be sampled from the same distribution if the approximation quality is insufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, least-squares support vector regression. The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model: areas of higher uncertainty require more sample data. The approach is computationally efficient owing to the properties of least-squares support vector regression. The benefits of the adaptive sampling method are demonstrated in comparison with LHS on several analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than LHS. The applications in this paper show that the presented adaptive sampling method improves the estimation of global sensitivity values, thus noticeably reducing the overall computational costs.
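    As a rough sketch of the idea (not the authors' exact algorithm), the Python toy below fits an LS-SVR surrogate in its dual form and greedily adds samples where a simple local-uncertainty proxy is largest. The RBF kernel, the leave-one-out-residual-times-distance criterion, and the toy target function are all assumptions chosen for illustration.

```python
# Sketch of LS-SVR-based adaptive sampling under assumed settings.
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    # Gaussian RBF kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, gamma=10.0, reg=1e-3):
    # LS-SVR dual form: solve [[0, 1^T], [1, K + reg*I]] @ [b; alpha] = [0; y].
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + reg * np.eye(n)
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvr_predict(Xtr, b, alpha, Xnew, gamma=10.0):
    return rbf_kernel(Xnew, Xtr, gamma) @ alpha + b

def local_uncertainty(Xtr, y, Xcand, gamma=10.0):
    # Proxy for local prognosis uncertainty (an assumption, not the paper's
    # exact criterion): leave-one-out residual of the nearest training point,
    # scaled by the distance to it, so sparse, poorly-predicted regions win.
    n = len(Xtr)
    loo = np.empty(n)
    for i in range(n):
        m = np.arange(n) != i
        bi, ai = lssvr_fit(Xtr[m], y[m], gamma)
        loo[i] = abs(y[i] - lssvr_predict(Xtr[m], bi, ai, Xtr[i:i + 1], gamma)[0])
    d = np.sqrt(((Xcand[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
    return loo[d.argmin(1)] * d.min(1)

rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2   # toy target function
X = rng.random((8, 2))
y = f(X)
for _ in range(20):                                      # adaptive refinement
    cand = rng.random((200, 2))
    X = np.vstack([X, cand[local_uncertainty(X, y, cand).argmax()]])
    y = f(X)

b, alpha = lssvr_fit(X, y)
grid = rng.random((500, 2))
err = np.abs(f(grid) - lssvr_predict(X, b, alpha, grid))
print(f"{len(X)} samples, max abs error on random test grid: {err.max():.3f}")
```

    The greedy argmax step is the essential design choice: unlike LHS, each new point is placed where the current surrogate is believed to be least trustworthy, so the budget concentrates on hard regions.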

    FRAM for systemic accident analysis: a matrix representation of functional resonance

    Due to the inherent complexity of today's Air Traffic Management (ATM) system, standard methods that treat an event as a linear sequence of failures can be inappropriate. Adopting a systemic perspective, the Functional Resonance Analysis Method (FRAM), originally developed by Hollnagel, helps identify non-linear combinations of events and their interrelationships. This paper aims to strengthen FRAM-based accident analyses by discussing the Resilience Analysis Matrix (RAM), a user-friendly tool that supports the analyst and reduces the complexity of the FRAM representation. The RAM offers a two-dimensional representation that systematically highlights connections among couplings, and thus even highly connected groups of couplings. As an illustrative case study, this paper develops a systemic accident analysis of the runway incursion that occurred in February 1991 at LAX airport, involving SkyWest Flight 5569 and USAir Flight 1493. FRAM proves to be a powerful method for characterizing the variability of the operational scenario, identifying the dynamic couplings that played a critical role during the event, and helping to discuss the systemic effects of variability at different levels of analysis.
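    To make the matrix idea concrete, here is a hypothetical Python toy (the function names are invented stand-ins, and the RAM's actual layout in the paper may differ): FRAM couplings, each linking one function's Output to an aspect of a downstream function, are tallied into a square matrix whose row and column sums immediately expose the most highly connected functions.

```python
# Hypothetical mini-example of a matrix view of FRAM couplings.
import numpy as np

functions = ["ATC clearance", "Crew readback", "Runway crossing", "Takeoff roll"]
# (upstream function, downstream function, downstream aspect) triples;
# aspects follow FRAM's Input/Precondition/Resource/Time/Control scheme.
couplings = [
    ("ATC clearance", "Crew readback", "Input"),
    ("Crew readback", "ATC clearance", "Input"),
    ("ATC clearance", "Runway crossing", "Precondition"),
    ("ATC clearance", "Takeoff roll", "Precondition"),
    ("Runway crossing", "Takeoff roll", "Control"),
]

idx = {name: i for i, name in enumerate(functions)}
ram = np.zeros((len(functions), len(functions)), dtype=int)
for up, down, _aspect in couplings:
    ram[idx[up], idx[down]] += 1   # tally each Output -> aspect coupling

# Row/column sums expose highly connected functions at a glance.
out_deg, in_deg = ram.sum(axis=1), ram.sum(axis=0)
for name in functions:
    print(f"{name:16s} out={out_deg[idx[name]]} in={in_deg[idx[name]]}")
```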

    Evaluation of Coordinated Ramp Metering (CRM) Implemented By Caltrans

    Coordinated ramp metering (CRM) is a critical component of smart freeway corridors that rely on real-time traffic data from ramps and the freeway mainline to improve decision-making by motorists and Traffic Management Center (TMC) personnel. CRM uses an algorithm that considers real-time traffic volumes on the freeway mainline and ramps and adjusts the metering rates on the ramps accordingly for optimal flow along the entire corridor. Improving capacity through smart corridors is less costly and easier to deploy than freeway widening, given the high costs associated with right-of-way acquisition and construction. Nevertheless, conversion to smart corridors still represents a sizable investment for public agencies, and in the U.S. there have been limited evaluations of smart corridors in general, and CRM in particular, based on real operational data. This project examined the recent smart corridor implementations on Interstate 80 (I-80) in the Bay Area and State Route 99 (SR-99) in Sacramento using travel time reliability measures, efficiency measures, and a before-and-after safety evaluation based on the Empirical Bayes (EB) approach. As such, this evaluation represents the most complete before-and-after evaluation of such systems to date. The reliability measures include the buffer index, planning time, and measures from the literature that account for both the skew and width of the travel time distribution. For efficiency, the study estimates the ratio of vehicle miles traveled to vehicle hours traveled. The research contextualizes the before-and-after comparisons of efficiency and reliability through similar measures from control corridors in the same regions (I-280 in District 4 and I-5 in District 3) that did not have CRM implemented. The results show an improvement in freeway operation based on the efficiency data; post-CRM implementation, the travel time reliability measures do not show a similar improvement. The report also provides a counterfactual estimate of expected crashes in the post-implementation period, which can be compared with the actual number of crashes in the “after” period to evaluate effectiveness.
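    For reference, the measures named above have standard textbook forms, sketched below in Python. The report's exact formulations and parameter values may differ, and the travel times, crash counts, and overdispersion parameter are made-up inputs.

```python
# Standard-form sketches of the reliability, efficiency, and EB measures.
import numpy as np

def buffer_index(tt):
    # Extra buffer a motorist must add over the average trip time:
    # (95th percentile - mean) / mean.
    return (np.percentile(tt, 95) - tt.mean()) / tt.mean()

def planning_time_index(tt, free_flow):
    # 95th-percentile travel time relative to free-flow travel time.
    return np.percentile(tt, 95) / free_flow

def efficiency(vmt, vht):
    # Vehicle miles traveled per vehicle hour traveled (network speed proxy).
    return vmt / vht

def eb_expected_crashes(predicted, observed, k):
    # Empirical Bayes blend of SPF-predicted and observed crash counts;
    # k is the safety performance function's overdispersion parameter.
    w = 1.0 / (1.0 + k * predicted)
    return w * predicted + (1.0 - w) * observed

tt = np.array([12.0, 13.5, 12.8, 15.2, 22.4, 13.1, 12.6])  # minutes, toy data
print("buffer index:", buffer_index(tt))
print("planning time index:", planning_time_index(tt, free_flow=11.0))
print("EB expected crashes:", eb_expected_crashes(predicted=18.0, observed=24, k=0.2))
```

    The EB blend is what makes the counterfactual comparison meaningful: it shrinks the observed "before" count toward the prediction of a safety performance function, correcting for regression to the mean before the "after" period is compared against it.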

    Human Factor Aspects of Traffic Safety

    Electricity from photovoltaic solar cells: Flat-Plate Solar Array Project final report. Volume VI: Engineering sciences and reliability

    The Flat-Plate Solar Array (FSA) Project, funded by the U.S. Government and managed by the Jet Propulsion Laboratory, was formed in 1975 to develop the module/array technology needed to attain widespread terrestrial use of photovoltaics by 1985. To accomplish this, the FSA Project established and managed an Industry, University, and Federal Government team to perform the needed research and development. This volume of the series of final reports documenting the FSA Project covers the Project's activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large-scale terrestrial photovoltaic systems applications. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet those requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. Over the life of the FSA Project, these activities were carried out under a variety of evolving organizational titles: Design and Test, Large-Scale Procurements, Engineering, Engineering Sciences, Operations, Module Performance and Failure Analysis, and, at the end of the Project, Reliability and Engineering Sciences. This volume summarizes the approach and technical outcome of these activities and provides a complete bibliography (Appendix A) of the published documentation covering the detailed accomplishments and technologies developed.