8,204 research outputs found

    Which predictive uncertainty to resolve? Value of information sensitivity analysis for environmental decision models

    Uncertainties in environmental decisions are large, but resolving them is costly. We provide a framework for value of information (VoI) analysis to identify the key predictive uncertainties in a decision model. The approach addresses characteristics that complicate this analysis in environmental management: dependencies in the probability distributions of predictions, trade-offs between multiple objectives, and divergent stakeholder perspectives. For a coral reef fisheries case, we predict ecosystem and fisheries trajectories under different management alternatives with an agent-based model. We evaluate the uncertain predictions with preference models based on utility theory to find optimal alternatives for stakeholders. Using the expected value of partially perfect information (EVPPI), we measure how valuable it is to resolve the uncertainty in each decision attribute. The VoI depends on the stakeholder preferences, but not directly on the width of an attribute's probability distribution. Our approach helps reduce costs in structured decision-making processes by prioritizing data collection efforts.
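    As a minimal sketch of the EVPPI computation described above, the following Python snippet estimates, by nested Monte Carlo, the value of resolving one uncertain decision attribute before choosing a management alternative. The two alternatives, the two attributes, and the utility weights are hypothetical stand-ins, not the case study's elicited preference models.

```python
import numpy as np

rng = np.random.default_rng(42)
N_OUTER, N_INNER = 2000, 500  # Monte Carlo sample sizes

def utility(alternative, stock, coral):
    # Illustrative multi-attribute utility; a real study would elicit
    # these weights from stakeholders.
    if alternative == 0:                      # e.g. status quo
        return 0.6 * stock + 0.4 * coral
    return 0.3 * stock + 0.7 * coral          # e.g. fishing closure

def sample_stock(n):   # uncertain attribute 1 (hypothetical distribution)
    return rng.normal(0.5, 0.2, n)

def sample_coral(n):   # uncertain attribute 2 (hypothetical distribution)
    return rng.normal(0.4, 0.3, n)

# Expected value under current uncertainty: choose before learning anything.
stock, coral = sample_stock(N_OUTER), sample_coral(N_OUTER)
ev_current = max(utility(a, stock, coral).mean() for a in (0, 1))

# Partial perfect information on 'stock': learn it first, then choose.
ev_partial = np.mean([
    max(utility(a, s, sample_coral(N_INNER)).mean() for a in (0, 1))
    for s in sample_stock(N_OUTER)
])

print(f"EVPPI(stock) = {ev_partial - ev_current:.4f}")
```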

    Holistic Temporal Situation Interpretation for Traffic Participant Prediction

    For a profound understanding of traffic situations, including a prediction of traffic participants' future motion, behaviors, and routes, it is crucial to incorporate all available environmental observations. The presence of sensor noise and dependency uncertainties, the variety of available sensor data, the complexity of large traffic scenes, and the large number of different estimation tasks with diverging requirements call for a general method that gives a robust foundation for the development of estimation applications. In this work, a general description language, called Object-Oriented Factor Graph Modeling Language (OOFGML), is proposed that unifies the formulation of estimation tasks, from the application-oriented problem description via the choice of variable and probability distribution representation through to the definition of the inference method in the implementation. The different language properties are discussed theoretically using abstract examples. The derivation of explicit application examples is shown for the automated driving domain. A domain-specific ontology is defined which forms the basis for four exemplary applications covering the broad spectrum of estimation tasks in this domain: basic temporal filtering, ego vehicle localization using advanced interpretations of perceived objects, road layout perception utilizing inter-object dependencies, and finally highly integrated route, behavior, and motion estimation to predict traffic participants' future actions. All applications are evaluated as proof of concept and provide an example of how their class of estimation tasks can be represented using the proposed language. The language serves as a common basis and opens a new field for further research towards holistic solutions for automated driving.
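    The OOFGML notation itself is not reproduced in this abstract, but the first of the four applications, basic temporal filtering, reduces to inference on a linear-Gaussian chain factor graph, i.e. a Kalman filter. The Python sketch below (all matrices and noise levels are illustrative assumptions) shows how prediction and update can be read as messages passed through the transition and measurement factors.

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # constant-velocity transition factor
H = np.array([[1.0, 0.0]])        # position-only measurement factor
Q = np.diag([0.01, 0.01])         # process noise covariance (assumed)
R = np.array([[0.25]])            # measurement noise covariance (assumed)

x, P = np.zeros(2), np.eye(2)     # prior belief over [position, velocity]
for z in [0.9, 2.1, 2.9, 4.2]:    # noisy position observations
    # Prediction: pass the belief through the transition factor.
    x, P = F @ x, F @ P @ F.T + Q
    # Update: incorporate the message from the measurement factor.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("posterior [position, velocity]:", x)
```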

    A General Cause Based Methodology for Analysis of Dependent Failures in System Risk and Reliability Assessments

    Traditional parametric Common Cause Failure (CCF) models quantify the soft dependencies between component failures through empirical ratio relationships. Furthermore, CCF modeling has been essentially restricted to identical components in redundant formations. While this has been advantageous in allowing the prediction of system reliability with little or no data, it has been prohibitive in other applications, such as modeling the characteristics of a system design or including the characteristics of failure when assessing the risk significance of a failure or degraded-performance event (known as an event assessment). This dissertation extends the traditional definition of CCF to model soft dependencies between like and non-like components. It does this through the explicit modeling of soft dependencies between systems (coupling factors), such as sharing a maintenance team or sharing a manufacturer. By modeling the soft dependencies explicitly, these relationships can be individually quantified based on the specific design of the system, which allows for more accurate event assessment given knowledge of the failure cause. Since the most data-informed model in use is the Alpha Factor Model (AFM), it has been used as the baseline for the proposed solutions. This dissertation analyzes the US Nuclear Regulatory Commission's Common Cause Failure Database event data to determine the suitability of the data and failure taxonomy for use in the proposed cause-based models. Recognizing that CCF events are characterized by the full or partial presence of a "root cause" and a "coupling factor", a refined failure taxonomy is proposed which provides a direct link between the failure cause category and the coupling factors. This dissertation proposes two CCF models: (a) the Partial Alpha Factor Model (PAFM), which accounts for the relevant coupling factors based on system design and provides event assessment with knowledge of the failure cause, and (b) the General Dependency Model (GDM), which uses a Bayesian network to model the soft dependencies between components. This is done through the introduction of three parameters for each failure cause, relating to component fragility, failure cause rate, and failure cause propagation probability.
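    The abstract names the GDM's three per-cause parameters but not their combination rule, so the Python sketch below is only an assumed illustration: for two components coupled through shared causes, it approximates the probability that a cause occurs, fails one component, and propagates to fail the other, under a rare-event approximation. All parameter values are hypothetical.

```python
# Hypothetical per-cause parameters in the spirit of the GDM:
# (failure cause rate, component fragility, propagation probability).
causes = {
    "shared_maintenance_error":  (1e-3, 0.5, 0.8),
    "shared_manufacturer_defect": (5e-4, 0.7, 0.9),
}

def p_both_fail(causes):
    """Approximate P(two coupled components both fail), assuming cause
    occurrences are rare and independent across causes."""
    total = 0.0
    for rate, fragility, propagation in causes.values():
        # Cause occurs, fails component A, propagates to B, and fails B.
        total += rate * fragility * propagation * fragility
    return total

print(f"P(both components fail) ~= {p_both_fail(causes):.2e}")
```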

    Crowdsourcing for Engineering Design: Objective Evaluations and Subjective Preferences

    Crowdsourcing enables designers to reach out to large numbers of people who may not have been previously considered when designing a new product, and to listen to their input by aggregating their preferences and evaluations over potential designs, aiming to improve "good" and catch "bad" design decisions during the early-stage design process. This approach puts human designers (be they industrial designers, engineers, marketers, or executives) at the forefront, with computational crowdsourcing systems on the back end to aggregate subjective preferences (e.g., which next-generation Brand A design best competes stylistically with next-generation Brand B designs?) or objective evaluations (e.g., which military vehicle design has the best situational awareness?). These crowdsourcing aggregation systems are built using probabilistic approaches that account for the irrationality of human behavior (i.e., violations of reflexivity, symmetry, and transitivity), approximated by modern machine learning algorithms and optimization techniques as necessitated by the scale of the data (millions of data points, hundreds of thousands of dimensions). This dissertation presents research findings suggesting that current off-the-shelf crowdsourcing aggregation algorithms are unsuitable for real engineering design tasks due to the sparsity of expertise in the crowd, along with methods that mitigate this limitation by incorporating appropriate information for expertise prediction. Next, we introduce and interpret a number of new probabilistic models for crowdsourced design that provide large-scale preference prediction and full design space generation, building on statistical and machine learning techniques such as sampling methods, variational inference, and deep representation learning. Finally, we show how these models and algorithms can advance crowdsourcing systems by abstracting the underlying appropriate yet unwieldy mathematics away behind easier-to-use visual interfaces practical for engineering design companies and governmental agencies engaged in complex engineering systems design.
    PhD, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133438/1/aburnap_1.pd
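    As a toy version of the preference-aggregation step, the following Python sketch fits a Bradley-Terry model to simulated pairwise design comparisons by gradient ascent; the dissertation's models additionally handle evaluator expertise, variational inference, and design generation, none of which is shown here, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

n_designs = 4
true_scores = np.array([2.0, 1.0, 0.5, -1.0])  # hypothetical latent quality

# Simulate crowd comparisons: a pair (i, j) means "i preferred over j".
pairs = []
for _ in range(500):
    i, j = rng.choice(n_designs, size=2, replace=False)
    p_i_wins = 1.0 / (1.0 + np.exp(-(true_scores[i] - true_scores[j])))
    pairs.append((i, j) if rng.random() < p_i_wins else (j, i))

# Fit latent scores by gradient ascent on the Bradley-Terry log-likelihood.
s = np.zeros(n_designs)
for _ in range(200):
    grad = np.zeros(n_designs)
    for i, j in pairs:
        p = 1.0 / (1.0 + np.exp(-(s[i] - s[j])))  # P(i beats j) under model
        grad[i] += 1.0 - p
        grad[j] -= 1.0 - p
    s += 0.01 * grad
    s -= s.mean()  # pin down the shift indeterminacy of the scores

print("recovered ranking (best first):", np.argsort(-s))
```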