
    Uncertainty Assessment in High-Risk Environments Using Probability, Evidence Theory and Expert Judgment Elicitation

    The level of uncertainty in advanced system design is assessed by comparing the results of expert judgment elicitation to probability and evidence theory. This research shows how one type of monotone measure, namely the Dempster-Shafer Theory of Evidence, can expand the framework of uncertainty to provide decision makers with a more robust solution space. The issues embedded in this research focus on how the relevant predictive uncertainty produced by similar actions is measured. This methodology uses the established approaches of traditional probability theory and Dempster-Shafer evidence theory to combine two classes of uncertainty, aleatory and epistemic. Probability theory provides the mathematical structure traditionally used in the representation of aleatory uncertainty. The uncertainty in analysis outcomes is represented by probability distributions and typically summarized as Complementary Cumulative Distribution Functions (CCDFs). The main components of this research are the probability P(X) in probability theory compared with the basic probability assignment m(X) in evidence theory. Using this comparison, an epistemic model is developed to obtain the upper limits, the Complementary Cumulative Plausibility Function (CCPF), and the lower limits, the Complementary Cumulative Belief Function (CCBF), compared to the traditional probability function. A conceptual design for the Thermal Protection System (TPS) of future Crew Exploration Vehicles (CEV) is used as an initial test case. A questionnaire is tailored to elicit judgment from experts in high-risk environments. Based on descriptions and characteristics, the answers to the questionnaire produce information that serves as the qualitative semantics used for the evidence theory functions. The computational mechanism provides a heuristic approach for the compilation and presentation of the results. A follow-up evaluation serves as validation of the findings and provides useful information in terms of consistency and adaptability to other domains. The results of this methodology provide a useful and practical approach in conceptual design to aid the decision maker in assessing the level of uncertainty of the experts. The methodology presented is well suited for decision makers working with similar conceptual design instruments.
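
    To make the CCBF/CCPF bounds concrete, here is a minimal sketch in Python, assuming an illustrative set of interval focal elements and masses (not values from the study). Belief sums the mass of focal elements lying entirely above a threshold, plausibility sums the mass of those that merely intersect it, and the two bracket any probability-based CCDF consistent with the evidence.

```python
# A minimal sketch (not the paper's actual model): focal elements are
# intervals on a hypothetical normalized TPS margin, with masses m(A)
# elicited from experts. Values below are illustrative only.

focal_elements = [  # (lower, upper, mass)
    (0.0, 0.4, 0.2),
    (0.2, 0.7, 0.5),
    (0.5, 1.0, 0.3),
]

def ccbf(x):
    """Complementary Cumulative Belief Function: Bel(X > x).
    Sums mass of focal intervals lying entirely above x."""
    return sum(m for lo, hi, m in focal_elements if lo > x)

def ccpf(x):
    """Complementary Cumulative Plausibility Function: Pl(X > x).
    Sums mass of focal intervals that intersect (x, infinity)."""
    return sum(m for lo, hi, m in focal_elements if hi > x)

for x in (0.1, 0.3, 0.6):
    # Any CCDF consistent with the evidence satisfies
    # CCBF(x) <= P(X > x) <= CCPF(x).
    print(f"x={x}: CCBF={ccbf(x):.2f} <= P(X>x) <= CCPF={ccpf(x):.2f}")
```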

    EMPIRICAL COMPARISON OF METHODS FOR THE HIERARCHICAL PROPAGATION OF HYBRID UNCERTAINTY IN RISK ASSESSMENT, IN PRESENCE OF DEPENDENCES

    Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, etc.) that are epistemically uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) studying the effects of (in)dependence between the epistemically uncertain parameters of the aleatory probability distributions (when a probabilistic/possibilistic representation of epistemic uncertainty is adopted) and ii) studying the effect of the probabilistic/possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is defined). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
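
    As a rough illustration of the two-dimensional (double) Monte Carlo scheme mentioned above, the following sketch uses a toy model that is not taken from the paper: an exponentially distributed failure time whose rate is epistemically uncertain. The outer loop samples the epistemic parameter, the inner loop propagates the aleatory variability, and the spread across outer-loop estimates reflects the epistemic uncertainty.

```python
# A rough sketch of two-dimensional (double) Monte Carlo simulation on a
# toy model (not the paper's case study): exponential failure times whose
# rate "lam" is epistemically uncertain, here uniform over [1e-3, 5e-3]/h.
import random

N_EPISTEMIC = 100    # outer loop: samples of the epistemically uncertain rate
N_ALEATORY = 1000    # inner loop: samples of the random failure time
T_MISSION = 500.0    # illustrative mission duration in hours

estimates = []
for _ in range(N_EPISTEMIC):
    lam = random.uniform(1e-3, 5e-3)            # epistemic sample
    failures = sum(
        random.expovariate(lam) < T_MISSION     # aleatory sample
        for _ in range(N_ALEATORY)
    )
    estimates.append(failures / N_ALEATORY)     # P(failure | lam)

estimates.sort()
# The spread of the outer-loop estimates reflects the epistemic uncertainty:
# report, e.g., the 5th and 95th percentiles of the failure probability.
low = estimates[int(0.05 * N_EPISTEMIC)]
high = estimates[int(0.95 * N_EPISTEMIC)]
print(f"P(failure within {T_MISSION} h): 5th-95th percentile [{low:.3f}, {high:.3f}]")
```

    For this toy model the inner loop has the closed form 1 - exp(-lam * T_MISSION); it is sampled explicitly here only to make the nested (aleatory-within-epistemic) structure visible.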

    Context classification for service robots

    This dissertation presents a solution for environment sensing using sensor fusion techniques, together with a context/environment classification of the surroundings of a service robot, so that the robot can change its behavior according to the different reasoning outputs. For example, if a robot knows it is outdoors, in a field environment, the ground may be sandy, in which case it should slow down. Conversely, in indoor environments that situation (sandy ground) is statistically unlikely. This simple assumption illustrates the importance of context awareness in automated guided vehicles.
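
    A minimal sketch of the behavior-switching idea, with hypothetical context labels, scores, and speed limits (none of which come from the dissertation): the context class produced by upstream sensor fusion selects a conservative speed limit where sandy ground is plausible.

```python
# A minimal sketch (not the dissertation's implementation): a classified
# context drives a behavior parameter such as the robot's maximum speed.

# Hypothetical context posteriors produced by upstream sensor fusion.
context_scores = {"indoor": 0.15, "outdoor_paved": 0.25, "outdoor_field": 0.60}

# Behavior policy: drive slower on terrain where sandy ground is plausible.
MAX_SPEED = {"indoor": 1.2, "outdoor_paved": 1.0, "outdoor_field": 0.4}  # m/s

context = max(context_scores, key=context_scores.get)  # most likely context
speed_limit = MAX_SPEED[context]
print(f"context={context}, speed limit={speed_limit} m/s")
```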

    Context Aware Computing for The Internet of Things: A Survey

    As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. The collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
    Comment: IEEE Communications Surveys & Tutorials Journal, 201
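
    The context life cycle the survey analyses (acquisition, modelling, reasoning, and dissemination) can be pictured as a four-stage pipeline. The sketch below uses hypothetical function and field names; real context-aware middleware is considerably richer.

```python
# A minimal sketch of the four context life-cycle phases, with
# hypothetical names and data; illustrative only.

def acquire():
    """Acquisition: collect a raw reading from a (simulated) sensor."""
    return {"sensor": "thermo-1", "value": 31.5}

def model(reading):
    """Modelling: attach meaning and units to the raw value."""
    return {"entity": "room-12", "temperature_c": reading["value"]}

def reason(ctx):
    """Reasoning: derive higher-level context from modelled data."""
    ctx["state"] = "hot" if ctx["temperature_c"] > 28 else "comfortable"
    return ctx

def disseminate(ctx):
    """Dissemination: deliver the context to interested consumers."""
    print(f"{ctx['entity']} is {ctx['state']} ({ctx['temperature_c']} degC)")

disseminate(reason(model(acquire())))
```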

    Fusing Automatically Extracted Annotations for the Semantic Web

    This research focuses on the problem of semantic data fusion. Although various solutions have been developed in the research communities focusing on databases and formal logic, the choice of an appropriate algorithm is non-trivial because the performance of each algorithm and its optimal configuration parameters depend on the type of data to which the algorithm is applied. In order to be reusable, the fusion system must be able to select appropriate techniques and use them in combination. Moreover, because of the varying reliability of data sources and of the algorithms performing fusion subtasks, uncertainty is an inherent feature of semantically annotated data and has to be taken into account by the fusion system. Finally, the issue of schema heterogeneity can have a negative impact on fusion performance. To address these issues, we propose KnoFuss: an architecture for Semantic Web data integration based on the principles of problem-solving methods. Algorithms dealing with different fusion subtasks are represented as components of a modular architecture, and their capabilities are described formally. This allows the architecture to select appropriate methods and configure them depending on the processed data. In order to handle uncertainty, we propose a novel algorithm based on Dempster-Shafer belief propagation. KnoFuss employs this algorithm to reason about uncertain data and method results in order to refine the fused knowledge base. Tests show that these solutions lead to improved fusion performance. Finally, we address the problem of data fusion in the presence of schema heterogeneity. We extend the KnoFuss framework to exploit the results of automatic schema alignment tools and propose our own schema matching algorithm aimed at facilitating data fusion in the Linked Data environment. We conducted experiments with this approach and obtained a substantial improvement in performance in comparison with public data repositories.
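
    The core operation behind any Dempster-Shafer treatment of uncertain annotations is Dempster's rule of combination; the paper's belief-propagation algorithm is more elaborate, but the following sketch, with hypothetical evidence sources, shows how two mass functions over the coreference question ("same entity" vs. "different entities") are fused and how conflicting mass is renormalized away.

```python
# A minimal sketch of Dempster's rule of combination (illustrative; not
# KnoFuss's actual algorithm). Frame of discernment: do two annotations
# denote the same entity? Hypotheses: "same", "diff", plus ignorance.

def combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible sets
    # Dempster's rule: normalize by the non-conflicting mass.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

SAME, DIFF = frozenset({"same"}), frozenset({"diff"})
BOTH = SAME | DIFF  # total ignorance

# Hypothetical evidence: a string matcher and a property-value comparator.
string_matcher = {SAME: 0.6, BOTH: 0.4}
value_comparator = {SAME: 0.3, DIFF: 0.2, BOTH: 0.5}

fused = combine(string_matcher, value_comparator)
for h, m in fused.items():
    print(sorted(h), round(m, 3))  # e.g., ['same'] 0.682
```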