301 research outputs found

    Credal Networks under Epistemic Irrelevance

    A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally. The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning.
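
    As a minimal illustration of the basic primitive behind such inference, the sketch below computes a lower expectation for a single variable whose local probabilities are only partially specified, as interval bounds on each outcome. This is a hypothetical toy example, not code from the paper: the lower expectation is the minimum of the expectation over the credal set, here found with a small linear program.

        # A minimal sketch (illustrative bounds, not the paper's examples):
        # the lower expectation of f under a credal set described by interval
        # bounds on each outcome's probability, solved as a linear program.
        from scipy.optimize import linprog

        def lower_expectation(f, lower, upper):
            """Minimise E[f] = sum_i p_i * f[i] over all mass functions p
            with lower[i] <= p_i <= upper[i] and sum_i p_i = 1."""
            n = len(f)
            res = linprog(c=f,                            # minimise f . p
                          A_eq=[[1.0] * n], b_eq=[1.0],   # p sums to one
                          bounds=list(zip(lower, upper)))
            return res.fun

        # A binary variable with partially specified local probabilities:
        # the lower expectation of an indicator is its lower probability.
        f = [0.0, 1.0]
        print(lower_expectation(f, lower=[0.3, 0.4], upper=[0.6, 0.7]))  # 0.4

    Broadly, the recursive inference methods mentioned in the abstract combine local computations of this kind along the graph structure rather than solving one global optimisation, which is what makes computation feasible.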

    Bayesian networks with imprecise datasets : application to oscillating water column

    The Bayesian network approach is a probabilistic method with increasing use in the risk assessment of complex systems. It has proven to be a reliable and powerful tool, with the flexibility to include different types of data (from experimental data to expert judgement). The incorporation of system reliability methods allows traditional Bayesian networks to work with random variables with discrete and continuous distributions. Probabilistic uncertainty comes from the complexity of the reality that scientists try to reproduce by setting up a controlled experiment, while imprecision is related to the quality of the specific instrument making the measurements. This imprecision or lack of data can be taken into account by using intervals and probability boxes as random variables in the network. The system reliability problems that arise when dealing with these kinds of uncertainty have traditionally been solved with Monte Carlo simulation. However, that method is computationally expensive, preventing real-time analysis of the system represented by the network. In this work, the line sampling algorithm is used as an effective method to improve the efficiency of the reduction process from enhanced to traditional Bayesian networks. This preserves all the advantages without excessively increasing the computational cost of the analysis. As an application example, a risk assessment of an oscillating water column is carried out using data obtained in the laboratory. The proposed method is run using the multipurpose software OpenCossan.
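
    The sketch below illustrates the line sampling idea referred to above, in standard normal space with an assumed linear limit-state function. It is a hypothetical toy example, not the paper's OpenCossan implementation: the important direction, the limit state, and all numbers are assumptions. Each sampled line contributes a partial failure probability Phi(-c), where c is the distance along the important direction to the limit state.

        # A minimal sketch of line sampling (illustrative, not OpenCossan):
        # estimate a failure probability P(g(U) < 0) for standard normal U.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        def g(u):
            """Assumed linear limit state: failure when g(u) < 0."""
            return 3.0 - u[0] - 0.5 * u[1]

        def line_sampling(g, alpha, dim, n_lines=200, seed=0):
            rng = np.random.default_rng(seed)
            alpha = alpha / np.linalg.norm(alpha)     # unit important direction
            total = 0.0
            for _ in range(n_lines):
                u = rng.standard_normal(dim)
                u_perp = u - (u @ alpha) * alpha      # project off alpha
                # distance along alpha at which this line crosses g = 0
                c = brentq(lambda t: g(u_perp + t * alpha), -10.0, 10.0)
                total += norm.cdf(-c)                 # partial failure prob.
            return total / n_lines

        alpha = np.array([1.0, 0.5])                  # assumed direction
        print(line_sampling(g, alpha, dim=2))         # ~ Phi(-3 / |(1, 0.5)|)

    Because each line is resolved with a one-dimensional root search instead of brute-force sampling, far fewer model evaluations are needed than with plain Monte Carlo, which is the kind of efficiency gain the abstract refers to.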

    Advanced Bayesian networks for reliability and risk analysis in geotechnical engineering

    The stability and deformation problems of soil have been a research topic of great concern for decades. Potential catastrophic events are induced by various complex factors, such as uncertain geotechnical conditions, the external environment, and anthropogenic influence. To help prevent the occurrence of disasters in geotechnical engineering, the main purpose of this study is to enhance the Bayesian network (BN) model for quantifying uncertainty and predicting risk levels in geotechnical problems. The advanced BN model is effective for analyzing geotechnical problems in poor data environments. The advanced BN approach proposed in this study is applied to the soil-slope stability problem using site-specific data. When probabilistic models for soil properties are adopted, an enhanced BN approach is used to cope with continuous input parameters. Credal networks (CNs), developed on the basis of BNs, are used specifically for incomplete input information. In addition, the probabilities of slope failure are investigated for different pieces of evidence. A discretization approach for the enhanced BNs is applied when evidence enters continuous nodes. Two examples are implemented to demonstrate the feasibility and predictive effectiveness of the BN model. The results indicate that the enhanced BNs show a low risk, with high precision, for the slope studied. Unlike the BNs, the results of CNs are presented as bounds. A comparison of three different sets of input information reveals that the more imprecise the input, the more uncertain the output. Both approaches can provide useful disaster-related information for decision-makers. According to the information updating in the models, the position of the water table, which is controlled by the drainage state, plays a significant role in slope failure. The study also discusses how the different types of BNs contribute to assessing the reliability and risk of real slopes, and how new information can be introduced into the analysis. The proposed models illustrate that the advanced BN model is a good diagnostic tool for estimating the risk level of slope failure. In a follow-up study, the BN model is developed further based on its potential for information updating and importance measures. To reduce the influence of uncertainty, the soil parameters are updated accurately during the excavation process with the proposed BN model, and the contribution of epistemic uncertainty in geotechnical parameters to the potential disaster is characterized. The results of this study indicate that the BN model is an effective and flexible tool for risk analysis and decision-making support in geotechnical engineering.
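
    The sketch below illustrates, with invented numbers rather than the study's data, two ingredients described above: discretising a continuous soil parameter so it can enter a BN node, and a credal extension in which an imprecise prior turns the failure probability into an interval.

        # A minimal sketch (assumed distributions and probabilities, not the
        # study's models) of a discretised BN node and its credal counterpart.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import linprog

        # (1) Discretise an assumed N(30, 3) friction angle (degrees) into
        # three states so it can enter an enhanced BN as a discrete node.
        edges = np.array([-np.inf, 27.0, 33.0, np.inf])
        prior = np.diff(norm.cdf(edges, loc=30.0, scale=3.0))  # P(low, med, high)

        # Assumed conditional failure probabilities given each state.
        p_fail = np.array([0.30, 0.05, 0.01])
        print("BN point estimate:", prior @ p_fail)

        # (2) Credal version: the prior is only known up to intervals, so the
        # failure probability becomes a bound pair, found by two linear programs.
        bounds = [(0.05, 0.30), (0.45, 0.80), (0.05, 0.30)]
        for sign, label in [(+1, "lower"), (-1, "upper")]:
            res = linprog(sign * p_fail, A_eq=[[1, 1, 1]], b_eq=[1], bounds=bounds)
            print(label, "failure probability:", sign * res.fun)

    The interval output of the credal version mirrors the abstract's observation that more imprecision in the input produces wider, more uncertain bounds in the output.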

    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement of industrial processes is increasingly a key element of the competitiveness of industrial systems. The management of experience feedback in this framework is designed to build, analyze, and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve processes and products. During problem-solving processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework that allows a flexible exploitation of expert knowledge during problem-solving processes and the reuse of the experience so collected. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) of conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed approach to the formalization of experience feedback processes in the transport industry sector.
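
    The sketch below shows the fusion step named above: the Transferable Belief Model's unnormalised conjunctive combination rule, applied to two invented expert opinions (the frame and the mass values are assumptions, not the paper's data). Unlike Dempster's rule, the TBM keeps the mass assigned to the empty set as an explicit measure of conflict between the sources.

        # A minimal sketch of the TBM conjunctive combination rule, with an
        # assumed frame {machine, operator} and illustrative expert masses.
        from itertools import product

        def tbm_conjunctive(m1, m2):
            """Unnormalised rule: m(C) = sum over A, B with A & B == C of
            m1(A) * m2(B); mass may land on the empty set (conflict)."""
            out = {}
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                c = a & b                       # intersection of focal sets
                out[c] = out.get(c, 0.0) + wa * wb
            return out

        # Two experts' opinions on the root cause of a problem.
        m1 = {frozenset({"machine"}): 0.7,
              frozenset({"machine", "operator"}): 0.3}
        m2 = {frozenset({"operator"}): 0.4,
              frozenset({"machine", "operator"}): 0.6}
        for focal, mass in tbm_conjunctive(m1, m2).items():
            print(set(focal) or "conflict (empty set)", round(mass, 2))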

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events, but malicious content posted during such events can cause damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts of April 15, 2013, when a large amount of fake content and many malicious profiles originated on the network. The aim of this work is an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate a piece of fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter; we identified over six thousand such profiles and observed that their creation surged considerably right after the blasts occurred. We also identified a closed community structure and star formation in the interaction network among these suspended profiles.
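
    As a rough illustration of the regression idea described above, the sketch below fits a simple least-squares model that predicts a content item's future growth from the aggregate impact of the users currently spreading it. The features and all numbers are synthetic assumptions; the study's actual feature set and model are not reproduced here.

        # A minimal sketch (synthetic data, not the study's dataset): predict
        # the next-snapshot growth of a tweet from the aggregate impact (e.g.
        # total follower count) of the users spreading it so far.
        import numpy as np

        impact = np.array([1e3, 5e3, 2e4, 8e4, 3e5])            # spreaders' reach
        growth = np.array([12.0, 45.0, 160.0, 700.0, 2400.0])   # next retweets

        # Fit growth ~ impact on a log-log scale by least squares.
        a, b = np.polyfit(np.log10(impact), np.log10(growth), deg=1)
        predict = lambda x: 10 ** (a * np.log10(x) + b)
        print(f"slope = {a:.2f}; predicted growth at impact 1e6: {predict(1e6):.0f}")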