16 research outputs found
An approximation of surprise index as a measure of confidence
Probabilistic graphical models, such as Bayesian networks, are intuitive and theoretically sound tools for modeling uncertainty. A major problem with applying Bayesian networks in practice is that it is hard to judge whether a model fits the case that it is supposed to solve. One way of expressing a possible dissonance between a model and a case is the surprise index, proposed by Habbema, which expresses the degree of surprise at the evidence given the model. While this measure reflects the intuition that the probability of a case should be judged in the context of a model, it is computationally intractable. In this paper, we propose an efficient way of approximating the surprise index.
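The intractability comes from summing over every possible evidence configuration. A minimal brute-force sketch of the exact index (not the paper's approximation) on a hypothetical two-symptom network makes this explicit; all probabilities below are invented for illustration:

```python
import itertools

# Hypothetical network: Flu -> Fever, Flu -> Cough (made-up CPTs).
p_flu = 0.1
p_fever_given = {True: 0.8, False: 0.1}   # P(Fever=T | Flu)
p_cough_given = {True: 0.7, False: 0.2}   # P(Cough=T | Flu)

def p_evidence(fever, cough):
    """Marginal probability of an evidence configuration, summing out Flu."""
    total = 0.0
    for flu in (True, False):
        p = p_flu if flu else 1.0 - p_flu
        p *= p_fever_given[flu] if fever else 1.0 - p_fever_given[flu]
        p *= p_cough_given[flu] if cough else 1.0 - p_cough_given[flu]
        total += p
    return total

def surprise_index(fever, cough):
    """SI(e): total probability of all evidence configurations that are
    no more likely than the observed one -- exponential in the number of
    evidence variables, hence the need for an approximation."""
    p_e = p_evidence(fever, cough)
    return sum(p_evidence(f, c)
               for f, c in itertools.product((True, False), repeat=2)
               if p_evidence(f, c) <= p_e)
```

A surprise index near 1 means the evidence is unremarkable under the model; a small value flags a case the model explains poorly.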
Participatory modelling for stakeholder involvement in the development of flood risk management intervention options
Advancing stakeholder participation beyond consultation offers a range of benefits for local flood risk management, particularly as responsibilities are increasingly devolved to local levels. This paper details the design and implementation of a participatory approach to identify intervention options for managing local flood risk. Within this approach, Bayesian networks were used to generate a conceptual model of the local flood risk system, with a particular focus on how different interventions might achieve each of nine participant objectives. The model was co-constructed by flood risk experts and local stakeholders. The study employs a novel evaluative framework, examining both the process and its outcomes (short-term substantive and longer-term social benefits). It concludes that participatory modelling techniques can facilitate the identification of intervention options by a wide range of stakeholders and help prioritise a subset for further investigation. They can help support a broader move towards active stakeholder participation in local flood risk management.
Prognostic Modelling with Dynamic Bayesian Networks
In this paper, we review the application of dynamic Bayesian networks to prognostic modelling. An example is provided for illustration. With this example, we show how the equipment's reliability decays over time in the situation where repair is not possible, and then how a simple change to the model allows us to represent different maintenance policies for repairable equipment.
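The core dynamic can be sketched with a two-state (working/failed) model; the parameters below are hypothetical, and the full paper uses richer dynamic Bayesian networks:

```python
# Hypothetical two-state process: per-step failure probability p_fail,
# and per-step repair probability p_repair for failed equipment.
def reliability_trajectory(p_fail, p_repair, steps):
    """P(Working at t) under a simple two-state dynamic model.

    p_repair = 0.0 reproduces the non-repairable case, where reliability
    decays geometrically; p_repair > 0 is the 'simple change' that models
    a maintenance policy and yields a non-zero steady-state reliability.
    """
    p_work = 1.0
    traj = [p_work]
    for _ in range(steps):
        p_work = p_work * (1.0 - p_fail) + (1.0 - p_work) * p_repair
        traj.append(p_work)
    return traj
```

With repair enabled, the trajectory converges to p_repair / (p_fail + p_repair) instead of decaying to zero.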
Developing a Decision Analytic Framework Based on Influence Diagrams in Relation to Mass Evacuations
Presented at the International Conference on Emergency Preparedness "The Challenges of Mass Evacuation", 21st - 23rd September 2010, Aston Business School.

In this paper, we examine the role which decision analysis can play in a situation
requiring a mass evacuation. In particular, we focus on the influence diagram as a
tool for reasoning and supporting decision-makers under conditions of risk and
uncertainty. This powerful modelling tool can help to bridge multiple specialist
domains and provide a common framework for supporting decision-makers in
different agencies.
An influence diagram is also referred to as a decision network and can be
considered as an extension of a Bayesian network. Like a Bayesian network, it
contains chance nodes which represent random variables and deterministic nodes
which represent deterministic functions of input variables. However, in addition,
an influence diagram contains decision nodes which represent decisions under
local control and utility nodes which can represent a variety of costs and benefits.
These might be measured in several dimensions including casualties and monetary
units. Advantages of Bayesian networks and influence diagrams over more
traditional risk and safety modelling approaches such as event trees and fault trees
are discussed - in particular, the ease with which they represent dependencies
between many factors and the different types of reasoning supported at the same
time, e.g. predictive reasoning and diagnostic reasoning.
An illustrative, generic influence diagram is presented of a situation
corresponding to a CBRNE attack. We then consider how this generic model can
be applied to a more specific scenario such as an attack at a sporting event. A
variety of potential uses of the model are identified and discussed, along with
problems which are likely to be encountered in model development. We argue that
this modelling approach provides a useful framework to support cost-effectiveness
studies and high-level trade-offs between alternative possible security measures
and other resources impacting on response and recovery operations.
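The kind of decision support an influence diagram provides reduces, at evaluation time, to choosing the decision with maximum expected utility. A toy sketch with a single decision and a single chance node illustrates this (all probabilities and utilities below are hypothetical, not from the paper):

```python
# Hypothetical toy influence diagram: one decision (evacuate or stay),
# one chance node (whether the incident turns out severe), and a single
# utility scale combining casualty and monetary consequences.
p_severe = 0.3
utility = {
    ("evacuate", True):  -20,   # evacuation cost, few casualties
    ("evacuate", False): -15,   # evacuation cost incurred unnecessarily
    ("stay",     True):  -100,  # many casualties
    ("stay",     False):    0,
}

def expected_utility(decision):
    """Average the utility of a decision over the chance node's outcomes."""
    return (p_severe * utility[(decision, True)]
            + (1 - p_severe) * utility[(decision, False)])

best = max(("evacuate", "stay"), key=expected_utility)
```

Real models add many chance, deterministic, and utility nodes, but the evaluation principle is the same maximisation over expected utilities.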
Application of sensor data fusion and data mining to forecasting methane concentration in coal mines
In recent years we have experienced an unprecedented increase in the use of sensors in many industrial applications. Modern sensors are capable not only of generating large volumes of data but also of transmitting that data over a network and storing it for further analysis. This makes it possible to create systems capable of real-time data fusion in order to predict events of interest. The goal of this work is to predict methane concentration levels in coal mines using data fusion and data mining techniques. The paper describes an application of a generic method that can be applied to an arbitrary set of multivariate time series in order to perform classification or regression tasks. The solution presented here was developed within the framework of the IJCRS'15 data mining competition and resulted in the winning model, outperforming all other solutions.
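A generic method of this kind typically starts by turning raw multivariate sensor streams into fixed-length feature vectors for a downstream classifier or regressor. A minimal sketch, assuming simple per-window summary statistics (the competition solution's actual feature set is not reproduced here):

```python
import numpy as np

def window_features(series, window):
    """Convert a multivariate time series (rows = time steps, columns =
    sensors) into one feature vector per non-overlapping window:
    per-sensor mean, standard deviation, and last observed value."""
    feats = []
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), w[-1]]))
    return np.array(feats)
```

Each window's feature vector can then be paired with a label (e.g. whether a methane warning level was exceeded) to train any standard model.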
Knowledge engineering for Bayesian networks: How common are noisy-MAX distributions in practice?
One problem faced in knowledge engineering for Bayesian networks (BNs) is the exponential growth of the number of parameters in their conditional probability tables (CPTs). The most common practical solution is the application of the so-called canonical gates and, among them, the noisy-OR (or their generalization, the noisy-MAX) gates, which take advantage of the independence of causal interactions and provide a logarithmic reduction of the number of parameters required to specify a CPT. In this paper, we propose an algorithm that fits a noisy-MAX distribution to an existing CPT, and we apply this algorithm to search for noisy-MAX gates in three existing practical BN models: ALARM, HAILFINDER, and HEPAR II. We show that the noisy-MAX gate provides a surprisingly good fit for as many as 50% of CPTs in two of these networks. We observed this in both distributions elicited from experts and those learned from data. The importance of this finding is that it provides an empirical justification for the use of the noisy-MAX gate as a powerful knowledge engineering tool. © 2013 IEEE
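For binary variables the noisy-MAX reduces to the noisy-OR, which makes the parameter saving easy to see: n causes need n + 1 numbers instead of 2^n. A sketch of building a full CPT from noisy-OR parameters and recovering them back (an exact round trip, not the paper's distance-minimising fit to arbitrary CPTs):

```python
import itertools

def noisy_or_cpt(leak, probs):
    """Full CPT P(Y=1 | parent configuration) from noisy-OR parameters:
    a leak probability plus one link probability per cause."""
    cpt = {}
    for config in itertools.product((0, 1), repeat=len(probs)):
        q = 1.0 - leak
        for active, p in zip(config, probs):
            if active:
                q *= 1.0 - p    # independent causal interactions multiply
        cpt[config] = 1.0 - q
    return cpt

def fit_noisy_or(cpt, n):
    """Recover noisy-OR parameters from the all-zero and single-cause rows.
    Exact when the CPT truly is noisy-OR; for arbitrary CPTs the paper
    instead fits parameters by minimising a distance over the whole table."""
    leak = cpt[(0,) * n]
    probs = []
    for i in range(n):
        row = tuple(1 if j == i else 0 for j in range(n))
        probs.append(1.0 - (1.0 - cpt[row]) / (1.0 - leak))
    return leak, probs
```

The CPT has 2^n rows but is generated from only n + 1 parameters, which is the knowledge-engineering payoff the paper quantifies empirically.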
Coordination in rapidly evolving disaster response systems: The role of information
Assessing the changing dynamic between the demand that is placed on a community by cumulative exposure to hazards and the capacity of the community to mitigate or respond to that risk represents a central problem in estimating the community's resilience to disaster. The authors present an initial effort to simulate the dynamic between increasing demand and decreasing capacity in an actual disaster response system to determine the fragility of the system, or the point at which the system fails. The results show that access to core information enhances efficiency of response actions and increases coordination throughout the network of responding organizations.
Learning parameters in canonical models using weighted least squares
We propose a novel approach to learning parameters of canonical models from small data sets using a concept employed in regression analysis: the weighted least squares method. We assess the performance of our method experimentally and show that it typically outperforms simple methods used in the literature in terms of accuracy of the learned conditional probability distributions.
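The regression building block the abstract refers to has a standard closed form. A minimal sketch of generic weighted least squares (the mapping from canonical-model parameters to a regression problem, and the paper's choice of weights, are not reproduced here):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve argmin_beta sum_i w_i * (y_i - X[i] @ beta)^2 via the normal
    equations: beta = (X' W X)^-1 X' W y, with W = diag(w). Larger weights
    make the corresponding observations count more in the fit."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

In the paper's setting, weights can reflect how many observations support each CPT row, so sparsely observed rows influence the learned parameters less.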
Modeling information flow and fragility in rapidly evolving disaster response systems
Assessing the changing dynamic between the demand that is placed upon a community by cumulative exposure to hazards and the capacity of that community to mitigate or respond to risk represents a central problem in estimating a community's resilience to disaster. We present an initial effort to simulate the dynamic between increasing demand and decreasing capacity in an actual disaster response system to determine the fragility of the system, or the point at which the system fails. We construct a theoretical model of this process, and simulate the changing relationships, including in our model measures of magnitude of disaster, number of jurisdictions, and a simple type of cooperation to observe how these factors influence the efficiency of disaster operations. We focus not on the amount of information that is available to practicing managers, but on strategies for access to core information that enhances efficiency of information flow throughout the network of responding organizations.
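The demand/capacity dynamic can be sketched as a simple deterministic iteration; every parameter below is hypothetical and far simpler than the paper's model, but it shows what "fragility point" means operationally:

```python
def fragility_point(initial_capacity, demand_rate, drain, steps,
                    coordination=0.0):
    """Step the demand/capacity dynamic until demand first exceeds capacity.

    Demand accumulates each step while response capacity is drained;
    `coordination` (0..1) is a hypothetical knob for how access to core
    information dampens effective demand, echoing the paper's finding that
    information flow improves response efficiency. Returns the failure
    step (the fragility point), or None if the system survives.
    """
    capacity, demand = initial_capacity, 0.0
    for t in range(1, steps + 1):
        demand += demand_rate * (1.0 - coordination)
        capacity -= drain
        if demand > capacity:
            return t
    return None
```

Raising `coordination` pushes the fragility point later, which is the qualitative effect of information sharing the abstract describes.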