7 research outputs found

    PCSI-labeled Directed Acyclic Graphs

    This thesis proposes a generalization of the model class of labeled directed acyclic graphs (LDAGs) introduced in Pensar et al. (2013), which are themselves a generalization of ordinary Bayesian networks. By augmenting a single DAG with labels, LDAGs can encode a more refined dependency structure than ordinary Bayesian networks. The labels correspond to context-specific independencies (CSIs), which must hold in every parameterization of the LDAG. The generalization developed in this thesis allows partial context-specific independencies (PCSIs) to be placed in the labels of an LDAG model, further enlarging the space of encodable dependency structures. A PCSI allows a set of random variables to be independent of another variable when that variable is restricted to a subset of its outcome space. The generalized model class is named PCSI-labeled directed acyclic graph (PLDAG). Several properties of PLDAGs are studied, including PCSI-equivalence of two distinct models, which corresponds to Markov equivalence of ordinary DAGs. The efficient structure learning algorithm introduced for LDAGs is extended to learn PLDAG models; it combines a non-reversible Markov chain Monte Carlo (MCMC) method for ordinary DAG structure learning with a greedy hill-climbing approach. The performance of PLDAG learning is compared against LDAG and traditional DAG learning using three measures: Kullback-Leibler divergence, the number of free parameters in the model, and the correctness of the learned DAG structure. The results show that PLDAGs further reduce the number of free parameters needed in the learned model compared to LDAGs, while maintaining the same level of performance with respect to Kullback-Leibler divergence. In the traditional DAG structure learning task, the PLDAG and LDAG algorithms were also able to recover the correct DAG structure with less data than the base MCMC algorithm.
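    The parameter savings that labels buy can be illustrated with a minimal sketch (hypothetical, not the thesis implementation): counting the free parameters of a conditional probability table for a binary variable X with binary parents Y and Z, with and without a CSI label encoding "X is independent of Z in the context Y = 1".

```python
from itertools import product

def free_params(parent_configs):
    # One free parameter per distinct parent configuration:
    # P(X=1 | config); P(X=0 | config) follows by normalization.
    return len(parent_configs)

# Without labels: every (Y, Z) configuration gets its own parameter.
full = list(product([0, 1], repeat=2))   # (y, z) pairs
print(free_params(full))                 # 4

# With the CSI label on Y=1: configurations (1, 0) and (1, 1) are
# merged into a single equivalence class sharing one parameter.
merged = {(y, z if y == 0 else None) for y, z in full}
print(free_params(merged))               # 3
```

    A PCSI label would merge configurations only over a subset of X's outcome space, giving a finer-grained (and smaller) reduction than a full CSI merge.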

    Computation of context as a cognitive tool

    In the field of cognitive science, as well as the area of Artificial Intelligence (AI), the role of context has been investigated in many forms and for many purposes. It is clear in both areas that consideration of contextual information is important. However, the significance of context has not been emphasized in the Bayesian networks literature. We suggest that consideration of context is necessary for acquiring knowledge about a situation and for refining current representational models that are potentially erroneous due to hidden independencies in the data. In this thesis, we make several contributions towards the automation of contextual consideration by discovering useful contexts from probability distributions. We show how context-specific independencies in Bayesian networks, and discovery algorithms traditionally used for efficient probabilistic inference, can contribute to the identification of contexts and in turn provide insight into otherwise puzzling situations. Consideration of context can also help clarify otherwise counterintuitive puzzles, such as those that result in instances of Simpson's paradox. In the social sciences, the branch of attribution theory is context-sensitive. We suggest a method to distinguish between dispositional causes and situational factors by means of contextual models. Finally, we address the work of Cheng and Novick dealing with causal attribution by human adults. Their probabilistic contrast model makes use of contextual information, called focal sets, that must be determined by a human expert. We suggest a method for discovering complete focal sets from probability distributions, without the human expert.
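    The role of context in resolving such puzzles can be seen in the classic kidney-stone data, in which conditioning on a context variable (stone size) reverses the apparent ranking of two treatments. This is a standard illustration of Simpson's paradox, not an example from the thesis itself:

```python
data = {
    # (treatment, stone_size): (successes, trials)
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, trials):
    return successes / trials

# Within each context, treatment A has the higher success rate ...
for size in ("small", "large"):
    assert rate(*data[("A", size)]) > rate(*data[("B", size)])

# ... yet aggregated over contexts, treatment B appears to win.
overall = {}
for t in ("A", "B"):
    s = sum(data[(t, size)][0] for size in ("small", "large"))
    n = sum(data[(t, size)][1] for size in ("small", "large"))
    overall[t] = s / n
assert overall["B"] > overall["A"]
print(round(overall["A"], 3), round(overall["B"], 3))
```

    Here the context variable is a confounder: severe cases were assigned disproportionately to treatment A, so the aggregated comparison is misleading while the per-context comparison is not.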

    Generalized belief change with imprecise probabilities and graphical models

    We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction are derived from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
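    The "kinematical" revision referred to here is Jeffrey's rule (probability kinematics), the standard mechanism for incorporating probabilistic (soft) evidence. A minimal sketch, with illustrative variable names and numbers that are not taken from the paper:

```python
# Prior joint distribution P(Rain, WetGrass) as a dict.
prior = {
    ("rain", "wet"): 0.30,
    ("rain", "dry"): 0.10,
    ("no_rain", "wet"): 0.15,
    ("no_rain", "dry"): 0.45,
}

def jeffrey_update(joint, var_index, new_marginal):
    """Revise `joint` so the marginal of the variable at `var_index`
    equals `new_marginal`, keeping conditionals given that variable fixed."""
    old_marginal = {}
    for outcome, p in joint.items():
        v = outcome[var_index]
        old_marginal[v] = old_marginal.get(v, 0.0) + p
    return {
        outcome: p * new_marginal[outcome[var_index]] / old_marginal[outcome[var_index]]
        for outcome, p in joint.items()
    }

# Soft evidence: Rain is now believed with probability 0.7 (was 0.4).
posterior = jeffrey_update(prior, 0, {"rain": 0.7, "no_rain": 0.3})
print(round(sum(posterior.values()), 10))        # 1.0
print(round(posterior[("rain", "wet")], 6))      # 0.3 * 0.7 / 0.4 = 0.525
```

    Virtual evidence instead scales each outcome by a likelihood ratio before renormalizing; Chan and Darwiche's inter-reducibility result says either form can be recast as the other.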

    Handbook of Mathematical Geosciences

    This Open Access handbook, published on the occasion of the IAMG's 50th anniversary, presents a compilation of invited path-breaking research contributions by award-winning geoscientists who have been instrumental in shaping the IAMG. It contains 45 chapters that are categorized broadly into five parts: (i) theory, (ii) general applications, (iii) exploration and resource estimation, (iv) reviews, and (v) reminiscences, covering related topics such as mathematical geosciences, mathematical morphology, geostatistics, fractals and multifractals, spatial statistics, multipoint geostatistics, compositional data analysis, informatics, geocomputation, numerical methods, and chaos theory in the geosciences.

    Contextual Weak Independence in Bayesian networks

    It is well known that the notion of (strong) conditional independence (CI) is too restrictive to capture independencies that only hold in certain contexts. This kind of contextual independency, called context-specific independence (CSI), can be used to facilitate the acquisition, representation, and inference of probabilistic knowledge. In this paper, we suggest the use of contextual weak independence (CWI) in Bayesian networks. It should be emphasized that the notion of CWI is a more general form of contextual independence than CSI. Furthermore, if a context-specific independence holds for all contexts, then CSI reduces to strong CI. On the other hand, if a contextual weak independence holds for all contexts, then CWI reduces to weak independence (WI), which is a more general noncontextual independency than strong CI. More importantly, complete axiomatizations are studied both for the class of WI and for the class of CI and WI together. Finally, WI is shown to be a necessary and sufficient condition for ensuring consistency in granular probabilistic networks.
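    The notion that CWI generalizes can be checked numerically. A hedged sketch (not from the paper, with a made-up joint distribution) testing a context-specific independence X ⟂ Z | C = c in a small discrete model:

```python
joint = {
    # (x, z, c): probability; constructed so that in context c=1,
    # X is independent of Z, while in context c=0 it is not.
    (0, 0, 0): 0.10, (0, 1, 0): 0.05, (1, 0, 0): 0.05, (1, 1, 0): 0.20,
    (0, 0, 1): 0.12, (0, 1, 1): 0.18, (1, 0, 1): 0.08, (1, 1, 1): 0.12,
}

def cond_x_given(z, c):
    """P(X | Z=z, C=c) as a dict over x."""
    num = {x: joint[(x, z, c)] for x in (0, 1)}
    total = sum(num.values())
    return {x: p / total for x, p in num.items()}

def holds_csi(c, tol=1e-9):
    """True iff P(X | Z=z, C=c) is identical for every value z."""
    dists = [cond_x_given(z, c) for z in (0, 1)]
    return all(abs(dists[0][x] - dists[1][x]) < tol for x in (0, 1))

print(holds_csi(1))   # True: X is independent of Z in context C=1
print(holds_csi(0))   # False: the dependence remains in context C=0
```

    CWI weakens this test further: the conditional distributions need only agree on a subset of the outcome space, which is what makes CWI strictly more general than CSI.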