
    Qualitative inequalities for squared partial correlations of a Gaussian random vector

    We describe various sets of conditional independence relationships sufficient for qualitatively comparing non-vanishing squared partial correlations of a Gaussian random vector. These sufficient conditions are satisfied by several graphical Markov models. We also develop rules for comparing the degree of association among the vertices of such Gaussian graphical models, and apply these rules to compare conditional dependencies on Gaussian trees. In particular, for trees we show that such dependence can be completely characterized by the lengths of the paths joining the dependent vertices to each other and to the vertices conditioned on. We also apply our results to postulate rules for model selection for polytree models. Our rules apply to the mutual information of Gaussian random vectors as well.
    Comment: 21 pages, 13 figures
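    The path-length characterization on trees can be illustrated with the standard first-order partial correlation formula. The sketch below is a toy example (not code from the paper) using the fact that marginal correlations multiply along the path between two vertices of a Gaussian tree, so conditioning on an intermediate vertex drives the partial correlation of the endpoints to zero.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of X and Y given Z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Gaussian Markov chain X1 - X2 - X3 with hypothetical edge correlations:
r12, r23 = 0.8, 0.6
r13 = r12 * r23  # marginal correlations multiply along tree paths

# Conditioning on the middle vertex separates the endpoints:
rho = partial_corr(r13, r12, r23)
print(abs(rho) < 1e-12)  # True: X1 and X3 are independent given X2
```

    The longer the path between two vertices, the more near-unit factors are multiplied together, so the marginal correlation (and hence the strength of dependence) decays with path length, matching the comparison rules described in the abstract.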

    Low-Order Conditional Independence Graphs for Inferring Genetic Networks

    As a powerful tool for analyzing full conditional (in)dependencies between random variables, graphical models have become increasingly popular for inferring genetic networks from gene expression data. However, full (unconstrained) conditional relationships between random variables can only be estimated accurately if the number of observations is large relative to the number of variables, which is usually not the case for high-throughput genomic data. Recently, simplified graphical modeling approaches have been proposed to determine dependencies between gene expression profiles. For sparse graphical models such as genetic networks, it is assumed that the zero- and first-order conditional independencies still reflect the full conditional independence structure between variables reasonably well. Moreover, low-order conditional independencies have the advantage that they can be estimated accurately even from a small number of observations. Therefore, using only zero- and first-order conditional dependencies to infer the complete graphical model can be very useful. Here, we analyze the statistical and probabilistic properties of these low-order conditional independence graphs (called 0-1 graphs). We find that for faithful graphical models, the 0-1 graph contains at least all edges of the full conditional independence graph (concentration graph). For simple structures such as Markov trees, the 0-1 graph even coincides with the concentration graph. Furthermore, we present some asymptotic results, and we demonstrate in a simulation study that despite their simplicity, 0-1 graphs are generally good estimators of sparse graphical models. Finally, the biological relevance of some applications is summarized.
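    A minimal version of a 0-1 graph estimator can be sketched as follows (an illustrative implementation with a hypothetical hard threshold in place of a formal significance test, not the paper's procedure): an edge (i, j) is kept only if the marginal (zero-order) correlation and every first-order partial correlation are bounded away from zero.

```python
import numpy as np

def zero_one_graph(X, thresh=0.1):
    """Estimate a 0-1 graph: keep edge (i, j) only if the marginal
    correlation AND all first-order partial correlations (conditioning on
    one other variable at a time) exceed the threshold in magnitude."""
    p = X.shape[1]
    R = np.corrcoef(X, rowvar=False)
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        for j in range(i + 1, p):
            if abs(R[i, j]) <= thresh:            # zero-order test
                continue
            keep = True
            for k in range(p):
                if k in (i, j):
                    continue
                pc = (R[i, j] - R[i, k] * R[j, k]) / np.sqrt(
                    (1 - R[i, k] ** 2) * (1 - R[j, k] ** 2))
                if abs(pc) <= thresh:             # first-order test
                    keep = False
                    break
            adj[i, j] = adj[j, i] = keep
    return adj

# Markov chain X1 -> X2 -> X3: the 0-1 graph should recover the chain,
# dropping the indirect X1 - X3 edge despite their marginal correlation.
rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = 0.8 * x2 + rng.normal(size=n)
A = zero_one_graph(np.column_stack([x1, x2, x3]))
print(A[0, 1], A[1, 2], A[0, 2])  # True True False (chain recovered)
```

    On a Markov tree such as this chain, the recovered 0-1 graph coincides with the concentration graph, as the abstract states; only first-order conditioning is needed, which is what makes the approach viable for small sample sizes.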

    A Model of Jury Decisions Where All Jurors Have the Same Evidence

    In the classical Condorcet jury model, different jurors' votes are independent random variables, where each juror has the same probability p > 1/2 of voting for the correct alternative. The probability that the correct alternative wins under majority voting converges to 1 as the number of jurors increases. Hence the probability of an incorrect majority vote can be made arbitrarily small, a result that may seem unrealistic. A more realistic model is obtained by relaxing the independence assumption and relating every juror's vote to the same "body of evidence". In terms of Bayesian trees, the votes are direct descendants not of the true state of the world ('guilty' or 'not guilty'), but of the "body of evidence", which in turn is a direct descendant of the true state of the world. This permits the possibility of a misleading body of evidence. Our main theorem shows that the probability that the correct alternative wins under majority voting converges to the probability that the body of evidence is not misleading, which may be strictly less than 1.
    Keywords: Condorcet jury theorem, conditional independence, interpretation of evidence, Bayesian trees
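    The convergence claim is easy to check by simulation. In the sketch below (illustrative parameter values, not from the paper), each juror independently follows the shared evidence with probability p > 1/2; as the jury grows, the majority almost surely tracks the evidence, so the probability of a correct verdict approaches the probability q that the evidence is not misleading, rather than 1.

```python
import random

def majority_correct_prob(n_jurors, p_vote=0.7, q_evidence=0.9,
                          trials=20000, seed=1):
    """Monte Carlo estimate of P(majority vote is correct) when all
    jurors condition on the same body of evidence."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        evidence_ok = rng.random() < q_evidence   # evidence not misleading?
        # Each juror follows the evidence independently with prob p_vote:
        follows = sum(rng.random() < p_vote for _ in range(n_jurors))
        majority_follows = follows > n_jurors / 2
        # The majority is correct iff it follows non-misleading evidence,
        # or bucks misleading evidence:
        correct += (majority_follows == evidence_ok)
    return correct / trials

# As the jury grows, the estimate approaches q_evidence = 0.9, not 1:
for n in (1, 11, 101):
    print(n, round(majority_correct_prob(n), 3))
```

    With n = 1 the probability is q*p + (1-q)*(1-p) = 0.66 under these parameters; by n = 101 the majority follows the evidence almost surely, so the estimate sits near q = 0.9, illustrating the theorem's limit.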

    Epistemic irrelevance in credal nets: the case of imprecise Markov trees

    We focus on credal nets, which are graphical models that generalise Bayesian nets to imprecise probability. We replace the notion of strong independence commonly used in credal nets with the weaker notion of epistemic irrelevance, which is arguably better suited for a behavioural theory of probability. Focusing on directed trees, we show how to combine the given local uncertainty models in the nodes of the graph into a global model, and we use this to construct and justify an exact message-passing algorithm that computes updated beliefs for a variable in the tree. The algorithm, which is linear in the number of nodes, is formulated entirely in terms of coherent lower previsions and is shown to satisfy a number of rationality requirements. We supply examples of the algorithm's operation, and report an application to on-line character recognition that illustrates the advantages of our approach for prediction. We comment on the perspectives opened by the availability, for the first time, of a truly efficient algorithm based on epistemic irrelevance.
    Comment: 29 pages, 5 figures, 1 table
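    As a much-simplified illustration of how imprecision propagates through a tree (a toy two-node case, not the paper's message-passing algorithm over coherent lower previsions), consider a credal net X -> Y where the conditionals of Y are precise but the marginal P(X=1) is only known to lie in an interval; lower and upper probabilities for Y are obtained by optimizing over that interval.

```python
def bounds_on_child(p_x1_interval, p_y1_given_x0, p_y1_given_x1):
    """Lower/upper probability of Y=1 in a toy two-node credal tree X -> Y,
    where P(X=1) is only known to lie in an interval (hypothetical numbers)."""
    lo, hi = p_x1_interval

    def p_y1(px1):
        # Law of total probability for a fixed value of P(X=1):
        return (1 - px1) * p_y1_given_x0 + px1 * p_y1_given_x1

    # p_y1 is linear in px1, so its extrema lie at the interval endpoints:
    vals = (p_y1(lo), p_y1(hi))
    return min(vals), max(vals)

lower, upper = bounds_on_child((0.3, 0.6), p_y1_given_x0=0.2, p_y1_given_x1=0.9)
print(lower, upper)  # roughly 0.41 and 0.62
```

    The paper's algorithm generalises this idea to whole directed trees under epistemic irrelevance, passing such optimized bounds as messages between nodes in time linear in the number of nodes.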