
    bNEAT: a Bayesian network method for detecting epistatic interactions in genome-wide association studies.

    Detecting epistatic interactions plays a significant role in improving the understanding of pathogenesis and the prevention, diagnosis and treatment of complex human diseases. A recent study on the automatic detection of epistatic interactions shows that Markov blanket-based methods are capable of finding genetic variants strongly associated with common diseases and of reducing false positives when the number of instances is large. Unfortunately, a typical dataset from genome-wide association studies contains only a very limited number of samples, and current methods, including Markov blanket-based methods, may perform poorly in this setting. RESULTS: To address the small-sample problem, we propose a Bayesian network-based approach (bNEAT) to detect epistatic interactions. The proposed method also employs a branch-and-bound technique for learning. We apply the proposed method to simulated datasets based on four disease models and to a real dataset. Experimental results show that our method outperforms Markov blanket-based methods and other commonly used methods, especially when the number of samples is small. CONCLUSIONS: Our results show that bNEAT retains strong power regardless of the number of samples and is especially suitable for detecting epistatic interactions with slight or no marginal effects. The merits of the proposed approach lie in two aspects: a score for Bayesian network structure learning that can reflect higher-order epistatic interactions, and a heuristic Bayesian network structure learning method.
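    The scoring idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' bNEAT implementation: it computes a Cooper-Herskovits (K2) Bayesian score for a disease node given a candidate set of SNP parents, which can reflect epistasis because it conditions on the joint genotype configuration of all parents. The function names and toy data below are illustrative only.

```python
# Minimal sketch (not the bNEAT code): K2 Bayesian score of a binary disease
# node given a candidate set of SNP parents. Conditioning on the joint parent
# configuration lets the score pick up higher-order (epistatic) effects.
import numpy as np
from math import lgamma
from itertools import product

def k2_score(disease, snps):
    """Log K2 score of `disease` (0/1 array) given `snps` (list of 0/1/2 genotype arrays)."""
    r = 2                                   # disease states: case / control
    parent_states = [sorted(set(s)) for s in snps]
    score = 0.0
    for config in product(*parent_states):  # every joint genotype configuration
        mask = np.ones(len(disease), dtype=bool)
        for s, v in zip(snps, config):
            mask &= (s == v)
        counts = [np.sum(disease[mask] == k) for k in range(r)]
        n = sum(counts)
        score += lgamma(r) - lgamma(n + r) + sum(lgamma(c + 1) for c in counts)
    return score

# Toy usage: a disease that depends only on the joint configuration of two SNPs.
rng = np.random.default_rng(0)
snp1, snp2 = rng.integers(0, 3, 200), rng.integers(0, 3, 200)
disease = ((snp1 == 2) & (snp2 == 2)).astype(int)   # purely interactive effect
print(k2_score(disease, [snp1]), k2_score(disease, [snp1, snp2]))
```

    In this toy run the two-SNP parent set should score higher than the single-SNP one, since the disease is determined only by the SNPs' joint configuration.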

    Learning the Structure of Deep Sparse Graphical Models

    Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so that each unit can additionally vary its behavior between discrete and continuous representations. We provide Markov chain Monte Carlo algorithms for inference in these belief networks and explore the structures learned on several image data sets. Comment: 20 pages, 6 figures, AISTATS 2010.
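    As a rough illustration of the generative side only (a sketch under stated assumptions, not the paper's algorithm or its MCMC inference), the snippet below draws a layered, directed structure from a cascading Indian buffet process: each layer's units choose parents in the next deeper layer via an IBP draw, and the cascade stops when a layer introduces no new units. The concentration parameter `alpha` and the visible-layer width are made-up values for the toy run.

```python
# Illustrative sketch: sampling a layered, directed structure from a
# cascading Indian buffet process (CIBP). Not the paper's code.
import numpy as np

def ibp_draw(num_children, alpha, rng):
    """Return a (num_children x num_parents) binary matrix sampled from an IBP."""
    dish_counts = []                     # how many children already use each parent unit
    rows = []
    for i in range(1, num_children + 1):
        row = [1 if rng.random() < m / i else 0 for m in dish_counts]
        new = rng.poisson(alpha / i)     # brand-new parent units for this child
        row += [1] * new
        dish_counts = [m + r for m, r in zip(dish_counts, row)] + [1] * new
        rows.append(row)
    width = len(dish_counts)
    return np.array([r + [0] * (width - len(r)) for r in rows], dtype=int)

def cascading_ibp(num_visible, alpha=1.0, seed=0):
    """Sample layer-to-layer adjacency matrices until a layer introduces no units."""
    rng = np.random.default_rng(seed)
    layers, width = [], num_visible
    while width > 0:
        adj = ibp_draw(width, alpha, rng)
        layers.append(adj)
        width = adj.shape[1]             # next layer's width = number of "dishes" used
    return layers

for depth, adj in enumerate(cascading_ibp(num_visible=6, alpha=0.8)):
    print(f"layer {depth} -> {depth + 1}: {adj.shape[0]} units connect to {adj.shape[1]} units")
```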

    Improving Structure MCMC for Bayesian Networks through Markov Blanket Resampling

    Algorithms for inferring the structure of Bayesian networks from data have become an increasingly popular method for uncovering the direct and indirect influences among variables in complex systems. A Bayesian approach to structure learning uses posterior probabilities to quantify the strength with which the data and prior knowledge jointly support each possible graph feature. Existing Markov chain Monte Carlo (MCMC) algorithms for estimating these posterior probabilities are slow to mix and converge, especially for large networks. We present a novel Markov blanket resampling (MBR) scheme that intermittently reconstructs the Markov blanket of nodes, thus allowing the sampler to more effectively traverse low-probability regions between local maxima. As we can derive the complementary forward and backward directions of the MBR proposal distribution, the Metropolis-Hastings algorithm can be used to account for any asymmetries in these proposals. Experiments across a range of network sizes show that the MBR scheme outperforms other state-of-the-art algorithms in terms of both learning performance and convergence rate. In particular, MBR achieves better learning performance than the other algorithms when the number of observations is relatively small, and faster convergence when the number of variables in the network is large.
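    The point about accounting for proposal asymmetry can be made concrete with a generic Metropolis-Hastings step. The sketch below is illustrative only; `log_score`, `propose_mbr`, and `log_q` are hypothetical placeholders for the structure score and the MBR proposal and its density, not the authors' actual code.

```python
# Hedged sketch of one Metropolis-Hastings structure-MCMC step with an
# asymmetric proposal (e.g. Markov blanket resampling). All callables are
# assumed placeholders supplied by the user.
import math
import random

def mh_structure_step(graph, data, log_score, propose_mbr, log_q, rng=random):
    """One MCMC step: propose a new DAG and accept/reject with the MH ratio."""
    candidate = propose_mbr(graph, rng)              # e.g. resample one node's Markov blanket
    log_alpha = (log_score(candidate, data) - log_score(graph, data)
                 + log_q(graph, candidate)           # reverse move: candidate -> graph
                 - log_q(candidate, graph))          # forward move: graph -> candidate
    if math.log(rng.random()) < min(0.0, log_alpha):
        return candidate                             # accept the proposal
    return graph                                     # reject: keep the current DAG
```

    The explicit `log_q` correction is what lets an asymmetric move such as MBR still target the structure posterior.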

    Using node ordering to improve Structure MCMC for Bayesian Model Averaging

    In this thesis, I address the important problem of estimating the structure of Bayesian network models using a Bayesian model averaging approach. Bayesian networks are probabilistic graphical models that are widely used for probabilistic inference and causal modeling. Learning the structure of Bayesian networks can reveal insights into the causal structure of the underlying domain. Owing to the super-exponential size of the structure space, it is a challenging task to find the single network model that best explains the data. The problem is worsened when the amount of available data is modest, as there may be numerous models with non-negligible posterior probability. We are therefore interested in calculating the posterior probability of a feature, such as the presence of an edge from one particular node to another, or a particular set of nodes being the parent set of a specific node. The contribution of this thesis includes a Markov chain Monte Carlo simulation approach to sample network structures from the posterior, together with a Bayesian model averaging approach to estimate the posterior probabilities of various features.
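    To make the model-averaging step concrete: once structures have been sampled from the posterior (for example by structure MCMC), the posterior of an edge feature can be estimated as the fraction of sampled DAGs containing that edge. The sketch below is a minimal illustration under that assumption, not the thesis code; graphs are represented as sets of (parent, child) tuples.

```python
# Minimal sketch of Bayesian model averaging for edge features over
# MCMC-sampled DAGs. Illustrative only; DAGs are sets of (parent, child) tuples.
from collections import Counter

def edge_posteriors(sampled_dags, burn_in=0):
    """Estimate P(u -> v | data) for every edge seen in the post-burn-in samples."""
    kept = sampled_dags[burn_in:]
    counts = Counter(edge for dag in kept for edge in dag)
    return {edge: n / len(kept) for edge, n in counts.items()}

# Toy usage with three hand-written "samples":
samples = [{("A", "B"), ("B", "C")}, {("A", "B")}, {("A", "B"), ("A", "C")}]
print(edge_posteriors(samples))   # ("A", "B") appears in all samples -> posterior 1.0
```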