    Uniform random generation of large acyclic digraphs

    Directed acyclic graphs are the basic representation of the structure underlying Bayesian networks, which represent multivariate probability distributions. In many practical applications, such as the reverse engineering of gene regulatory networks, not only the estimation of model parameters but also the reconstruction of the structure itself is of great interest. A uniform sample from the space of directed acyclic graphs is required both to assess different structure learning algorithms in simulation studies and to evaluate the prevalence of certain structural features. Here we analyse how to sample acyclic digraphs uniformly at random through recursive enumeration, an approach previously thought too computationally involved. Based on complexity considerations, we discuss in particular how the enumeration directly provides an exact method, which avoids the convergence issues of the alternative Markov chain methods and is in fact computationally much faster. The limiting behaviour of the distribution of acyclic digraphs then allows us to sample arbitrarily large graphs. Building on the ideas of recursive enumeration based sampling, we also introduce a novel hybrid Markov chain with much faster convergence than current alternatives while still being easy to adapt to various restrictions. Finally, we discuss how to include such restrictions in the combinatorial enumeration and the new hybrid Markov chain method for efficient uniform sampling of the corresponding graphs. Comment: 15 pages, 2 figures. To appear in Statistics and Computing.
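
    The recursive enumeration underlying this approach counts labelled DAGs by their number of source nodes (nodes with no incoming edges) and turns the same recursion into an exact sampler: draw the number of sources with probability proportional to the corresponding count, recurse on the remaining nodes, and connect the layers with suitably constrained random parent sets. The sketch below is a minimal illustration of this idea rather than the paper's implementation; the function names, the {node: parents} representation and the rejection step for non-empty parent sets are our own choices, and exact big-integer arithmetic limits it to moderate graph sizes.

```python
import math
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n, k):
    """Number of labelled DAGs on n nodes with exactly k source nodes (in-degree 0)."""
    if k == n:
        return 1                      # only the empty graph has all nodes as sources
    if n <= 0 or k <= 0 or k > n:
        return 0
    total = 0
    for j in range(1, n - k + 1):     # j = number of sources of the remaining sub-DAG
        total += (2**k - 1)**j * 2**(k * (n - k - j)) * num_dags(n - k, j)
    return math.comb(n, k) * total

def _pick(weights):
    """Sample a key with probability proportional to its (big) integer weight."""
    r = random.randrange(sum(weights.values()))
    for key, w in weights.items():
        if r < w:
            return key
        r -= w

def _build(labels, k, parents):
    """Grow a uniform DAG with exactly k sources on `labels`; return its source set."""
    n = len(labels)
    sources = set(random.sample(labels, k))
    rest = [v for v in labels if v not in sources]
    if rest:
        weights = {j: (2**k - 1)**j * 2**(k * (n - k - j)) * num_dags(n - k, j)
                   for j in range(1, n - k + 1)}
        sub_sources = _build(rest, _pick(weights), parents)
        for v in rest:
            # sources of the sub-DAG must receive at least one edge from the new
            # sources; the other nodes receive an arbitrary (possibly empty) subset
            while True:
                ps = {s for s in sources if random.random() < 0.5}
                if ps or v not in sub_sources:
                    break
            parents[v] |= ps
    return sources

def sample_dag(n):
    """Return a uniformly random labelled DAG on nodes 0..n-1 as a {node: parents} dict."""
    parents = {v: set() for v in range(n)}
    k = _pick({k: num_dags(n, k) for k in range(1, n + 1)})
    _build(list(range(n)), k, parents)
    return parents

print(sample_dag(6))
```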

    Hierarchical Models in the Brain

    This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
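
    In the notation commonly used for such hierarchical dynamic models (hidden states x, causal states v, and random fluctuations w and z at each level; this labelling is ours), the hierarchy can be sketched as below: the causal states produced by level i enter level i-1 as input, and the first level generates the observed data y.

```latex
\begin{aligned}
y             &= g\!\left(x^{(1)}, v^{(1)}\right) + z^{(1)} \\
\dot{x}^{(1)} &= f\!\left(x^{(1)}, v^{(1)}\right) + w^{(1)} \\
              &\;\;\vdots \\
v^{(i-1)}     &= g\!\left(x^{(i)}, v^{(i)}\right) + z^{(i)} \\
\dot{x}^{(i)} &= f\!\left(x^{(i)}, v^{(i)}\right) + w^{(i)}
\end{aligned}
```

    Dropping the hidden states x reduces each level to a static mapping, recovering the general-linear-model special case, while linear choices of f and g yield the convolution models mentioned above.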

    Computational neuroimaging strategies for single patient predictions

    Neuroimaging increasingly exploits machine learning techniques in an attempt to achieve clinically relevant single-subject predictions. An alternative to machine learning, which tries to establish predictive links between features of the observed data and clinical variables, is the deployment of computational models for inferring the (patho)physiological and cognitive mechanisms that generate behavioural and neuroimaging responses. This paper discusses the rationale behind a computational approach to neuroimaging-based single-subject inference, focusing on its potential for characterising disease mechanisms in individual subjects and mapping these characterisations to clinical predictions. Following an overview of two main approaches – Bayesian model selection and generative embedding – which can link computational models to individual predictions, we review how these methods accommodate heterogeneity in psychiatric and neurological spectrum disorders, help avoid erroneous interpretations of neuroimaging data, and establish a link between a mechanistic, model-based approach and the statistical perspectives afforded by machine learning.
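
    Generative embedding, the second of the two approaches mentioned, fits a generative model (for example a dynamic causal model) to each subject and then uses the subject-wise posterior parameter estimates as a low-dimensional, mechanistically interpretable feature space for a supervised classifier. A hypothetical sketch of that pipeline follows; the feature matrix, labels, and the choice of a linear support vector machine are placeholders of our own rather than the paper's protocol.

```python
# Hypothetical generative-embedding sketch: posterior parameter estimates from a
# per-subject generative model serve as features for a clinical classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: rows = subjects, columns = posterior means of model parameters
# (e.g. effective connectivity strengths); y = clinical labels to be predicted.
X = rng.normal(size=(40, 6))
y = rng.integers(0, 2, size=40)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)   # out-of-sample predictive accuracy
print(scores.mean())
```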

    A survey of Bayesian Network structure learning

    Expert judgement for dependence in probabilistic modelling: a systematic literature review and future research directions

    Many applications in decision making under uncertainty and probabilistic risk assessment require the assessment of multiple, dependent uncertain quantities, so that in addition to marginal distributions, interdependence needs to be modelled in order to properly understand the overall risk. Nevertheless, relevant historical data on dependence information are often not available or simply too costly to obtain. In this case, the only sensible option is to elicit this uncertainty through the use of expert judgements. In expert judgement studies, a structured approach to eliciting variables of interest is desirable so that their assessment is methodologically robust. One of the key decisions during the elicitation process is the form in which the uncertainties are elicited. This choice is subject to various, potentially conflicting, desiderata related to e.g. modelling convenience, coherence between elicitation parameters and the model, combining judgements, and the assessment burden for the experts. While extensive and systematic guidance to address these considerations exists for single variable uncertainty elicitation, for higher dimensions very little such guidance is available. Therefore this paper offers a systematic review of the current literature on eliciting dependence. The literature on the elicitation of dependence parameters such as correlations is presented alongside commonly used dependence models and experience from case studies. From this, guidance about the strategy for dependence assessment is given and gaps in the existing research are identified to determine future directions for structured methods to elicit dependence.
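
    One strategy that recurs in this literature is to elicit dependence indirectly, for instance as a conditional exceedance probability P(X above its median | Y above its median), and to convert the answer into a product-moment correlation under a bivariate normal (Gaussian copula) assumption, for which that probability equals 1/2 + arcsin(rho)/pi. The sketch below illustrates such a conversion together with a positive semi-definiteness check on the assembled matrix; the copula assumption, the numbers and the variable names are our own illustration, not a recommendation drawn from the review.

```python
# Converting elicited conditional exceedance probabilities into correlations under
# a bivariate-normal assumption: P(X > median_X | Y > median_Y) = 1/2 + arcsin(rho)/pi.
import numpy as np

def rho_from_exceedance(p):
    """Correlation implied by an elicited P(X above its median | Y above its median)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probabilities must lie in [0, 1]")
    return float(np.sin(np.pi * (p - 0.5)))

# Hypothetical elicited probabilities for the three pairs of three variables.
elicited = {(0, 1): 0.70, (0, 2): 0.60, (1, 2): 0.55}

R = np.eye(3)
for (i, j), p in elicited.items():
    R[i, j] = R[j, i] = rho_from_exceedance(p)

# Pairwise elicitation does not guarantee a coherent joint correlation matrix,
# so check positive semi-definiteness before using R in a dependence model.
print(R)
print("valid correlation matrix:", np.linalg.eigvalsh(R).min() >= -1e-10)
```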

    Dynamic models of brain imaging data and their Bayesian inversion

    This work is about understanding the dynamics of neuronal systems, in particular with respect to brain connectivity. It addresses complex neuronal systems by looking at neuronal interactions and their causal relations. These systems are characterised using a generic approach to dynamical system analysis of brain signals - dynamic causal modelling (DCM). DCM is a technique for inferring directed connectivity among brain regions, which distinguishes between a neuronal and an observation level. DCM is a natural extension of the convolution models used in the standard analysis of neuroimaging data. This thesis develops biologically constrained and plausible models, informed by anatomical and physiological principles. Within this framework, it uses mathematical formalisms of neural mass, mean-field and ensemble dynamic causal models as generative models for observed neuronal activity. These models allow for the evaluation of intrinsic neuronal connections and high-order statistics of neuronal states, using Bayesian estimation and inference. Critically, it employs Bayesian model selection (BMS) to discover the best among several equally plausible models. In the first part of this thesis, a two-state DCM for functional magnetic resonance imaging (fMRI) is described, where each region can model selective changes in both extrinsic and intrinsic connectivity. The second part is concerned with how the sigmoid activation function of neural-mass models (NMM) can be understood in terms of the variance or dispersion of neuronal states. The third part presents a mean-field model (MFM) for neuronal dynamics as observed with magneto- and electroencephalographic data (M/EEG). In the final part, the MFM is used as a generative model in a DCM for M/EEG and compared to the NMM using Bayesian model selection.
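
    At the neuronal level, the original DCM for fMRI rests on a bilinear approximation, dx/dt = (A + sum_j u_j B_j) x + C u, in which A holds the intrinsic (endogenous) connections, each B_j encodes how experimental input u_j modulates them, and C the driving inputs; the two-state formulation described in the first part of the thesis extends this scheme. The sketch below simulates only the standard single-state neuronal equation with an Euler step and made-up parameter values, and omits the haemodynamic observation model.

```python
# Minimal sketch of the bilinear DCM neuronal state equation
#   dx/dt = (A + sum_j u_j * B_j) x + C u
# for two regions and one experimental input (parameter values are made up).
import numpy as np

A = np.array([[-0.5,  0.0],            # intrinsic (endogenous) connectivity
              [ 0.3, -0.5]])
B = [np.array([[0.0, 0.0],             # modulation of the 1 -> 2 connection by input 1
               [0.4, 0.0]])]
C = np.array([[1.0],                   # driving influence of input 1 on region 1
              [0.0]])

def simulate(u, dt=0.1):
    """Euler integration of the neuronal states for an input time series u (T x 1)."""
    x = np.zeros(2)
    trajectory = []
    for u_t in u:
        J = A + sum(u_t[j] * B[j] for j in range(len(B)))
        x = x + dt * (J @ x + C @ u_t)
        trajectory.append(x.copy())
    return np.array(trajectory)

u = np.zeros((200, 1))
u[50:100, 0] = 1.0                     # a boxcar input switched on mid-run
states = simulate(u)
print(states[-1])                      # final neuronal state of both regions
```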