
    Multi-subject analyses with dynamic causal modeling

    Currently, most studies that employ dynamic causal modeling (DCM) use random-effects (RFX) analysis to make group inferences, applying a second-level frequentist test to subjects' parameter estimates. In some instances, however, fixed-effects (FFX) analysis can be more appropriate. Such analyses can be implemented by combining the subjects' posterior densities according to Bayes' theorem, either on a multivariate basis (Bayesian parameter averaging, BPA) or a univariate basis (posterior variance weighted averaging, PVWA), or by applying DCM to time series averaged across subjects beforehand (temporal averaging, TA). While all of these FFX approaches have the advantage of allowing for Bayesian inferences on parameters, a systematic comparison of their statistical properties has so far been lacking. Based on simulated data generated from a two-region network, we examined the effects of signal-to-noise ratio (SNR) and population heterogeneity on group-level parameter estimates. Data sets were simulated assuming either a homogeneous large population (N=60) with constant connectivities across subjects or a heterogeneous population with varying parameters. TA showed advantages at lower SNR but is limited in its applicability. Because BPA and PVWA take the posterior (co)variance structure into account, they can yield non-intuitive results when only posterior means are considered. This problem is relevant for high-SNR data, for pronounced parameter interdependencies, and when FFX assumptions are violated (i.e., inhomogeneous groups). It diminishes with decreasing SNR and is absent for models with independent parameters or when FFX assumptions are appropriate. Group results obtained with these FFX approaches should therefore be interpreted carefully, by considering estimates of dependencies among model parameters.
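    The BPA and PVWA schemes mentioned in the abstract amount to precision-weighted combination of subject-level Gaussian posteriors. The following is a minimal numerical sketch of that idea, assuming Gaussian posteriors estimated under a common prior; the function names are illustrative and the flat-prior simplification used for PVWA is an assumption, not the routines used in the study.

```python
import numpy as np

def bayesian_parameter_average(means, covs, prior_mean, prior_cov):
    """Fixed-effects group posterior via Bayesian parameter averaging (BPA).

    Combines per-subject Gaussian posteriors N(m_i, C_i), all estimated under
    the same Gaussian prior N(m0, C0), into a single group posterior.
    Illustrative sketch only, not the SPM implementation.
    """
    n = len(means)
    prior_prec = np.linalg.inv(prior_cov)
    # Sum of posterior precisions, subtracting the prior that was counted n times
    group_prec = sum(np.linalg.inv(C) for C in covs) - (n - 1) * prior_prec
    group_cov = np.linalg.inv(group_prec)
    # Precision-weighted posterior means, with the same prior correction
    weighted = sum(np.linalg.inv(C) @ m for m, C in zip(means, covs)) \
               - (n - 1) * prior_prec @ prior_mean
    group_mean = group_cov @ weighted
    return group_mean, group_cov

def pvwa(means, variances):
    """Univariate posterior variance weighted averaging (PVWA), assuming a flat
    prior: each parameter is combined independently using only its variance."""
    means = np.asarray(means)               # shape (n_subjects, n_params)
    precisions = 1.0 / np.asarray(variances)
    group_var = 1.0 / precisions.sum(axis=0)
    group_mean = group_var * (precisions * means).sum(axis=0)
    return group_mean, group_var
```

    Because BPA uses the full posterior covariance of each subject, strongly correlated parameters can pull the group mean away from the arithmetic average of the subject means, which is the non-intuitive behaviour the abstract cautions about.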

    Ten simple rules for dynamic causal modeling

    Dynamic causal modeling (DCM) is a generic Bayesian framework for inferring hidden neuronal states from measurements of brain activity. It provides posterior estimates of neurobiologically interpretable quantities, such as the effective strength of synaptic connections among neuronal populations and their context-dependent modulation. DCM is increasingly used in the analysis of a wide range of neuroimaging and electrophysiological data. Given the relative complexity of DCM compared to conventional analysis techniques, a good knowledge of its theoretical foundations is needed to avoid pitfalls in its application and in the interpretation of results. By providing good-practice recommendations for DCM, in the form of ten simple rules, we hope that this article serves as a helpful tutorial for the growing community of DCM users.