79 research outputs found

    If God does not explain parsimony, what does?

    Although many scholars take parsimony for granted today, Elliott Sober shows in his latest book, Ockham’s Razors, that they might not be rationally justified in doing so. In particular, he claims that the famous Ockham’s Razor, the heuristic that says one should not postulate more entities than necessary, rests on some implicit assumptions that go back to Newton and his rules of reasoning. The problem is that Newton justified those basic rules on theological grounds: the world is parsimonious because God is orderly. All is not lost: Sober suggests that two contemporary perspectives from probability theory do justify parsimony. The first is related to Bayesianism and the fact that Ockham’s Razor is embedded in Bayes’ theorem. Sober criticizes this view and argues for an alternative, one in which predictive accuracy is more fundamental. I suggest that Sober might be right about the unseen role of predictive accuracy, but that this does not entail that Bayesians should adhere to Sober’s framework. It is my contention that Sober’s case against Bayesian model selection has more to do with the Bayesian worldview than with the methodology per se.
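
The claim that Ockham's Razor is embedded in Bayes' theorem can be made concrete with a toy example (not from the book, and deliberately simplified): a simpler model commits its predictive probability to fewer possible datasets, so any dataset both models can accommodate receives a higher marginal likelihood under the simpler model.

```python
# Toy illustration of the "Bayesian Occam factor": a model that predicts
# fewer possible datasets assigns each compatible dataset more probability.

def marginal_likelihood(observed, predicted_outcomes):
    """Uniform predictive distribution over the outcomes a model allows."""
    if observed not in predicted_outcomes:
        return 0.0
    return 1.0 / len(predicted_outcomes)

simple_model  = {0, 1, 2, 3}        # commits to 4 possible outcomes
complex_model = set(range(16))      # hedges over 16 possible outcomes

observed = 2                        # an outcome both models can explain

p_simple  = marginal_likelihood(observed, simple_model)    # 0.25
p_complex = marginal_likelihood(observed, complex_model)   # 0.0625

# With equal prior model probabilities, the posterior odds equal this ratio
bayes_factor = p_simple / p_complex
```

With equal priors, the Bayes factor of 4 favours the simpler model without any explicit parsimony penalty, which is the sense in which the Razor is "built in".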

    Detection of a slow-flow component in contrast-enhanced ultrasound of the synovia for the differential diagnosis of arthritis

    Contrast-enhanced ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity that can be useful in quantifying different perfusion patterns. This can be particularly important in the early detection and differentiation of different types of arthritis. A Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. However, in some cases the heterogeneity of the kinetics can be such that even the Gamma model does not properly describe the curve, especially in the presence of recirculation or of an additional slow-flow component. In this work we apply to CEUS data both the Gamma-variate and the single compartment recirculation (SCR) model, which explicitly takes into account an additional slow-flow component. The models are solved within a Bayesian framework. We also employed the perfusion estimates obtained with the SCR model to train a support vector machine (SVM) classifier to distinguish different types of arthritis. When dividing the patients into two groups (rheumatoid arthritis and polyarticular RA-like psoriatic arthritis vs. other arthritis types), the slow component amplitude was significantly different across groups: mean values of a1 and its variability were statistically higher in RA and RA-like patients (131% increase in mean, p = 0.035, and 73% increase in standard deviation, p = 0.049, respectively). The SVM classifier achieved a balanced accuracy of 89%, with a sensitivity of 100% and a specificity of 78%. © 2017 SPIE
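
As a rough illustration of the kind of curve being fitted (a least-squares sketch on synthetic data, not the Bayesian scheme or clinical data of the paper), the gamma-variate time-intensity model and its recovery from a noiseless curve look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus model: zero before arrival time t0, then a
    skewed wash-in/wash-out curve A*(t-t0)^alpha * exp(-(t-t0)/beta)."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0.0, 60.0, 200)              # 60 s acquisition (made up)
true_params = (5.0, 4.0, 2.0, 6.0)           # A, t0, alpha, beta
y = gamma_variate(t, *true_params)           # noiseless synthetic curve

# Bounded least-squares fit from a reasonable starting guess
fit, _ = curve_fit(gamma_variate, t, y, p0=(4.0, 3.0, 1.5, 5.0),
                   bounds=([0.0, 0.0, 0.1, 0.1], [20.0, 20.0, 10.0, 30.0]))
```

Perfusion summaries follow from the fitted parameters, e.g. time-to-peak is t0 + alpha*beta; the paper's Bayesian treatment additionally yields posterior uncertainty on these quantities.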

    Taming the shrewdness of neural function: Methodological challenges in computational psychiatry

    Computational psychiatry involves applying a collection of theoretical notions, including data analysis and mathematical and computational modeling, to the problems of psychiatry. It is a nascent field whose central methods are just in the process of being developed. We consider some of the challenges and opportunities for techniques and approaches that are presenting themselves as it starts to take on a more concrete form.

    Effective immunity and second waves: a dynamic causal modelling study [version 1; peer review: 1 approved, 1 not approved]

    This technical report addresses a pressing issue in the trajectory of the coronavirus outbreak; namely, the rate at which effective immunity is lost following the first wave of the pandemic. This is a crucial epidemiological parameter that speaks to both the consequences of relaxing lockdown and the propensity for a second wave of infections. Using a dynamic causal model of reported cases and deaths from multiple countries, we evaluated the evidence for models of progressively longer periods of immunity. The results speak to an effective population immunity of about three months that, under the model, defers any second wave for approximately six months in most countries. This may have implications for the window of opportunity for tracking and tracing, as well as for developing vaccination programmes and other therapeutic interventions.
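
The qualitative mechanism, namely that waning immunity permits a second wave, can be sketched with a far simpler model than the dynamic causal model used in the report. The toy discrete-time SIRS model below (all rates invented for illustration) returns recovered individuals to the susceptible pool at a fixed rate:

```python
import numpy as np

def sirs(beta=0.3, gamma=0.1, waning=1/90, days=400):
    """Toy discrete-time SIRS model; `waning` is the daily rate at which
    recovered individuals lose immunity and become susceptible again."""
    S, I, R = 0.999, 0.001, 0.0        # fractions of the population
    prevalence = []
    for _ in range(days):
        new_inf = beta * S * I          # transmission
        new_rec = gamma * I             # recovery
        new_sus = waning * R            # loss of effective immunity
        S += new_sus - new_inf
        I += new_inf - new_rec
        R += new_rec - new_sus
        prevalence.append(I)
    return np.array(prevalence)

short_immunity = sirs(waning=1/90)      # immunity lasts ~3 months on average
long_immunity  = sirs(waning=1/365)     # immunity lasts ~1 year on average
```

With ~3-month immunity the epidemic resurges within the simulated year, whereas slower waning delays the resurgence, which is the intuition behind treating the waning rate as the key parameter to estimate.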

    Analysing connectivity with Granger causality and dynamic causal modelling

    This review considers state-of-the-art analyses of functional integration in neuronal macrocircuits. We focus on detecting and estimating directed connectivity in neuronal networks using Granger causality (GC) and dynamic causal modelling (DCM). These approaches are considered in the context of functional segregation and integration and — within functional integration — the distinction between functional and effective connectivity. We review recent developments that have enjoyed a rapid uptake in the discovery and quantification of functional brain architectures. GC and DCM have distinct and complementary ambitions that are usefully considered in relation to the detection of functional connectivity and the identification of models of effective connectivity. We highlight the basic ideas upon which they are grounded, provide a comparative evaluation and point to some outstanding issues.
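
In the bivariate linear case, GC reduces to asking whether the lagged values of one series improve the prediction of another beyond its own past. A self-contained sketch on simulated data (where x drives y by construction, with invented coefficients):

```python
import numpy as np

# Simulate two AR(1)-style series in which x Granger-causes y, but not vice versa
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + rng.normal()

def residual_var(target, predictors):
    """Residual variance of an ordinary least-squares regression."""
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

# GC(x -> y): does adding x's past reduce the prediction error of y?
var_restricted = residual_var(y[1:], [y[:-1]])
var_full       = residual_var(y[1:], [y[:-1], x[:-1]])
gc_x_to_y = np.log(var_restricted / var_full)

# GC(y -> x): the reverse direction should be near zero
var_restricted2 = residual_var(x[1:], [x[:-1]])
var_full2       = residual_var(x[1:], [x[:-1], y[:-1]])
gc_y_to_x = np.log(var_restricted2 / var_full2)
```

The asymmetry (gc_x_to_y large, gc_y_to_x near zero) is what GC detects; DCM, by contrast, would compare explicit generative models of the coupling.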

    A Primer on Variational Laplace (VL)

    This article details a scheme for approximate Bayesian inference, which has underpinned thousands of neuroimaging studies since its introduction 15 years ago. Variational Laplace (VL) provides a generic approach to fitting linear or non-linear models, which may be static or dynamic, returning a posterior probability density over the model parameters and an approximation of log model evidence, which enables Bayesian model comparison. VL applies variational Bayesian inference in conjunction with quadratic or Laplace approximations of the evidence lower bound (free energy). Importantly, update equations do not need to be derived for each model under consideration, providing a general method for fitting a broad class of models. This primer is intended for experimenters and modellers who may wish to fit models to data using variational Bayesian methods, without assuming previous experience of variational Bayes or machine learning. Accompanying code demonstrates how to fit different kinds of model using the reference implementation of the VL scheme in the open-source Statistical Parametric Mapping (SPM) software package. In addition, we provide a standalone software function that does not require SPM, in order to ease translation to other fields, together with detailed pseudocode. Finally, the supplementary materials provide worked derivations of the key equations.
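
The flavour of such a scheme can be conveyed in a few lines. The sketch below is illustrative only (it is not the SPM reference implementation described in the primer): a Gaussian prior, a fixed noise precision, a numerical Jacobian, damped Gauss-Newton updates of the posterior mean, and a Laplace (Gaussian) posterior whose precision is the curvature at the mode.

```python
import numpy as np

def variational_laplace(f, y, m0, P0, noise_prec, n_iter=64):
    """Illustrative Variational Laplace / Gauss-Newton sketch: prior
    N(m0, inv(P0)), fixed noise precision, backtracking line search."""
    m0 = np.asarray(m0, float)
    mu = m0.copy()
    d = len(mu)

    def neg_log_joint(th):              # -2 * log joint, up to a constant
        r = y - f(th)
        return noise_prec * r @ r + (th - m0) @ P0 @ (th - m0)

    for _ in range(n_iter):
        f0 = f(mu)
        eps = 1e-6                      # finite-difference Jacobian of f
        J = np.column_stack([(f(mu + eps * np.eye(d)[i]) - f0) / eps
                             for i in range(d)])
        H = noise_prec * (J.T @ J) + P0                 # posterior precision
        g = noise_prec * (J.T @ (y - f0)) - P0 @ (mu - m0)
        step = np.linalg.solve(H, g)
        alpha = 1.0                     # backtrack if the step overshoots
        while neg_log_joint(mu + alpha * step) > neg_log_joint(mu) and alpha > 1e-8:
            alpha *= 0.5
        mu = mu + alpha * step
    return mu, np.linalg.inv(H)         # posterior mean and covariance

t = np.linspace(0.0, 10.0, 50)
f = lambda th: th[0] * np.exp(-th[1] * t)   # toy nonlinear forward model
y = f(np.array([2.0, 0.5]))                 # noiseless synthetic data

mu, Sigma = variational_laplace(f, y,
                                m0=np.array([1.0, 1.0]),  # prior mean (wrong)
                                P0=np.eye(2) * 1e-6,      # near-flat prior
                                noise_prec=100.0)
```

Under a near-flat prior the posterior mean recovers the generating parameters; the full VL scheme additionally evaluates the free energy for model comparison and can estimate the noise precision itself.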

    Dynamic causal modelling of COVID-19 and its mitigations

    This technical report describes the dynamic causal modelling of mitigated epidemiological outcomes during the COVID-19 coronavirus outbreak in 2020. Dynamic causal modelling is a form of complex system modelling, which uses 'real world' timeseries to estimate the parameters of an underlying state space model using variational Bayesian procedures. Its key contribution, in an epidemiological setting, is to embed conventional models within a larger model of sociobehavioural responses, in a way that allows for (relatively assumption-free) forecasting. One advantage of using variational Bayes is that one can progressively optimise the model via Bayesian model selection: generally, the most likely models become more expressive as more data become available. This report summarises the model (as of 6-Nov-20), eight months after the inception of dynamic causal modelling for COVID-19. This model, and its subsequent updates, is used to provide nowcasts and forecasts of latent behavioural and epidemiological variables as an open science resource. The current report describes the underlying model structure and the rationale for the variational procedures that underwrite Bayesian model selection.
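
Once variational free energies are available for a set of candidate models, Bayesian model selection of the kind described reduces, under a flat prior over models, to a softmax of the log evidences. The numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical free energies (log evidence approximations) for three models
F = np.array([-120.0, -118.0, -125.0])

# Posterior model probabilities under a flat prior over models
p = np.exp(F - F.max())     # subtract the max for numerical stability
p /= p.sum()
```

Here a two-nat evidence difference already concentrates most of the posterior mass on the middle model, which is why progressively more expressive models are only adopted once the data support them.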

    A tutorial on group effective connectivity analysis, part 2: second level analysis with PEB

    This tutorial provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). This involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject's neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level; enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors. We walk through this approach in detail, using data from a published fMRI experiment that characterised individual differences in hemispheric lateralization in a semantic processing task. The preliminary subject-specific DCM analysis is covered in detail in a companion paper. This tutorial is accompanied by the example dataset and step-by-step instructions to reproduce the analyses.
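
The core of the second level can be caricatured as a precision-weighted GLM on subject-level posterior estimates. The toy sketch below uses made-up numbers and ignores the estimation of between-subject (random effects) variance that PEB also performs; it only conveys how subject certainty weights the group estimate:

```python
import numpy as np

# Hypothetical subject-level posterior means and precisions for one connection
theta = np.array([0.5, 0.7, 0.4, 0.9, 0.6])   # DCM parameter estimates
prec  = np.array([10.0, 4.0, 8.0, 2.0, 6.0])  # posterior precisions

# Group-level design: group mean plus an indicator for a second group
X = np.column_stack([np.ones(5), np.array([0, 0, 0, 1, 1])])
W = np.diag(prec)                              # weight subjects by certainty

# Precision-weighted least squares for the group effects
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta)
```

beta[0] is the (precision-weighted) mean of the first group and beta[1] the between-group difference; uncertain subjects pull the estimates less, which is the intuition behind propagating posterior covariance to the group level.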

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
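
The reason BMR permits rapid comparison is that, under Gaussian assumptions, the evidence of a model with a reduced prior follows in closed form from the full model's prior and posterior, with no refitting. The sketch below implements this generic Gaussian identity (it is not the SPM implementation, and the 1-D numbers are invented):

```python
import numpy as np

def bayesian_model_reduction(m0, P0, mu, P, m0r, P0r):
    """Change in log evidence when the prior N(m0, inv(P0)) of a fitted model
    with Gaussian posterior N(mu, inv(P)) is replaced by a reduced prior
    N(m0r, inv(P0r)). All second arguments are precision matrices."""
    Pr  = P + P0r - P0                    # reduced posterior precision
    mur = np.linalg.solve(Pr, P @ mu + P0r @ m0r - P0 @ m0)
    logdet = lambda A: np.linalg.slogdet(A)[1]
    dF = 0.5 * (logdet(P) + logdet(P0r) - logdet(P0) - logdet(Pr)) \
       + 0.5 * (mur @ Pr @ mur + m0 @ P0 @ m0
                - mu @ P @ mu - m0r @ P0r @ m0r)
    return dF, mur, Pr

# 1-D example with invented numbers: a connection estimated at 0.8 (sd ~0.32)
m0,  P0  = np.array([0.0]), np.array([[1.0]])     # full prior N(0, 1)
mu,  P   = np.array([0.8]), np.array([[10.0]])    # full posterior N(0.8, 0.1)
m0r, P0r = np.array([0.0]), np.array([[100.0]])   # reduced prior: pinned near 0

dF, mur, Pr = bayesian_model_reduction(m0, P0, mu, P, m0r, P0r)
```

Here dF is negative: switching off a clearly non-zero connection lowers the evidence, so the full architecture is retained. Scoring an alternative architecture costs one such evaluation rather than a full model inversion.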

    A review of fMRI simulation studies

    Simulation studies that validate statistical techniques for fMRI data are challenging due to the complexity of the data. It is therefore not surprising that no common data-generating process is available (i.e. several models can be found for BOLD activation and noise). Based on a literature search, a database of simulation studies was compiled. The information in this database was analysed and critically evaluated, focusing on the parameters in the simulation design, the model adopted to generate fMRI data, and how the simulation studies are reported. Our literature analysis demonstrates that many fMRI simulation studies do not report a thorough experimental design and almost consistently ignore crucial knowledge on how fMRI data are acquired. Advice is provided on how the quality of fMRI simulation studies can be improved.
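
To illustrate the kind of data-generating process the review surveys, one common (though, per the review, far from standardised) choice is to convolve a block design with a canonical double-gamma haemodynamic response function (HRF) and add noise. The shape parameters below are conventional SPM-style defaults, used here as assumptions, and the noise is simplistically white:

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t):
    """Canonical double-gamma HRF (positive lobe peaking ~5 s, late
    undershoot ~15 s; shape parameters 6, 16 and ratio 1/6 assumed)."""
    peak  = t**5  * np.exp(-t) / gamma(6)
    under = t**15 * np.exp(-t) / gamma(16)
    return peak - under / 6.0

dt = 0.1                                  # sampling interval in seconds
t = np.arange(0.0, 30.0, dt)
hrf = double_gamma_hrf(t)

# Block design: three cycles of 20 s rest followed by 20 s stimulation
design = np.tile(np.r_[np.zeros(200), np.ones(200)], 3)

# Noise-free BOLD prediction, then additive white noise
bold = np.convolve(design, hrf)[:len(design)] * dt
rng = np.random.default_rng(1)
noisy = bold + rng.normal(scale=0.05, size=bold.shape)
```

Real fMRI noise is temporally autocorrelated and spatially structured, which is precisely the acquisition knowledge the review finds many simulation studies ignore.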