152 research outputs found

    Comparing families of dynamic causal models

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection, in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family-level inference and (ii) Bayesian model averaging within families. Family-level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data.
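    The family-comparison and averaging steps can be sketched in a few lines. This is a minimal illustration, not SPM's implementation: the model names, log evidences, parameter estimates and the flat family prior below are invented, and the helper names are hypothetical.

```python
import math

def family_inference(log_evidences, families):
    """Posterior model and family probabilities from log evidences,
    assuming a flat prior over families (and flat within each family)."""
    n_fam = len(families)
    prior = {m: (1.0 / n_fam) / len(members)
             for fam, members in families.items() for m in members}
    # Softmax of log evidence + log prior, shifted for numerical stability
    log_post = {m: log_evidences[m] + math.log(prior[m]) for m in prior}
    z = max(log_post.values())
    w = {m: math.exp(lp - z) for m, lp in log_post.items()}
    s = sum(w.values())
    model_post = {m: v / s for m, v in w.items()}
    family_post = {fam: sum(model_post[m] for m in members)
                   for fam, members in families.items()}
    return model_post, family_post

def bma_within_family(model_post, members, estimates):
    """Average a parameter estimate over one family, weighting each
    member by its posterior probability renormalised within the family."""
    total = sum(model_post[m] for m in members)
    return sum(model_post[m] / total * estimates[m] for m in members)
```

    With, say, two "linear" models and one "nonlinear" model, `family_inference` yields the evidence for nonlinearity irrespective of which nonlinear variant wins, and `bma_within_family` gives a parameter estimate that does not depend on picking a single best model within the family.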

    Inferring on the intentions of others by hierarchical Bayesian learning

    Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to "player" or "adviser" roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of the advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition.
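    The core idea, that estimated volatility sets how quickly beliefs about advice accuracy are revised, can be caricatured in a few lines. This sketch is not the meta-Bayesian (hierarchical Gaussian filter) model used in the study: the update rules, the gain `kappa` and all constants are simplifications invented for illustration.

```python
def track_advice(outcomes, kappa=0.5, v0=0.1):
    """Toy volatility-gated learner: the belief about advice accuracy is
    updated with a learning rate that grows with estimated volatility."""
    p, vol = 0.5, v0          # belief in advice accuracy; volatility estimate
    beliefs = []
    for o in outcomes:        # o = 1 if the advice was correct, else 0
        pe = o - p                        # prediction error
        vol += kappa * (abs(pe) - vol)    # volatility tracks |prediction error|
        lr = vol / (vol + 1.0)            # higher volatility -> faster learning
        p += lr * pe                      # convex update keeps p in [0, 1]
        beliefs.append(p)
    return beliefs
```

    With a consistently helpful adviser the belief converges upward and volatility shrinks; when the adviser turns misleading, the surprise inflates the volatility estimate and the belief is revised much faster, mirroring the paper's qualitative finding.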

    Optimizing Experimental Design for Comparing Models of Brain Function

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work.
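    The notion of scoring a design by its expected model-selection error can be illustrated with a textbook special case rather than the paper's Laplace-Chernoff derivation: when two candidate models predict Gaussian data with equal variance and equal prior probability, the Chernoff (Bhattacharyya) bound on the selection error has a closed form. The example "designs" and numbers below are invented.

```python
import math

def selection_error_bound(mu1, mu2, sigma):
    """Chernoff-style upper bound on the probability of selecting the
    wrong model when the two models predict data as N(mu1, sigma^2) and
    N(mu2, sigma^2), with equal prior probabilities. For equal variances
    the optimal Chernoff exponent is s = 1/2 (the Bhattacharyya bound):
        P(error) <= 0.5 * exp(-(mu1 - mu2)^2 / (8 * sigma^2))
    """
    return 0.5 * math.exp(-((mu1 - mu2) ** 2) / (8.0 * sigma ** 2))

# A design is better when it drives the models' predictions apart:
risk_short = selection_error_bound(1.0, 1.2, 0.5)  # design yielding similar predictions
risk_long = selection_error_bound(1.0, 3.0, 0.5)   # design that separates the models
```

    Ranking candidate designs by such a bound, rather than by parameter-estimation efficiency, is the essence of optimizing a design for model comparison.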

    Computational neuroimaging strategies for single patient predictions

    Neuroimaging increasingly exploits machine learning techniques in an attempt to achieve clinically relevant single-subject predictions. An alternative to machine learning, which tries to establish predictive links between features of the observed data and clinical variables, is the deployment of computational models for inferring on the (patho)physiological and cognitive mechanisms that generate behavioural and neuroimaging responses. This paper discusses the rationale behind a computational approach to neuroimaging-based single-subject inference, focusing on its potential for characterising disease mechanisms in individual subjects and mapping these characterisations to clinical predictions. Following an overview of two main approaches – Bayesian model selection and generative embedding – which can link computational models to individual predictions, we review how these methods accommodate heterogeneity in psychiatric and neurological spectrum disorders, help avoid erroneous interpretations of neuroimaging data, and establish a link between a mechanistic, model-based approach and the statistical perspectives afforded by machine learning.

    Second waves, social distancing, and the spread of COVID-19 across the USA [version 2; peer review: 2 approved with reservations]

    We recently described a dynamic causal model of a COVID-19 outbreak within a single region. Here, we combine several instantiations of this (epidemic) model to create a (pandemic) model of viral spread among regions. Our focus is on a second wave of new cases that may result from loss of immunity—and the exchange of people between regions—and how mortality rates can be ameliorated under different strategic responses. In particular, we consider hard or soft social distancing strategies predicated on national (Federal) or regional (State) estimates of the prevalence of infection in the population. The modelling is demonstrated using timeseries of new cases and deaths from the United States to estimate the parameters of a factorial (compartmental) epidemiological model of each State and, crucially, coupling between States. Using Bayesian model reduction, we identify the effective connectivity between States that best explains the initial phases of the outbreak in the United States. Using the ensuing posterior parameter estimates, we then evaluate the likely outcomes of different policies in terms of mortality, working days lost due to lockdown and demands upon critical care. The provisional results of this modelling suggest that social distancing and loss of immunity are the two key factors that underwrite a return to endemic equilibrium.
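    A toy analogue of the coupling idea: two SIR regions linked by a single mixing parameter, where the second region is seeded only through its coupling to the first. The paper's model is a far richer factorial (compartmental) model fitted with Bayesian model reduction; the rates and coupling strength below are invented for illustration and `coupled_sir` is a hypothetical helper.

```python
def coupled_sir(beta, gamma, k, days, dt=0.1):
    """Two-region SIR model with cross-region mixing k: the force of
    infection in each region blends local and remote prevalence.
    Region B starts uninfected and is seeded only via coupling to A.
    Returns the infectious fractions (I_A, I_B) over time."""
    S = [0.999, 1.0]
    I = [0.001, 0.0]
    R = [0.0, 0.0]
    history = []
    for _ in range(int(days / dt)):
        lam = [beta * ((1 - k) * I[i] + k * I[1 - i]) for i in (0, 1)]
        dS = [-lam[i] * S[i] for i in (0, 1)]
        dI = [lam[i] * S[i] - gamma * I[i] for i in (0, 1)]
        dR = [gamma * I[i] for i in (0, 1)]
        for i in (0, 1):  # forward-Euler update
            S[i] += dt * dS[i]
            I[i] += dt * dI[i]
            R[i] += dt * dR[i]
        history.append((I[0], I[1]))
    return history
```

    With k = 0 the second region never sees an outbreak; any positive coupling lets the epidemic jump regions, which is why estimating the coupling ("effective connectivity between States") matters for policy evaluation.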

    Second waves, social distancing, and the spread of COVID-19 across the USA [version 3; peer review: 1 approved, 1 approved with reservations]


    Dynamic causal modelling of COVID-19 [version 1; peer review: 1 approved, 1 not approved]

    This technical report describes a dynamic causal model of the spread of coronavirus through a population. The model is based upon ensemble or population dynamics that generate outcomes, like new cases and deaths over time. The purpose of this model is to quantify the uncertainty that attends predictions of relevant outcomes. By assuming suitable conditional dependencies, one can model the effects of interventions (e.g., social distancing) and differences among populations (e.g., herd immunity) to predict what might happen in different circumstances. Technically, this model leverages state-of-the-art variational (Bayesian) model inversion and comparison procedures, originally developed to characterise the responses of neuronal ensembles to perturbations. Here, this modelling is applied to epidemiological populations to illustrate the kind of inferences that are supported and how the model per se can be optimised given timeseries data. Although the purpose of this paper is to describe a modelling protocol, the results illustrate some interesting perspectives on the current pandemic; for example, the nonlinear effects of herd immunity that speak to a self-organised mitigation process.
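    The model-comparison step this line of work relies on (Bayesian model reduction, named in the companion abstracts) has a convenient closed form for Gaussian beliefs: a reduced prior can be scored directly from the posterior obtained under the full prior, without refitting the model. Below is a univariate sketch of that standard Gaussian identity; the function name is hypothetical and the sketch is not the paper's code.

```python
import math

def bmr_1d(m, c, m0, c0, m0r, c0r):
    """Univariate Bayesian model reduction: change in log evidence when a
    Gaussian prior N(m0, c0) is replaced by a reduced prior N(m0r, c0r),
    given the posterior N(m, c) obtained under the full prior."""
    p, p0, p0r = 1 / c, 1 / c0, 1 / c0r      # precisions
    pr = p + p0r - p0                        # reduced posterior precision
    cr = 1 / pr
    mr = cr * (p * m + p0r * m0r - p0 * m0)  # reduced posterior mean
    dF = 0.5 * math.log((cr * c0) / (c * c0r)) + 0.5 * (
        pr * mr * mr - p * m * m - p0r * m0r * m0r + p0 * m0 * m0)
    return dF, mr, cr
```

    If the reduced prior equals the full prior, the change in log evidence is exactly zero; otherwise the sign and size of dF say whether the restriction (e.g., switching off a connection) is supported by the data.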

    Testing and tracking in the UK: A dynamic causal modelling study [version 2; peer review: 1 approved with reservations, 1 not approved]

    By equipping a previously reported dynamic causal model of COVID-19 with an isolation state, we were able to model the effects of self-isolation consequent on testing and tracking. Specifically, we included a quarantine or isolation state occupied by people who believe they might be infected but are asymptomatic—and could only leave if they test negative. We recovered maximum a posteriori estimates of the model parameters using time series of new cases, daily deaths, and tests for the UK. These parameters were used to simulate the trajectory of the outbreak in the UK over an 18-month period. Several clear-cut conclusions emerged from these simulations. For example, under plausible (graded) relaxations of social distancing, a rebound of infections is highly unlikely. The emergence of a second wave depends almost exclusively on the rate at which we lose immunity, inherited from the first wave. There exists no testing strategy that can attenuate mortality rates, other than by deferring or delaying a second wave. A testing and tracking policy—implemented at the present time—will defer any second wave beyond a time horizon of 18 months. Crucially, this deferment is within current testing capabilities (requiring an efficacy of tracing and tracking of about 20% of asymptomatic infected cases, with 50,000 tests per day). These conclusions are based upon a dynamic causal model for which we provide some construct and face validation—using a comparative analysis of the United Kingdom and Germany, supplemented with recent serological studies.
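    The effect of an isolation state can be caricatured by adding a quarantine compartment to a plain SIR model: infectious people are removed from circulation at a tracing rate `tau` and no longer transmit. This is a toy analogue of the paper's isolation state, with invented rates, not its dynamic causal model.

```python
def siqr(beta, gamma, tau, days, dt=0.1):
    """SIR model extended with a quarantine state Q: infectious people
    are isolated at rate tau (tested and traced) and stop transmitting.
    Returns the peak infectious fraction and the total ever infected."""
    S, I, Q, R = 0.999, 0.001, 0.0, 0.0
    peak = 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I            # only non-isolated I transmit
        dS = -new_inf
        dI = new_inf - (gamma + tau) * I  # recovery plus isolation outflow
        dQ = tau * I - gamma * Q
        dR = gamma * (I + Q)
        S += dt * dS
        I += dt * dI
        Q += dt * dQ
        R += dt * dR
        peak = max(peak, I)
    return peak, R
```

    Raising `tau` lowers the effective reproduction ratio from beta/gamma to beta/(gamma + tau), which flattens the peak and shrinks the total attack rate, the qualitative mechanism behind a test-and-trace policy.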

    Effective immunity and second waves: a dynamic causal modelling study [version 1; peer review: 1 approved, 1 not approved]

    This technical report addresses a pressing issue in the trajectory of the coronavirus outbreak; namely, the rate at which effective immunity is lost following the first wave of the pandemic. This is a crucial epidemiological parameter that speaks to both the consequences of relaxing lockdown and the propensity for a second wave of infections. Using a dynamic causal model of reported cases and deaths from multiple countries, we evaluated the evidence for models assuming progressively longer periods of immunity. The results speak to an effective population immunity of about three months that, under the model, defers any second wave for approximately six months in most countries. This may have implications for the window of opportunity for tracking and tracing, as well as for developing vaccination programmes, and other therapeutic interventions.
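    The link between waning immunity and a second wave can be illustrated with a plain SIRS model, in which recovered individuals return to the susceptible pool at rate `waning` (the reciprocal of the mean immune period in days). This is a toy stand-in for the paper's dynamic causal model; all rates below are invented.

```python
def sirs_second_wave(beta, gamma, waning, days, dt=0.1):
    """SIRS model: immunity is lost at rate `waning`, so recovered people
    re-enter the susceptible pool and can fuel further waves.
    Returns the infectious fraction over time."""
    S, I, R = 0.999, 0.001, 0.0
    series = []
    for _ in range(int(days / dt)):
        dS = -beta * S * I + waning * R   # waning immunity replenishes S
        dI = beta * S * I - gamma * I
        dR = gamma * I - waning * R
        S += dt * dS
        I += dt * dI
        R += dt * dR
        series.append(I)
    return series
```

    With permanent immunity the first wave exhausts the susceptible pool and infections die out; with a roughly three-month immune period the susceptible pool refills and infections resurge toward an endemic level, the behaviour the abstract describes.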

    Testing and tracking in the UK: A dynamic causal modelling study [version 1; peer review: 1 approved with reservations, 1 not approved]
