
    Calibration of ionic and cellular cardiac electrophysiology models

    Cardiac electrophysiology models are among the most mature and well-studied mathematical models of biological systems. This maturity is bringing new challenges as models are increasingly used to make quantitative rather than qualitative predictions. As such, calibrating the parameters within ion current and action potential (AP) models to experimental data sets is a crucial step in constructing a predictive model. This review highlights some of the fundamental concepts in cardiac model calibration and is intended to be readily understood by computational and mathematical modelers working in other fields of biology. We discuss the classic and latest approaches to calibration in the electrophysiology field, at both the ion channel and cellular AP scales. We end with a discussion of the many challenges that work to date has raised and the need for reproducible descriptions of the calibration process, to enable models to be recalibrated to new data sets and built upon for new studies. This article is categorized under: Analytical and Computational Methods > Computational Methods; Physiology > Mammalian Physiology in Health and Disease; Models of Systems Properties and Processes > Cellular Models
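    As a concrete illustration of the ion-channel-scale calibration the review discusses, the sketch below fits the parameters of a toy conductance model to synthetic voltage-clamp data by least squares. The model form, parameter values, and noise level are illustrative assumptions, not taken from the review.

```python
# A minimal calibration sketch, assuming a toy conductance model
# I(V) = g * m_inf(V) * (V - E_rev) with a Boltzmann activation curve.
# All names and values are illustrative.
import numpy as np
from scipy.optimize import least_squares

E_REV = -85.0  # reversal potential (mV), fixed for the sketch

def model_current(params, v):
    g, v_half, k = params
    m_inf = 1.0 / (1.0 + np.exp(-(v - v_half) / k))  # steady-state activation
    return g * m_inf * (v - E_REV)

# Synthetic "experimental" data: true parameters plus measurement noise.
rng = np.random.default_rng(1)
v_steps = np.arange(-60.0, 41.0, 10.0)       # voltage-clamp steps (mV)
true_params = (0.3, -20.0, 8.0)              # g, V_half (mV), k (mV)
data = model_current(true_params, v_steps) + rng.normal(0.0, 0.05, v_steps.size)

# Calibrate by minimising the residual between model output and data.
fit = least_squares(lambda p: model_current(p, v_steps) - data,
                    x0=(1.0, 0.0, 5.0))
print("estimated (g, V_half, k):", fit.x)
```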

    Considering discrepancy when calibrating a mechanistic electrophysiology model

    Uncertainty quantification (UQ) is a vital step in using mathematical models and simulations to take decisions. The field of cardiac simulation has begun to explore and adopt UQ methods to characterize uncertainty in model inputs and how that propagates through to outputs or predictions; examples of this can be seen in the papers of this issue. In this review and perspective piece, we draw attention to an important and under-addressed source of uncertainty in our predictions—that of uncertainty in the model structure or the equations themselves. The difference between imperfect models and reality is termed model discrepancy, and we are often uncertain as to the size and consequences of this discrepancy. Here, we provide two examples of the consequences of discrepancy when calibrating models at the ion channel and action potential scales. Furthermore, we attempt to account for this discrepancy when calibrating and validating an ion channel model using different methods, based on modelling the discrepancy using Gaussian processes and autoregressive-moving-average models, then highlight the advantages and shortcomings of each approach. Finally, suggestions and lines of enquiry for future work are provided. This article is part of the theme issue ‘Uncertainty quantification in cardiac and cardiovascular modelling and simulation’
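    To make the discrepancy idea concrete, here is a minimal sketch (our illustration, not the paper's code) of the Gaussian-process approach: a GP is fitted to the residuals between an imperfect model and observations, then predicts the discrepancy, with uncertainty, at new inputs. The "model" and "truth" functions are toy assumptions.

```python
# A minimal sketch of modelling discrepancy with a Gaussian process:
# fit a GP to the residuals between an imperfect model and observations,
# then predict the discrepancy (with uncertainty) at new inputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)[:, None]          # observation times
truth = np.sin(t).ravel() + 0.3 * t.ravel()      # "reality" (toy)
model = np.sin(t).ravel()                        # imperfect model (misses the drift)
obs = truth + rng.normal(0.0, 0.05, t.shape[0])  # noisy observations

residuals = obs - model                          # what the model cannot explain
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t, residuals)

t_new = np.linspace(0.0, 12.0, 25)[:, None]
disc_mean, disc_std = gp.predict(t_new, return_std=True)
# Discrepancy-corrected prediction = model output at t_new + disc_mean,
# with disc_std quantifying uncertainty about the correction.
```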

    Bayesian inference and prediction in cardiac electrophysiology models with an application to representing variability

    Many different techniques have been used for parameter estimation in cardiac electrophysiology models, from optimization algorithms to heuristic and frequentist statistical methods. However, the fixed parameter values obtained from such approaches cannot provide a complete description of variability within an individual or across a population. To overcome this shortcoming, in this work we adopt a Bayesian approach by applying the Hamiltonian Monte Carlo (HMC) algorithm to cardiac electrophysiology models and data for the first time, through three studies. (i) Using HMC, we fit synthetic and experimental cardiac voltage data from different pacing rates and find the probability distributions of the parameters of two relatively low-dimensional models, the Mitchell-Schaeffer (MS) and Fenton-Karma (FK) models. We successfully fit synthetic and experimental voltage traces and build populations of action potentials with the posterior probability distributions of the parameters. (ii) We compare the performance of HMC with that of the main Bayesian approach used previously for similar applications, the Approximate Bayesian Computation Sequential Monte Carlo (ABC SMC) algorithm. Both techniques are able to describe the dynamics of synthetic and experimental voltage data using the MS and FK models, with HMC more consistent and ABC SMC more versatile and easier to implement. (iii) We study the variability of cardiac action potentials in space within an individual. We use HMC together with a novel approach employing a Gaussian process prior for one spatially varying MS model parameter, along with a hierarchical model for the remaining parameters, which are considered spatially invariant. Using this approach, we perform inference and prediction on synthetic cardiac voltage data, exploiting the spatial correlations in cardiac tissue that arise from cellular coupling so that voltage information from a small number of sites can be used to predict parameter value distributions and families of voltage data at other locations. Together these three studies show the potential of Bayesian inference and prediction to provide a framework for representing variability within cardiac electrophysiology modeling
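    The core HMC update is easy to sketch. The toy example below samples a two-dimensional Gaussian posterior with a hand-written leapfrog integrator; in the cardiac setting the log-posterior would instead wrap an ODE solver for the MS or FK model, and all tuning values here are illustrative.

```python
# A minimal sketch of Hamiltonian Monte Carlo on a toy 2-D Gaussian
# posterior; real applications replace log_post / grad_log_post with
# the model's (differentiable) log-posterior.
import numpy as np

rng = np.random.default_rng(42)

def log_post(q):                       # toy log-posterior (standard normal)
    return -0.5 * np.sum(q ** 2)

def grad_log_post(q):
    return -q

def hmc_step(q, step=0.1, n_leap=20):
    p = rng.normal(size=q.shape)                      # sample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(q_new)        # initial half step
    for _ in range(n_leap - 1):
        q_new += step * p_new                         # full position step
        p_new += step * grad_log_post(q_new)          # full momentum step
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_post(q_new)        # final half step
    # Metropolis accept/reject on the Hamiltonian.
    h_old = -log_post(q) + 0.5 * np.sum(p ** 2)
    h_new = -log_post(q_new) + 0.5 * np.sum(p_new ** 2)
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q, samples = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)                  # posterior draws after warm-up
```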

    Employing Gaussian process priors for studying spatial variation in the parameters of a cardiac action potential model

    Cardiac cells exhibit variability in the shape and duration of their action potentials in space within a single individual. To create a mathematical model of cardiac action potentials (AP) that captures this spatial variability and also allows for rigorous uncertainty quantification of the within-tissue spatial correlation structure, we developed a novel hierarchical Bayesian model making use of a latent Gaussian process prior on the parameters of a simplified cardiac AP model, which maps forcing behavior to observed voltage signals. This model allows for prediction of cardiac electrophysiological dynamics at new points in space, as well as reconstruction of surface electrical dynamics from a relatively small number of spatial observation points. Furthermore, we make use of Markov chain Monte Carlo methods via the Stan modeling framework for parameter estimation. We employ a synthetic data case study oriented around the reconstruction of a sparsely observed spatial parameter surface to highlight how this approach can be used for spatial or spatiotemporal analyses of cardiac electrophysiology
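    The essential mechanism, conditioning a Gaussian-process prior on a few observed sites to predict a spatially varying parameter elsewhere, can be sketched in a few lines. The 1-D tissue coordinate, kernel, and length-scale below are illustrative assumptions, not the paper's actual Stan model.

```python
# A minimal sketch of GP conditioning: reconstruct a spatially varying
# parameter surface from sparse sites, with pointwise uncertainty.
import numpy as np

def k_se(xa, xb, ell=2.0, var=1.0):
    d = xa[:, None] - xb[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)      # squared-exponential kernel

x_obs = np.array([0.5, 2.0, 4.5, 7.0, 9.5])        # sparse observation sites
theta_obs = np.array([0.30, 0.34, 0.25, 0.28, 0.36])  # parameter values there
x_new = np.linspace(0.0, 10.0, 101)                # prediction sites

K = k_se(x_obs, x_obs) + 1e-4 * np.eye(x_obs.size) # jitter for stability
K_s = k_se(x_new, x_obs)
alpha = np.linalg.solve(K, theta_obs - theta_obs.mean())
mean = theta_obs.mean() + K_s @ alpha              # posterior mean surface
cov = k_se(x_new, x_new) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # pointwise uncertainty
```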

    Identification of parameters describing phenomenological cardiac action potential models using sigma-point methods

    Phenomenological action potential (AP) models make it possible to reproduce characteristic features of cardiomyocytes' electrical activity without fully describing the underlying biophysics, which makes them very useful for whole-heart electrophysiological simulations. Methods to identify the parameter values of phenomenological models commonly attempt to reproduce specific AP properties rather than the whole AP waveform. In this work we propose the use of a sequential estimation approach based on sigma-point filters to estimate such parameters. The proposed methodology was tested by estimating the parameters of the phenomenological Bueno-Cherry-Fenton model to replicate APs generated by in silico models as well as experimentally measured APs. With the new method, whole AP waveforms are reproduced more accurately than with previous parameter-fitting methods, and the AP duration restitution curves are in better agreement with available experimental data
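    At the heart of sigma-point methods is the unscented transform, sketched below with standard textbook scaling parameters. This is a generic illustration of the technique the paper builds on, not the paper's estimator.

```python
# A minimal sketch of the unscented transform: propagate a small set of
# deterministically chosen sigma points through a nonlinearity and
# recover the transformed mean and covariance.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))          # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    y = np.array([f(p) for p in pts])                 # f must return a 1-D array
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Example: push a 2-D Gaussian through a mildly nonlinear map.
m, P = unscented_transform(np.zeros(2), np.eye(2),
                           lambda x: np.array([np.sin(x[0]), x[0] * x[1]]))
```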

    Modelling discrepancy in Bayesian calibration of reservoir models

    Simulation models of physical systems such as oil field reservoirs are subject to numerous uncertainties, such as observation errors and inaccurate initial and boundary conditions. However, even after accounting for these uncertainties, a mismatch between the simulator output and the observations usually remains and the model is still inadequate. This inability of computer models to reproduce real-life processes is referred to as model inadequacy. This thesis presents a comprehensive framework for modelling discrepancy in the Bayesian calibration and probabilistic forecasting of reservoir models. The framework efficiently implements data-driven approaches to handle the uncertainty caused by ignoring model discrepancy in reservoir predictions, using two major hierarchical strategies: parametric and non-parametric hierarchical models. The central focus of this thesis is on an appropriate way of modelling discrepancy, and on the importance of model selection in controlling overfitting, rather than on different solutions for different noise models. The thesis employs a model selection code to obtain the best candidate solutions for the form of non-parametric error models. This enables us, first, to interpolate the error over the history period and, second, to propagate it to unseen data (i.e. error generalisation). The error models, constructed by inferring the parameters of the selected models, can predict the response variable (e.g. oil rate) at any point in input space (e.g. time) with a corresponding generalisation uncertainty. In real field applications, the error models reliably track the uncertainty regardless of the type of sampling method and achieve a better model prediction score than models that ignore discrepancy. All the case studies confirm that the prediction of field variables is enhanced when the discrepancy is modelled. As for the model parameters, hierarchical error models exhibit less global bias with respect to the reference case. However, in the considered case studies, the evidence for better prediction of each of the model parameters by error modelling is inconclusive
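    One simple instance of such a data-driven error model is a time-series fit to the history-period residuals, which can then be extrapolated beyond the history period with its own uncertainty. The sketch below uses a synthetic residual series and an illustrative ARMA order; it is not the thesis's model selection code.

```python
# A minimal sketch of an error model: fit an ARMA model to the
# history-period mismatch between simulator output and observations,
# then propagate (generalise) the error forward with uncertainty bands.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
# Synthetic history-period residuals: observed rate minus simulated rate.
e = np.zeros(120)
for t in range(1, 120):
    e[t] = 0.7 * e[t - 1] + rng.normal(0.0, 0.1)

res = ARIMA(e, order=(2, 0, 1)).fit()    # order (2, 0, 1) is illustrative
fc = res.get_forecast(steps=24)          # error forecast beyond history
mean_err = fc.predicted_mean             # expected future discrepancy
ci = fc.conf_int()                       # generalisation uncertainty
# Corrected forecast = simulator prediction + mean_err, with ci giving bands.
```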

    Computational psychiatry: from synapses to sentience

    This review considers computational psychiatry from a particular viewpoint: namely, a commitment to explaining psychopathology in terms of pathophysiology. It rests on the notion of a generative model as underwriting (i) sentient processing in the brain, and (ii) the scientific process in psychiatry. The story starts with a view of the brain, from cognitive and computational neuroscience, as an organ of inference and prediction. This offers a formal description of neuronal message passing, distributed processing and belief propagation in neuronal networks, and of how certain kinds of dysconnection lead to aberrant belief updating and false inference. The dysconnections in question can be read as a pernicious synaptopathy that fits comfortably with formal notions of how we, or our brains, encode uncertainty or its complement, precision. It then considers how the ensuing process theories are tested empirically, with an emphasis on the computational modelling of neuronal circuits and the synaptic gain control that mediates attentional set, active inference, learning and planning. The opportunities afforded by this sort of modelling are considered in light of in silico experiments; namely, computational neuropsychology, computational phenotyping and the promise of a computational nosology for psychiatry. The resulting survey of computational approaches is not scholarly or exhaustive. Rather, its aim is to review a theoretical narrative that is emerging across subdisciplines within psychiatry and empirical scales of investigation. These range from epilepsy research to neurodegenerative disorders; from post-traumatic stress disorder to the management of chronic pain; and from schizophrenia to functional medical symptoms
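    The precision-weighted belief updating that the review ties to synaptic gain can be illustrated with a one-line Gaussian update. This toy example is ours, not the review's, and the numbers are arbitrary.

```python
# A minimal sketch of precision-weighted belief updating: a Gaussian
# prior belief is revised by a prediction error scaled by the relative
# precision of the evidence.
prior_mean, prior_prec = 0.0, 1.0        # belief about a hidden cause
obs, obs_prec = 2.0, 4.0                 # sensory sample and its precision

post_prec = prior_prec + obs_prec
# The prediction error (obs - prior_mean) is weighted by how precise the
# evidence is believed to be; aberrant precision (synaptic gain) distorts
# this update and hence inference.
post_mean = prior_mean + (obs_prec / post_prec) * (obs - prior_mean)
print(post_mean, post_prec)              # 1.6, 5.0
```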

    Recognising high-level agent behaviour through observations in data scarce domains

    This thesis presents a novel method for performing multi-agent behaviour recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable (e.g. surveillance, defence). Human behaviours are composed from sequences of underlying activities that can be used as salient features. We do not assume that the exact temporal ordering of such features is necessary, and so can represent behaviours using an unordered “bag-of-features”. A weak temporal ordering is imposed during inference to match behaviours to observations, replacing the learnt model parameters used by competing methods. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao-Blackwellised Particle Filter. Behaviours are recognised at multiple levels of abstraction and can contain a mixture of solo and multi-agent behaviour. We validate our framework using the PETS 2006 video surveillance dataset and our own video sequences, in addition to a large corpus of simulated data. We achieve a mean recognition precision of 96.4% on the simulated data and 89.3% on the combined video data. Our “bag-of-features” framework is able to detect when behaviours terminate, accurately explains agent behaviour despite significant quantities of low-level classification errors in the input, and can even detect agents who change their behaviour
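    The inference machinery belongs to the particle-filter family. The sketch below shows a plain bootstrap particle filter on a toy 1-D tracking problem, to convey the predict-reweight-resample cycle that Rao-Blackwellised variants build on; the model and noise levels are illustrative, not the thesis's behaviour model.

```python
# A minimal bootstrap particle filter sketch: maintain weighted hypotheses
# about a hidden state, reweight by observation likelihood, and resample.
import numpy as np

rng = np.random.default_rng(3)
N = 500
particles = rng.normal(0.0, 1.0, N)      # hypotheses about the hidden state
weights = np.full(N, 1.0 / N)

def step(particles, weights, obs, proc_std=0.5, obs_std=1.0):
    particles = particles + rng.normal(0.0, proc_std, N)     # predict
    lik = np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)  # reweight
    weights = weights * lik
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)                   # resample
    return particles[idx], np.full(N, 1.0 / N)

for obs in [0.2, 0.7, 1.1, 1.6]:                             # toy observations
    particles, weights = step(particles, weights, obs)
print("state estimate:", particles.mean())
```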

    Model-based analyses of choice and eye movement data

    How do humans make simple preferential decisions, like deciding what to have for breakfast at a hotel buffet? In contrast to the assumption of normative accounts of decision making, humans' preferences are often not stable but constructed at the time of choice and contingent on the decision maker's interaction with the environment. Recent evidence suggests that the allocation of visual attention during deliberation is closely linked to subsequent choices, so that alternatives that are looked at longer are generally more likely to be chosen. Prior work has characterized the processes underlying simple decisions in terms of evidence accumulation over time, where momentary rates of accumulation depend on the decision maker's allocation of gaze, and a decision is made when the accumulated evidence reaches a threshold. However, the generalisability of gaze-dependent accumulation remains unclear in several regards: it is not established how well gaze-dependent evidence accumulation describes individual decision makers' behaviour, or to what extent the association between visual attention and choice varies between individuals. In addition, it is unclear to what extent the theory applies to behaviour in contexts where choices deviate more substantially from normative predictions. Finally, it remains debated whether visual attention causally influences or rather reflects the construction of preferences. In this thesis, I address these questions across three empirical studies using computational models of the decision process. In Study 1 (Thomas et al., 2019; Molter et al., 2019), we developed a novel gaze-dependent evidence accumulation model that allowed investigation of choice processes on the individual level, and published a corresponding Python toolbox to facilitate its use by others. Using this new tool, we demonstrated that gaze-dependent evidence accumulation accurately captures individuals' choice and response time data and their associations with gaze allocation across four simple choice data sets. Our analysis revealed, however, that individuals differed strongly in the degree to which choices and gaze allocation were linked, and that this variability was associated with individual differences in choice consistency. In Study 2 (Molter et al., 2021), we tested the gaze-dependent evidence accumulation framework in a multi-alternative, multi-attribute task involving choices between three risky gambles. The task was designed to elicit context effects in choice behaviour that challenge normative choice theories; these context effects describe preference changes that depend on the set of available alternatives. We found not only choices but also decision makers' gaze allocation to be modulated by context, allowing a gaze-dependent evidence accumulation model derived from prior work to generalise to this more complex scenario. In our preregistered Study 3 (Molter & Mohr, 2021), we finally addressed the causal direction of the association between visual attention and choice. Participants made repeated choices between two risky gambles whose attributes were presented sequentially, allowing experimental control of the stimuli's presentation duration and order. Our results confirmed a causal influence of information search on preference construction. However, we identified presentation order, not duration, as the influencing factor, as alternatives presented last were chosen more frequently. Notably, such causal order effects are predicted by only some gaze-dependent evidence accumulation models, highlighting potential for future theory development. Overall, the studies confirmed positive associations between visual attention and choice and provided support for gaze-dependent evidence accumulation theories on the individual level and in more complex choice scenarios. However, they also revealed large individual differences and possible limitations of current computational models of decision making. We showed that accounting for those differences and implementing additional mechanisms, such as accumulation leak to predict acquisition order effects, substantially improves the prediction of individual choice behaviour. Finally, I discuss these results on the active role of visual attention in the decision process, and the theoretical model of gaze-dependent evidence accumulation, in the broader context of constructed preferences, and outline implications for the model-based analysis of choice and eye movement data
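    The gaze-dependent accumulation mechanism at the centre of the thesis can be sketched as follows. This is a generic aDDM-style simulation with illustrative parameters, not the published toolbox.

```python
# A minimal sketch of gaze-dependent evidence accumulation: while gaze is
# on one item, evidence for the unattended item is discounted by gamma;
# a choice is made when accumulated evidence reaches a threshold.
import numpy as np

rng = np.random.default_rng(11)

def simulate_trial(v_left, v_right, gamma=0.3, drift=0.002,
                   noise=0.02, threshold=1.0):
    x, t = 0.0, 0
    gaze_left = rng.uniform() < 0.5            # initial fixation
    while abs(x) < threshold:
        t += 1
        if rng.uniform() < 0.005:              # occasional gaze switch
            gaze_left = not gaze_left
        wl = 1.0 if gaze_left else gamma       # discount the unattended item
        wr = gamma if gaze_left else 1.0
        x += drift * (wl * v_left - wr * v_right) + rng.normal(0.0, noise)
    return ("left" if x > 0 else "right"), t   # choice and response time

choices = [simulate_trial(5.0, 4.0)[0] for _ in range(1000)]
print("P(choose left):", choices.count("left") / 1000)
```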