120 research outputs found

    Autocorrelated measurement processes and inference for ordinary differential equation models of biological systems

    Ordinary differential equation models are used to describe dynamic processes across biology. To perform likelihood-based parameter inference on these models, it is necessary to specify a statistical process representing the contribution of factors not explicitly included in the mathematical model. For this, independent Gaussian noise is commonly chosen, with its use so widespread that researchers typically provide no explicit justification for this choice. This noise model assumes that 'random' latent factors affect the system in an ephemeral fashion, resulting in unsystematic deviations of observables from their modelled counterparts. However, like the deterministically modelled parts of a system, these latent factors can have persistent effects on observables. Here, we use experimental data from dynamical systems drawn from cardiac physiology and electrochemistry to demonstrate that highly persistent differences between observations and modelled quantities can occur. Considering the case when persistent noise arises due only to measurement imperfections, we use the Fisher information matrix to quantify how uncertainty in parameter estimates is artificially reduced when independent noise is erroneously assumed. We present a workflow to diagnose persistent noise from model fits and describe how to remodel it, accounting for correlated errors.
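    The contrast between the two noise assumptions can be made concrete with the corresponding log-likelihoods of the residuals between data and the ODE solution. The sketch below is a minimal illustration, assuming a toy AR(1) error process for the persistent case (function and variable names are illustrative, not the paper's code): under independent noise every residual is scored on its own, while under AR(1) noise each residual is scored against the previous one, so a long run of correlated deviations is not treated as many independent pieces of evidence.

```python
import numpy as np
from scipy.stats import norm

def loglik_iid(residuals, sigma):
    # independent Gaussian noise: every residual counts as a separate observation
    return norm.logpdf(residuals, loc=0.0, scale=sigma).sum()

def loglik_ar1(residuals, sigma, rho):
    # AR(1) noise with stationary standard deviation sigma:
    # e_t | e_{t-1} ~ N(rho * e_{t-1}, sigma^2 * (1 - rho^2))
    e = np.asarray(residuals)
    ll = norm.logpdf(e[0], loc=0.0, scale=sigma)  # stationary initial condition
    cond_sd = sigma * np.sqrt(1.0 - rho ** 2)
    ll += norm.logpdf(e[1:], loc=rho * e[:-1], scale=cond_sd).sum()
    return ll

# example: a slowly drifting (persistent) deviation from the ODE solution
residuals = 0.3 * np.sin(np.linspace(0, 2 * np.pi, 200))
print(loglik_iid(residuals, sigma=0.2), loglik_ar1(residuals, sigma=0.2, rho=0.95))
```

    Comparing the two likelihoods on the same persistent residuals illustrates how the independent-noise model can overstate the information content of the data and hence artificially shrink parameter uncertainty.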

    Considering discrepancy when calibrating a mechanistic electrophysiology model

    Uncertainty quantification (UQ) is a vital step in using mathematical models and simulations to make decisions. The field of cardiac simulation has begun to explore and adopt UQ methods to characterize uncertainty in model inputs and how that propagates through to outputs or predictions; examples of this can be seen in the papers of this issue. In this review and perspective piece, we draw attention to an important and under-addressed source of uncertainty in our predictions: uncertainty in the model structure or the equations themselves. The difference between imperfect models and reality is termed model discrepancy, and we are often uncertain as to the size and consequences of this discrepancy. Here, we provide two examples of the consequences of discrepancy when calibrating models at the ion channel and action potential scales. Furthermore, we attempt to account for this discrepancy when calibrating and validating an ion channel model using different methods, based on modelling the discrepancy using Gaussian processes and autoregressive-moving-average models, and then highlight the advantages and shortcomings of each approach. Finally, suggestions and lines of enquiry for future work are provided. This article is part of the theme issue ‘Uncertainty quantification in cardiac and cardiovascular modelling and simulation’.
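    As a rough sketch of the autoregressive-moving-average route mentioned above (not the paper's implementation), the residuals of a calibrated model can be fitted with an ARMA process using statsmodels; the order (2, 2) and the synthetic residuals below are placeholders.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# placeholder residuals between recorded data and the calibrated model,
# generated here as a persistent (autocorrelated) series
rng = np.random.default_rng(0)
residuals = np.zeros(500)
for t in range(1, 500):
    residuals[t] = 0.9 * residuals[t - 1] + 0.05 * rng.standard_normal()

# ARMA(2, 2) is ARIMA with no differencing; the order is only a placeholder
arma_fit = ARIMA(residuals, order=(2, 0, 2)).fit()

# one-step-ahead discrepancy forecasts that could be added to the model output
discrepancy_forecast = arma_fit.forecast(steps=10)
print(arma_fit.params, discrepancy_forecast)
```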

    Recognising, Representing and Mapping Natural Features in Unstructured Environments

    This thesis addresses the problem of building statistical models for multi-sensor perception in unstructured outdoor environments. The perception problem is divided into three distinct tasks: recognition, representation and association. Recognition is cast as a statistical classification problem where inputs are images or a combination of images and ranging information. Given the complexity and variability of natural environments, this thesis investigates the use of Bayesian statistics and supervised dimensionality reduction to incorporate prior information and fuse sensory data. A compact probabilistic representation of natural objects is essential for many problems in field robotics. This thesis presents techniques for combining non-linear dimensionality reduction with parametric learning through Expectation Maximisation to build general representations of natural features. Once created, these models need to be processed rapidly to account for incoming information. To this end, techniques for efficient probabilistic inference are proposed. The robustness of localisation and mapping algorithms is directly related to reliable data association. Conventional algorithms employ only geometric information, which can become inconsistent over large trajectories. A new data association algorithm incorporating visual and geometric information is proposed to improve the reliability of this task. The method uses a compact probabilistic representation of objects to fuse visual and geometric information for the association decision. The main contributions of this thesis are: 1) a stochastic representation of objects through non-linear dimensionality reduction; 2) a landmark recognition system using visual and ranging sensors; 3) a data association algorithm combining appearance and position properties; 4) a real-time algorithm for the detection and segmentation of natural objects from a few training images; and 5) a real-time place recognition system combining dimensionality reduction and Bayesian learning. The theoretical contributions of this thesis are demonstrated with a series of experiments in unstructured environments. In particular, the combination of recognition, representation and association algorithms is applied to the Simultaneous Localisation and Mapping problem (SLAM) to close large loops in outdoor trajectories, demonstrating the benefits of the proposed methodology.
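    The combination of non-linear dimensionality reduction with parametric learning via Expectation Maximisation can be sketched, purely as an illustration of the idea rather than the thesis's implementation, with off-the-shelf scikit-learn components: an Isomap embedding followed by a Gaussian mixture fitted by EM.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# placeholder feature vectors extracted from images / range data
rng = np.random.default_rng(0)
X = rng.random((500, 64))

# non-linear dimensionality reduction to a low-dimensional latent space
Z = Isomap(n_components=3).fit_transform(X)

# parametric model of the latent space learned with Expectation Maximisation
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(Z)

# log-likelihood of observations under the compact probabilistic representation
scores = gmm.score_samples(Z[:10])
print(scores)
```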

    Probabilistic inductive constraint logic

    Probabilistic logical models deal effectively with uncertain relations and entities typical of many real-world domains. In the field of probabilistic logic programming, the aim is usually to learn these kinds of models to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it might also be useful to learn classifiers for interpretations as a whole: to this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and we propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns a probability of being positive to interpretations. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL for "probabilistic inductive constraint logic". Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and a system that builds relational decision trees (TILDE): we demonstrate that this system achieves better or comparable results in terms of area under the precision–recall and receiver operating characteristic curves, in a comparable execution time.
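    One natural reading of such a model, taken here only as an assumption for illustration, is that each violated grounding of a constraint independently "vetoes" the interpretation with that constraint's probability, so that P(positive | I) = prod_i (1 - p_i)^(m_i), with m_i the number of violated groundings of constraint i in I. Under that assumption, the constraint probabilities can be learned by maximum likelihood with L-BFGS, as in the hypothetical sketch below; this is not the PASCAL implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# violations[j, i] = number of groundings of constraint i violated by interpretation j
violations = np.array([[0, 2], [1, 0], [3, 1], [0, 0]], dtype=float)
labels = np.array([1.0, 1.0, 0.0, 1.0])  # 1 = positive interpretation, 0 = negative

def neg_log_likelihood(theta):
    p = expit(theta)                       # constraint probabilities kept in (0, 1)
    log_pos = violations @ np.log1p(-p)    # log prod_i (1 - p_i)^(m_ji)
    pos = np.exp(log_pos)
    eps = 1e-12
    return -np.sum(labels * log_pos + (1.0 - labels) * np.log(1.0 - pos + eps))

result = minimize(neg_log_likelihood, x0=np.zeros(violations.shape[1]), method="L-BFGS-B")
learned_p = expit(result.x)
print(learned_p)
```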

    Rapid Characterization of hERG Channel Kinetics I: Using an Automated High-Throughput System

    Predicting how pharmaceuticals may affect heart rhythm is a crucial step in drug development, and requires a deep understanding of a compound’s action on ion channels. In vitro hERG-channel current recordings are an important step in evaluating the pro-arrhythmic potential of small molecules, and are now routinely performed using automated high-throughput patch clamp platforms. These machines can execute traditional voltage clamp protocols aimed at specific gating processes, but the array of protocols needed to fully characterise a current is typically too long to be applied in a single cell. Shorter high-information protocols have recently been introduced that have this capability, but they are not typically compatible with high-throughput platforms. We present a new 15-second protocol to characterise hERG (Kv11.1) kinetics, suitable for both manual and high-throughput systems. We demonstrate its use on the Nanion SyncroPatch 384PE, a 384-well automated patch clamp platform, by applying it to CHO cells stably expressing hERG1a. From these recordings we construct 124 cell-specific variants/parameterisations of a hERG model at 25 °C. A further 8 independent protocols are run in each cell and are used to validate the model predictions. We then combine the experimental recordings using a hierarchical Bayesian model, which we use to quantify the uncertainty in the model parameters and their variability from cell to cell, and to suggest reasons for that variability. This study demonstrates a robust method to measure and quantify uncertainty, and shows that it is possible and practical to use high-throughput systems to capture full hERG channel kinetics quantitatively and rapidly.
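    The model structure commonly used in this line of work is a two-gate Hodgkin-Huxley-style description of hERG with voltage-dependent transition rates of exponential form; the sketch below states that structure with illustrative parameter values and names, and should be read as an assumption about the general form rather than the paper's exact code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def herg_rhs(t, y, params, voltage):
    """Two-gate Hodgkin-Huxley-style hERG model: open probability = a * r."""
    a, r = y                                  # activation and recovery gates
    p1, p2, p3, p4, p5, p6, p7, p8 = params   # illustrative kinetic parameters
    V = voltage(t)                            # clamped membrane voltage (mV)
    k1 = p1 * np.exp(p2 * V)                  # activation
    k2 = p3 * np.exp(-p4 * V)                 # deactivation
    k3 = p5 * np.exp(p6 * V)                  # inactivation
    k4 = p7 * np.exp(-p8 * V)                 # recovery from inactivation
    dadt = k1 * (1.0 - a) - k2 * a
    drdt = k4 * (1.0 - r) - k3 * r
    return [dadt, drdt]

def i_kr(a, r, V, g, E_K=-85.0):
    # I_Kr = g * a * r * (V - E_K); the conductance g sets the cell-specific scale
    return g * a * r * (V - E_K)

# example: simulate a step to +20 mV from a holding potential of -80 mV
step = lambda t: -80.0 if t < 1.0 else 20.0
params = [2e-4, 0.07, 3e-5, 0.05, 0.1, 0.01, 0.01, 0.03]  # made-up values
sol = solve_ivp(herg_rhs, (0.0, 5.0), [0.0, 1.0], args=(params, step), max_step=0.01)
```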

    Model-driven optimal experimental design for calibrating cardiac electrophysiology models

    Background and Objective: Models of the cardiomyocyte action potential have contributed immensely to the understanding of heart function, pathophysiology, and the origin of heart rhythm disturbances. However, action potential models are highly nonlinear, making them difficult to parameterise and limiting them to describing ‘average cell’ dynamics, when cell-specific models would be ideal for uncovering inter-cell variability but are too experimentally challenging to achieve. Here, we focus on automatically designing experimental protocols that allow us to better identify cell-specific maximum conductance values for each major current type. Methods and Results: We developed an approach that applies optimal experimental design to patch-clamp experiments, including both voltage-clamp and current-clamp experiments. We assessed the models calibrated to these new optimal designs by comparing them to models calibrated to some of the commonly used designs in the literature. We showed that the optimal designs are not only shorter in overall duration but also perform better than many of the existing experimental designs in terms of identifying model parameters and hence model predictive power. Conclusions: For cardiac cellular electrophysiology, this approach will allow researchers to define their hypothesis of the dynamics of the system and automatically design experimental protocols that result in theoretically optimal designs.
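    A standard way to score a candidate protocol in this setting is through the Fisher information assembled from output sensitivities to the parameters; the snippet below implements a simple D-optimality-style score (maximise the log-determinant of S^T S), offered as an illustration of the general idea rather than the paper's exact design criterion.

```python
import numpy as np

def d_optimality_score(sensitivities, sigma=1.0):
    """Log-determinant of the Fisher information S^T S / sigma^2.

    sensitivities: array of shape (n_timepoints, n_parameters) holding the
    derivatives of the model output with respect to each parameter along a
    candidate protocol. Larger scores indicate better parameter identifiability.
    """
    S = np.asarray(sensitivities)
    fisher = S.T @ S / sigma ** 2
    sign, logdet = np.linalg.slogdet(fisher)
    return logdet if sign > 0 else -np.inf

# a protocol would then be chosen (or optimised) to maximise this score, e.g.
# best = max(candidate_protocols, key=lambda prot: d_optimality_score(sens(prot)))
```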

    Accommodating maintenance in prognostics

    [Error on title page: the year of award is 2021.]
    Steam turbines are an important asset of nuclear power plants, and are required to operate reliably and efficiently. Unplanned outages have a significant impact on the ability of the plant to generate electricity. Therefore, condition-based maintenance (CBM) can be used for predictive and proactive maintenance to avoid unplanned outages while reducing operating costs and increasing the reliability and availability of the plant. In CBM, the information gathered can be interpreted for prognostics (the prediction of failure time or remaining useful life (RUL)). The aim of this project was to address two areas of challenge in prognostics, the selection of predictive technique and the accommodation of post-maintenance effects, to improve the efficacy of prognostics. The selection of an appropriate predictive algorithm is a key activity for the effective development of prognostics. In this research, a formal approach for the evaluation and selection of predictive techniques is developed to facilitate a methodical selection process by engineering experts. This approach is then implemented for a case study provided by the engineering experts. As a result of this formal evaluation, a probabilistic technique, Bayesian Linear Regression (BLR), and a non-probabilistic technique, Support Vector Regression (SVR), were selected for prognostics implementation. In this project, the knowledge of prognostics implementation is extended by including post-maintenance effects in prognostics. Maintenance aims to restore a machine to a state in which it is safe and reliable to operate while recovering the health of the machine. However, such activities introduce uncertainties into predictions through deviations in the degradation model, affecting the accuracy and efficacy of those predictions. Such vulnerabilities must therefore be addressed by incorporating information from maintenance events for accurate and reliable predictions. This thesis presents two frameworks, adapted for probabilistic and non-probabilistic prognostic techniques, to accommodate maintenance. Two case studies are used for the implementation and validation of the frameworks: a real-world case study from a nuclear power plant in the UK, and a synthetic case study generated based on the characteristics of the real-world case study. The results of the implementation hold promise for predicting remaining useful life while accommodating maintenance repairs, ensuring increased asset availability with higher reliability, maintenance cost-effectiveness and operational safety.
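    As a rough sketch of the two selected technique families (not the thesis's implementation), scikit-learn's BayesianRidge and SVR can both be trained to map condition-monitoring features to remaining useful life, with the Bayesian model additionally returning predictive uncertainty.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.svm import SVR

# placeholder condition-monitoring features X and remaining useful life y
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = 1000.0 - 800.0 * X[:, 0] + 10.0 * rng.standard_normal(200)

blr = BayesianRidge().fit(X, y)
rul_mean, rul_std = blr.predict(X[:5], return_std=True)  # mean and uncertainty

svr = SVR(kernel="rbf", C=10.0).fit(X, y)
rul_point = svr.predict(X[:5])                           # point predictions only
print(rul_mean, rul_std, rul_point)
```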

    Rapid Characterization of hERG Channel Kinetics II: Temperature Dependence

    Ion channel behavior can depend strongly on temperature, with faster kinetics at physiological temperatures leading to considerable changes in currents relative to room temperature. These temperature-dependent changes in voltage-dependent ion channel kinetics (rates of opening, closing, inactivating, and recovery) are commonly represented with Q10 coefficients or an Eyring relationship. In this article, we assess the validity of these representations by characterizing channel kinetics at multiple temperatures. We focus on the human Ether-à-go-go-Related Gene (hERG) channel, which is important in drug safety assessment and commonly screened at room temperature, so that results require extrapolation to physiological temperature. In Part I of this study, we established a reliable method for high-throughput characterization of hERG1a (Kv11.1) kinetics, using a 15-second information-rich optimized protocol. In this Part II, we use this protocol to study the temperature dependence of hERG kinetics using Chinese hamster ovary cells overexpressing hERG1a on the Nanion SyncroPatch 384PE, a 384-well automated patch-clamp platform with temperature control. We characterize the temperature dependence of hERG gating by fitting the parameters of a mathematical model of hERG kinetics to data obtained at five distinct temperatures between 25 and 37°C, and validate the models using different protocols. Our models reveal that activation is far more temperature sensitive than inactivation, and we observe that the temperature dependency of the kinetic parameters is not represented well by Q10 coefficients; it broadly follows a generalized, but not the standardly used, Eyring relationship. We also demonstrate that experimental estimations of Q10 coefficients are protocol dependent. Our results show that a direct fit using our 15-second protocol best represents hERG kinetics at any given temperature, and suggest that using the generalized Eyring theory is preferable if no experimental data are available to derive model parameters at a given temperature.
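    For reference, the two temperature-dependence representations being compared can be written down directly. The snippet below states the textbook Q10 scaling and the standard Eyring rate; the "generalized" Eyring relationship referred to in the article is a more flexible variant of this standard form, and the constants and default values here are illustrative only.

```python
import numpy as np

R = 8.314                      # gas constant, J / (mol K)
KB_OVER_H = 2.0836619e10       # Boltzmann / Planck constant ratio, 1 / (K s)

def q10_rate(k_ref, T, T_ref=298.15, q10=2.0):
    # Q10 scaling: the rate is multiplied by q10 for every 10 K rise above T_ref
    return k_ref * q10 ** ((T - T_ref) / 10.0)

def eyring_rate(T, delta_H, delta_S):
    # standard Eyring form: k = (k_B T / h) * exp(-delta_H / (R T) + delta_S / R)
    return KB_OVER_H * T * np.exp(-delta_H / (R * T) + delta_S / R)

# e.g. extrapolating a rate measured at room temperature to body temperature
print(q10_rate(k_ref=1.0, T=310.15, T_ref=298.15, q10=2.0))
```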