
    Adaptive Finite Element Simulation of Currents at Microelectrodes to a Guaranteed Accuracy. Application to Channel Microband Electrodes.

    We extend our earlier work (see K. Harriman et al., Technical Report NA99/19) on adaptive finite element methods for disc electrodes to the case of reaction mechanisms at the increasingly popular channel microband electrode configuration. We use the standard Galerkin finite element method for the diffusion-dominated (low-flow) case, and the streamline diffusion finite element method for the convection-dominated (high-flow) case. We first consider the simple E reaction mechanism (convection-diffusion equation) and demonstrate excellent agreement with previous approximate analytical results across the range of parameters of interest, on comparatively coarse meshes. We then consider ECE and EC2E reaction mechanisms (linear and nonlinear systems of reaction-convection-diffusion equations, respectively); again we are able to demonstrate excellent agreement with previous results. The authors are pleased to acknowledge the following financial support: a research studentship for KH, and a Career Development Fellowship from the Medical Research Council for DJG, which has allowed them to undertake this research.
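    The contrast between the two discretisations above is easiest to see in one dimension. Below is a minimal, self-contained sketch (not the paper's 2-D adaptive channel solver) for the steady convection-diffusion problem $-Du'' + vu' = 0$: plain Galerkin oscillates once the element Péclet number exceeds one, whereas streamline-diffusion (SUPG) stabilisation does not. All parameter values are illustrative.

```python
# Minimal 1-D sketch contrasting standard Galerkin FEM with
# streamline-diffusion (SUPG) stabilisation for -D u'' + v u' = 0,
# u(0) = 0, u(1) = 1, on a uniform mesh of linear elements.
import numpy as np

def solve_conv_diff(n_elem, D, v, supg=False):
    h = 1.0 / n_elem
    n = n_elem + 1
    A = np.zeros((n, n))
    Pe = abs(v) * h / (2.0 * D)                      # element Peclet number
    # Classic "optimal" 1-D stabilisation parameter (zero => plain Galerkin).
    tau = (h / (2 * abs(v))) * (1 / np.tanh(Pe) - 1 / Pe) if supg else 0.0
    # Element diffusion (augmented by tau*v^2 along the streamline) and convection.
    K = (D + tau * v**2) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    C = v / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    for e in range(n_elem):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += K + C
    b = np.zeros(n)
    # Dirichlet boundary conditions u(0) = 0, u(1) = 1.
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0
    return np.linalg.solve(A, b)

# Convection-dominated case (element Peclet number = 25).
u_gal = solve_conv_diff(20, D=1e-3, v=1.0)
u_sd = solve_conv_diff(20, D=1e-3, v=1.0, supg=True)
print("min of Galerkin solution:", u_gal.min())   # < 0 => spurious oscillations
print("min of SUPG solution:    ", u_sd.min())    # ~ 0 => stabilised
```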

    Separating the effects of experimental noise from inherent system variability in voltammetry: the $[\mathrm{Fe(CN)}_6]^{3-/4-}$ process

    Recently, we have introduced the use of techniques drawn from Bayesian statistics to recover kinetic and thermodynamic parameters from voltammetric data, and were able to show that the technique of large amplitude ac voltammetry yielded significantly more accurate parameter values than the equivalent dc approach. In this paper we build on this work to show that this approach allows us, for the first time, to separate the effects of random experimental noise and inherent system variability in voltammetric experiments. We analyse ten repeated experimental data sets for the $[\mathrm{Fe(CN)}_6]^{3-/4-}$ process, again using large-amplitude ac cyclic voltammetry. In each of the ten cases we are able to obtain an extremely good fit to the experimental data and obtain very narrow distributions of the recovered parameters governing both the faradaic terms (the reversible formal faradaic potential, $E_0$, the standard heterogeneous charge transfer rate constant, $k_0$, and the charge transfer coefficient, $\alpha$) and the non-faradaic terms (uncompensated resistance, $R_u$, and double layer capacitance, $C_{dl}$). We then employ hierarchical Bayesian methods to recover the underlying "hyperdistribution" of the faradaic and non-faradaic parameters, showing that in general the variation between the experimental data sets is significantly greater than suggested by individual experiments, except for $\alpha$, where the inter-experiment variation was relatively minor. Correlations between pairs of parameters are provided and, for example, reveal a weak link between $k_0$ and $C_{dl}$ (the surface activity of a glassy carbon electrode surface). Finally, we discuss the implications of our findings for voltammetric experiments more generally.
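    As a rough illustration of the hierarchical approach, the sketch below fits a "hyperdistribution" over a per-experiment rate constant with PyMC on synthetic data; the model structure, priors, and variable names are illustrative assumptions, not the paper's actual voltammetric model.

```python
# Minimal sketch (not the paper's model) of a hierarchical prior for a rate
# constant k0 estimated from N repeated experiments, using PyMC.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
N = 10                                    # repeated experiments
true_log_k0 = rng.normal(-2.0, 0.3, N)    # synthetic inter-experiment spread
obs = true_log_k0 + rng.normal(0, 0.05, (20, N))  # noisy per-experiment estimates

with pm.Model():
    # Hyperdistribution: experiments share a common mean and spread.
    mu = pm.Normal("mu_log_k0", mu=0.0, sigma=5.0)
    sigma = pm.HalfNormal("sigma_log_k0", sigma=1.0)   # inter-experiment variability
    log_k0 = pm.Normal("log_k0", mu=mu, sigma=sigma, shape=N)
    noise = pm.HalfNormal("noise", sigma=1.0)          # experimental noise
    pm.Normal("obs", mu=log_k0, sigma=noise, observed=obs)
    trace = pm.sample(1000, tune=1000)

# Inter-experiment spread vs experimental noise, the paper's central contrast.
print(float(trace.posterior["sigma_log_k0"].mean()),
      float(trace.posterior["noise"].mean()))
```

    The quantity of interest is the comparison between the inter-experiment spread (sigma_log_k0) and the observation noise, which mirrors the paper's separation of inherent system variability from experimental noise.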

    Validity of the Cauchy-Born rule applied to discrete cellular-scale models of biological tissues.

    The development of new models of biological tissues that consider cells in a discrete manner is becoming increasingly popular as an alternative to continuum methods based on partial differential equations, although formal relationships between the discrete and continuum frameworks remain to be established. For crystal mechanics, the discrete-to-continuum bridge is often made by assuming that local atom displacements can be mapped homogeneously from the mesoscale deformation gradient, an assumption known as the Cauchy-Born rule (CBR). Although the CBR does not hold exactly for noncrystalline materials, it may still be used as a first-order approximation for analytic calculations of effective stresses or strain energies. In this work, our goal is to investigate numerically the applicability of the CBR to two-dimensional cellular-scale models by assessing the mechanical behavior of model biological tissues, including crystalline (honeycomb) and noncrystalline reference states. The numerical procedure involves applying an affine deformation to the boundary cells and computing the quasistatic position of internal cells. The position of internal cells is then compared with the prediction of the CBR and an average deviation is calculated in the strain domain. For center-based cell models, we show that the CBR holds exactly when the deformation gradient is relatively small and the reference stress-free configuration is defined by a honeycomb lattice. We show further that the CBR may be used approximately when the reference state is perturbed from the honeycomb configuration. By contrast, for vertex-based cell models, a similar analysis reveals that the CBR does not provide a good representation of the tissue mechanics, even when the reference configuration is defined by a honeycomb lattice. The paper concludes with a discussion of the implications of these results for concurrent discrete and continuous modeling, adaptation of atom-to-continuum techniques to biological tissues, and model classification.
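    The Cauchy-Born rule itself is simply the homogeneous mapping $x = FX$ of reference positions $X$ by the deformation gradient $F$. Below is a minimal NumPy sketch of the deviation measure described above, with stand-in data in place of the relaxed internal cell positions that a real model would compute quasistatically from cell-cell forces.

```python
# Minimal sketch of the Cauchy-Born comparison: apply an affine map F to
# reference cell-centre positions and measure the deviation of (here,
# synthetic stand-in) relaxed positions from the prediction x = F X.
import numpy as np

def cbr_deviation(X_ref, x_relaxed, F):
    """Mean distance between relaxed positions and the CBR prediction F @ X."""
    x_cbr = X_ref @ F.T                  # homogeneous (Cauchy-Born) mapping
    return np.mean(np.linalg.norm(x_relaxed - x_cbr, axis=1))

# Small simple shear as an example deformation gradient.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])

X = np.random.default_rng(0).random((50, 2))     # stand-in reference positions
# Stand-in for relaxed positions: CBR prediction plus a small perturbation.
x = X @ F.T + 0.01 * np.random.default_rng(1).standard_normal((50, 2))
print("average CBR deviation:", cbr_deviation(X, x, F))
```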

    Hierarchical Bayesian inference for ion channel screening dose-response data

    Dose-response (or 'concentration-effect') relationships commonly occur in biological and pharmacological systems and are well characterised by Hill curves. These curves are described by an equation with two parameters: the inhibitory concentration 50% (IC50) and the Hill coefficient. Typically just the 'best fit' parameter values are reported in the literature. Here we introduce a Python-based software tool, PyHillFit, and describe the underlying Bayesian inference methods that it uses, to infer probability distributions for these parameters as well as the level of experimental observation noise. The tool also allows for hierarchical fitting, characterising the effect of inter-experiment variability. We demonstrate the use of the tool on a recently published dataset on multiple ion channel inhibition by multiple drug compounds. We compare the maximum likelihood, Bayesian, and hierarchical Bayesian approaches. We then show how uncertainty in dose-response inputs can be characterised and propagated into a cardiac action potential simulation to give a probability distribution on model outputs.
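    For reference, the Hill curve discussed above can be written as $\mathrm{block}(c) = c^n / (c^n + \mathrm{IC50}^n)$. The sketch below fits its two parameters by maximum likelihood (least squares) with SciPy on synthetic data, the non-Bayesian baseline against which PyHillFit's Bayesian and hierarchical fits are compared; all values are illustrative.

```python
# Minimal sketch of a Hill-curve fit on synthetic dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    """Fractional block of an ion channel at drug concentration `conc`."""
    return conc**n / (conc**n + ic50**n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # uM (synthetic)
block = hill(conc, ic50=2.0, n=1.2) \
    + 0.02 * np.random.default_rng(0).standard_normal(6)

(ic50_fit, n_fit), cov = curve_fit(hill, conc, block, p0=[1.0, 1.0])
print(f"IC50 = {ic50_fit:.2f} uM, Hill coefficient = {n_fit:.2f}")
```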

    Early afterdepolarisation tendency as a simulated pro-arrhythmic risk indicator

    Drug-induced Torsades de Pointes (TdP) arrhythmia is of major interest in predictive toxicology. Drugs which cause TdP block the hERG cardiac potassium channel. However, not all drugs that block hERG cause TdP. As such, further understanding of the mechanistic route to TdP is needed. Early afterdepolarisations (EADs) are a cell-level phenomenon in which the membrane of a cardiac cell depolarises a second time before repolarisation, and EADs are seen in hearts during TdP. Therefore, we propose a method of predicting TdP using induced EADs combined with multiple ion channel block in simulations using biophysically-based mathematical models of human ventricular cell electrophysiology. EADs were induced in cardiac action potential models using interventions based on diseases that are known to cause EADs, including: increasing the conductance of the L-type calcium channel, decreasing the conductance of the hERG channel, and shifting the inactivation curve of the fast sodium channel. The threshold of intervention required to cause an EAD was used to classify drugs into clinical risk categories. The metric based on L-type calcium-induced EADs was the most accurate of the EAD metrics at classifying drugs into the correct risk categories, and increased in accuracy when combined with action potential duration measurements. The EAD metrics were all more accurate than hERG block alone, but not as predictive as simpler measures such as simulated action potential duration. This may be because different routes to EADs represent risk well for different patient subgroups, something that is difficult to assess at present.
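    The threshold idea above can be sketched as a bisection on the scale factor applied to an intervention, for example the L-type calcium conductance. Here `causes_ead` is a hypothetical stand-in for running an action potential simulation and detecting a secondary depolarisation; the numbers are illustrative.

```python
# Minimal sketch: bisect on the intervention scale factor until an EAD
# first appears; the smallest such factor is the risk metric.
def causes_ead(scale_factor):
    # Placeholder: a real implementation would run a ventricular cell model
    # (e.g. via Chaste or Myokit) and test for a second upstroke. Here we
    # pretend EADs appear once the conductance is more than tripled.
    return scale_factor > 3.0

def ead_threshold(lo=1.0, hi=100.0, tol=1e-3):
    """Smallest intervention scale factor that induces an EAD (bisection)."""
    if not causes_ead(hi):
        return float("inf")              # no EAD within the tested range
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if causes_ead(mid):
            hi = mid                     # EAD occurs: threshold is below mid
        else:
            lo = mid                     # no EAD yet: threshold is above mid
    return hi

print("EAD threshold scale factor:", ead_threshold())
```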

    Simulating clinical trials for model-informed precision dosing: using warfarin treatment as a use case

    Treatment response variability across patients is a common phenomenon in clinical practice. For many drugs this inter-individual variability does not require much (if any) individualisation of dosing strategies. However, for some drugs, including chemotherapies and some monoclonal antibody treatments, individualisation of dosages is needed to avoid harmful adverse events. Model-informed precision dosing (MIPD) is an emerging approach to guide the individualisation of dosing regimens of otherwise difficult-to-administer drugs. Several MIPD approaches have been suggested to predict dosing strategies, including regression, reinforcement learning (RL), and pharmacokinetic and pharmacodynamic (PKPD) modelling. A unified framework to study the strengths and limitations of these approaches is missing. We develop a framework to simulate clinical MIPD trials, providing a cost- and time-efficient way to test different MIPD approaches. Central to our framework is a clinical trial model that emulates the complexities in clinical practice that challenge successful treatment individualisation. We demonstrate this framework using warfarin treatment as a use case and investigate three popular MIPD methods: (1) neural network regression; (2) deep RL; and (3) PKPD modelling. We find that the PKPD model individualises warfarin dosing regimens with the highest success rate and the highest efficiency: 75.1% of the individuals display INRs inside the therapeutic range at the end of the simulated trial, and the median time in the therapeutic range (TTR) is 74%. In comparison, the regression model and the deep RL model have success rates of 47.0% and 65.8%, and median TTRs of 45% and 68%. We also find that the MIPD models can attain different degrees of individualisation: the regression model individualises dosing regimens only up to the variability explained by covariates, while the deep RL model and the PKPD model also account for additional variation using monitoring data. However, the deep RL model focusses on control of the treatment response, while the PKPD model also uses the data to further individualise its predictions.
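    As a rough illustration of the trial's headline metric, the sketch below computes the time in therapeutic range (TTR) from a series of INR measurements by linear interpolation between measurement times (one common convention; the paper's exact definition may differ, and the data here are synthetic).

```python
# Minimal sketch: fraction of the observation window with interpolated INR
# inside the therapeutic range [low, high].
import numpy as np

def time_in_range(times, inr, low=2.0, high=3.0):
    """Fraction of time the linearly interpolated INR lies in [low, high]."""
    t_fine = np.linspace(times[0], times[-1], 10_000)
    inr_fine = np.interp(t_fine, times, inr)
    in_range = (inr_fine >= low) & (inr_fine <= high)
    return in_range.mean()

days = np.array([0, 3, 7, 14, 21, 28], dtype=float)   # measurement days
inr = np.array([1.1, 1.8, 2.4, 2.9, 3.2, 2.6])        # synthetic INR values
print(f"TTR = {100 * time_in_range(days, inr):.0f}%")
```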

    Gaussian process emulation for discontinuous response surfaces with applications for cardiac electrophysiology models

    Mathematical models of biological systems are beginning to be used for safety-critical applications, where large numbers of repeated model evaluations are required to perform uncertainty quantification and sensitivity analysis. Most of these models are nonlinear in both variables and parameters/inputs, which has two consequences. First, analytic solutions are rarely available, so repeated evaluation of these models by numerically solving differential equations incurs a significant computational burden. Second, many models undergo bifurcations in behaviour as parameters are varied. As a result, simulation outputs often contain discontinuities as we change parameter values and move through parameter/input space. Statistical emulators such as Gaussian processes are frequently used to reduce the computational cost of uncertainty quantification, but discontinuities render a standard Gaussian process emulation approach unsuitable, as these emulators assume a smooth and continuous response to changes in parameter values. In this article, we propose a novel two-step method for building a Gaussian process emulator for models with discontinuous response surfaces. We first use a Gaussian process classifier to detect the boundaries of discontinuities and then constrain the Gaussian process emulation of the response surface within these boundaries. We introduce a novel 'certainty metric' to guide active learning for a multi-class probabilistic classifier. We apply the new classifier to simulations of drug action on a cardiac electrophysiology model, to propagate our uncertainty in a drug's action through to predictions of changes to the cardiac action potential. The proposed two-step active learning method significantly reduces the computational cost of emulating models that undergo multiple bifurcations.
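    A minimal one-dimensional sketch of the two-step idea with scikit-learn: a Gaussian process classifier separates the regions on either side of a discontinuity, and a separate regressor is fitted per region so the jump is never smoothed over. The toy response surface, kernels, and labels are illustrative assumptions, not the paper's cardiac application.

```python
# Minimal sketch: GP classifier finds the discontinuity, then per-region
# GP regressors emulate the response surface without crossing the jump.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 60).reshape(-1, 1)
# Toy discontinuous response surface with a jump at x = 0.5.
y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.cos(6 * X[:, 0]))

# Step 1: classify which side of the discontinuity each point lies on
# (labels known here for the toy problem; inferred from simulations in practice).
labels = (X[:, 0] >= 0.5).astype(int)
clf = GaussianProcessClassifier(kernel=1.0 * RBF(0.1)).fit(X, labels)

# Step 2: one GP regressor per region, so the jump is never interpolated.
regs = {c: GaussianProcessRegressor(kernel=1.0 * RBF(0.1))
        .fit(X[labels == c], y[labels == c]) for c in (0, 1)}

X_new = np.array([[0.25], [0.75]])
pred = [regs[c].predict(x.reshape(1, -1))[0]
        for c, x in zip(clf.predict(X_new), X_new)]
print(pred)
```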

    Filter inference: a scalable nonlinear mixed effects inference approach for snapshot time series data

    Variability is an intrinsic property of biological systems and is often at the heart of their complex behaviour. Examples range from cell-to-cell variability in cell signalling pathways to variability in the response to treatment across patients. A popular approach to model and understand this variability is nonlinear mixed effects (NLME) modelling. However, estimating the parameters of NLME models from measurements quickly becomes computationally expensive as the number of measured individuals grows, making NLME inference intractable for datasets with thousands of measured individuals. This shortcoming is particularly limiting for snapshot datasets, common in cell biology, for example, where high-throughput measurement techniques provide large numbers of single-cell measurements. We introduce a novel approach for the estimation of NLME model parameters from snapshot measurements, which we call filter inference. Filter inference uses measurements of simulated individuals to define an approximate likelihood for the model parameters, avoiding the computational limitations of traditional NLME inference approaches and making efficient inference from snapshot measurements possible. Filter inference also scales well with the number of model parameters, using state-of-the-art gradient-based MCMC algorithms such as the No-U-Turn Sampler (NUTS). We demonstrate the properties of filter inference using examples from early cancer growth modelling and from epidermal growth factor signalling pathway modelling.
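    A minimal sketch of the filter-inference idea under strong simplifying assumptions: simulate many individuals from candidate population parameters, summarise the simulated snapshot distribution with a Gaussian "filter", and score the observed snapshot data against it. The forward model below is a trivial stand-in for a real NLME model, and the paper's filters are more general.

```python
# Minimal sketch of an approximate (filter) likelihood built from the
# snapshot distribution of simulated individuals.
import numpy as np
from scipy.stats import norm

def simulate_snapshot(pop_mean, pop_sd, n_sim, rng):
    # Stand-in NLME forward model: individual parameters drawn from the
    # population distribution; the observable is the parameter itself.
    return rng.normal(pop_mean, pop_sd, n_sim)

def filter_log_likelihood(data, pop_mean, pop_sd, n_sim=1000, seed=0):
    sims = simulate_snapshot(pop_mean, pop_sd, n_sim, np.random.default_rng(seed))
    # Gaussian filter: approximate the snapshot distribution by its moments.
    return norm(sims.mean(), sims.std()).logpdf(data).sum()

data = np.random.default_rng(42).normal(1.0, 0.5, 5000)   # synthetic snapshot
print(filter_log_likelihood(data, pop_mean=1.0, pop_sd=0.5))   # good fit
print(filter_log_likelihood(data, pop_mean=2.0, pop_sd=0.5))   # worse fit
```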