Exponential integrators for a Markov chain model of the fast sodium channel of cardiomyocytes
Modern Markov chain models of ionic channels in excitable membranes are
numerically stiff. Popular numerical methods for these models require very
small time steps to ensure stability. Our objective is to formulate and test
two methods addressing this issue, so that the time step can be chosen based on
accuracy rather than stability.
Both proposed methods extend the Rush-Larsen technique, which was originally
developed for Hodgkin-Huxley type gate models. One method, "Matrix Rush-Larsen"
(MRL), uses a matrix reformulation of the Rush-Larsen scheme, where the matrix
exponentials are calculated using precomputed tables of eigenvalues and
eigenvectors. The other, "hybrid operator splitting" (HOS) method exploits
asymptotic properties of a particular Markov chain model, allowing explicit
analytical expressions for the substeps.
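The MRL update has a simple generic form. Below is a minimal sketch in the spirit of the description above (not the authors' code), assuming a state-probability vector p, a user-supplied function A_of_V returning the voltage-dependent transition-rate matrix, and a diagonalizable matrix; the helper names tabulate_eig and mrl_step are illustrative.

```python
import numpy as np

def tabulate_eig(A_of_V, v_grid):
    """Precompute eigenvalues/eigenvectors of the transition-rate matrix A(V)
    on a grid of transmembrane voltages (illustrative helper)."""
    tables = []
    for v in v_grid:
        lam, Q = np.linalg.eig(A_of_V(v))
        tables.append((lam, Q, np.linalg.inv(Q)))
    return tables

def mrl_step(p, v, dt, v_grid, tables):
    """One 'Matrix Rush-Larsen'-style step for dp/dt = A(V) p:
    p(t+dt) = Q exp(L*dt) Q^{-1} p(t), with A(V) frozen during the step and its
    eigendecomposition looked up at the nearest tabulated voltage."""
    idx = int(np.argmin(np.abs(v_grid - v)))
    lam, Q, Qinv = tables[idx]
    return np.real(Q @ (np.exp(lam * dt) * (Qinv @ p)))
```

Because the columns of a transition-rate matrix sum to zero and its eigenvalues have non-positive real parts, the exact exponential conserves total probability and does not amplify errors, which is what removes the stability restriction on the step size.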
We test both methods on the Clancy and Rudy (2002) INa Markov chain model.
With precomputed tables for functions of the transmembrane voltage, both
methods are comparable to the forward Euler method in accuracy and
computational cost, but allow longer time steps without numerical instability.
We conclude that both methods are of practical interest. MRL requires more
computations than HOS, but is formulated in general terms which can be readily
extended to other Markov chain channel models, whereas the utility of HOS
depends on the asymptotic properties of a particular model.
The significance of the methods is that they allow a considerable speed-up of
large-scale computations of cardiac excitation models by increasing the time
step, while maintaining acceptable accuracy and preserving numerical stability.
Comment: 9 pages, 5 figures main text + 14 pages, 1 figure appendix, as submitted in final form to IEEE TBME 2014/11/11. Copyright IEEE (2014)
Fast-slow asymptotics for a Markov chain model of fast sodium current
We explore the feasibility of using fast-slow asymptotics to eliminate the
computational stiffness of the discrete-state, continuous-time deterministic
Markov chain models of ionic channels underlying cardiac excitability. We focus
on a Markov chain model of the fast sodium current, and investigate its
asymptotic behaviour with respect to small parameters identified in different
ways.
Comment: 16 pages, 6 figures, as accepted to Chaos 2017/09/0
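As a generic illustration of the structure being exploited (not the specific INa model or its particular small parameters), a fast-slow system in singular-perturbation form and its leading-order reduction can be written as:

```latex
% Generic fast-slow (singular perturbation) form: x slow, y fast,
% \varepsilon measures the time-scale separation.
\dot{x} = f(x,y), \qquad \varepsilon\,\dot{y} = g(x,y), \qquad 0 < \varepsilon \ll 1.
% Zeroth-order reduction: the fast variables are slaved to the slow ones.
0 = g(x, y_0) \;\Rightarrow\; y_0 = h_0(x), \qquad \dot{x} = f\bigl(x, h_0(x)\bigr).
% First-order correction: expand y = h_0(x) + \varepsilon h_1(x) + O(\varepsilon^2)
% and match powers of \varepsilon in \varepsilon\,\dot{y} = g(x,y).
```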
Mathematical and Computational Study of Markovian Models of Ion Channels in Cardiac Excitation
This thesis studies numerical methods for integrating the master equations describing Markov chain models of cardiac ion channels. Such models describe the time evolution of the probability that ion channels are in a particular state. Numerical simulations of such models are often computationally demanding because many solvers require relatively small time steps to ensure numerical stability. The aim of this project is to analyse selected Markov chains and develop more efficient and accurate solvers.
We separate a Markov chain model into fast and slow time-scales based on the speed of transitions between states. Eliminating the fast transitions, we find asymptotic reductions at zeroth and first order in a small parameter describing the time-scale separation. We apply the theory to a Markov chain model of the fast sodium channel INa. We consider several variants for classifying some transitions as fast in order to find reduced systems that yield good accuracy. However, the time step size is still restricted by numerical instabilities.
We adapt the Rush-Larsen technique originally developed for gate models. Assuming that a transition matrix can be considered constant during each time step, we solve the Markov chain model analytically. The solution provides a recipe for a stable exponential solver, which we call "Matrix Rush-Larsen" (MRL). Using operator splitting, we design an even more flexible "hybrid" method that combines the MRL with other solvers. The resulting improvement in stability allows a large increase in the time step size. In some models, we obtain reasonably accurate results 27 times faster with a hybrid method than with the forward Euler method, even when the latter is run at the maximal time step allowed by its stability constraint.
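A minimal sketch of the hybrid idea (illustrative only, not the thesis code): the transition-rate matrix is assumed to be split into a stiff part A_fast, advanced with a matrix exponential, and a remainder A_slow, advanced with forward Euler; how the split is chosen is model-specific, and the names A_fast, A_slow, and hybrid_step are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def hybrid_step(p, A_fast, A_slow, dt):
    """One first-order (Lie) splitting step for dp/dt = (A_fast + A_slow) p.
    The stiff transitions are advanced via the matrix exponential, the
    remaining transitions with a cheap forward Euler substep."""
    p = expm(A_fast * dt) @ p      # exponential substep: stable for any dt
    return p + dt * (A_slow @ p)   # explicit substep: limited only by the non-stiff rates
```

The step size of the combined scheme is then restricted by accuracy and by the non-stiff part alone, which is the source of the reported speed-up.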
Finally, we extend the cardiac simulation package BeatBox with the developed exponential solvers. We upgrade the format of the "ionic" modules, which describe a cardiac cell, in order to allow for a specific definition of Markov chain models. We also modify a particular integrator for ionic modules to include the MRL and the hybrid method. To test the functionality of the code, we have converted a number of cellular models into the ionic format. The documented code is available in the official BeatBox package distribution.
Les Houches Guidebook to Monte Carlo Generators for Hadron Collider Physics
Recently the collider physics community has seen significant advances in the
formalisms and implementations of event generators. This review is a primer of
the methods commonly used for the simulation of high energy physics events at
particle colliders. We provide brief descriptions, references, and links to the
specific computer codes which implement the methods. The aim is to provide an
overview of the available tools, allowing the reader to ascertain which tool is
best for a particular application, but also making clear the limitations of
each tool.
Comment: 49 pages LaTeX. Compiled by the Working Group on Quantum ChromoDynamics and the Standard Model for the Workshop ``Physics at TeV Colliders'', Les Houches, France, May 2003. To appear in the proceeding
The Magnus expansion and some of its applications
Approximate resolution of linear systems of differential equations with
varying coefficients is a recurrent problem shared by a number of scientific
and engineering areas, ranging from Quantum Mechanics to Control Theory. When
formulated in operator or matrix form, the Magnus expansion furnishes an
elegant setting to build up approximate exponential representations of the
solution of the system. It provides a power series expansion for the
corresponding exponent and is sometimes referred to as Time-Dependent
Exponential Perturbation Theory. Every Magnus approximant corresponds in
Perturbation Theory to a partial re-summation of infinite terms with the
important additional property of preserving at any order certain symmetries of
the exact solution. The goal of this review is threefold. First, to collect a
number of developments scattered through half a century of scientific
literature on Magnus expansion. They concern the methods for the generation of
terms in the expansion, estimates of the radius of convergence of the series,
generalizations and related non-perturbative expansions. Second, to provide a
bridge to its implementation as a generator of special-purpose numerical
integration methods, a field of intense activity during the last decade. Third,
to illustrate with examples the kind of results one can expect from the Magnus
expansion in comparison with those from both perturbative schemes and standard
numerical integrators. We buttress this point with a review of the wide range
of physical applications found by the Magnus expansion in the literature.
Comment: Report on the Magnus expansion for differential equations and its applications to several physical problem
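As a concrete, minimal illustration (not taken from the review itself), the simplest Magnus-type integrator is the second-order exponential-midpoint rule for Y' = A(t)Y, obtained by keeping only the first term of the expansion evaluated at the midpoint of each step; the 2x2 system below is an arbitrary example.

```python
import numpy as np
from scipy.linalg import expm

def magnus2_step(Y, A, t, h):
    """Second-order Magnus (exponential midpoint) step for Y' = A(t) Y:
    Omega ~ h * A(t + h/2), so Y(t + h) ~ expm(Omega) @ Y(t)."""
    return expm(h * A(t + 0.5 * h)) @ Y

# Arbitrary example: a skew-symmetric A(t), so the exact flow is orthogonal.
A = lambda t: np.array([[0.0, 1.0 + 0.5 * np.sin(t)],
                        [-(1.0 + 0.5 * np.sin(t)), 0.0]])
Y, t, h = np.eye(2), 0.0, 0.01
for _ in range(1000):
    Y = magnus2_step(Y, A, t, h)
    t += h
print(Y.T @ Y)   # remains the identity up to rounding: orthogonality is preserved
```

Because each update is a single matrix exponential of an element of the same Lie algebra as A(t), the approximant inherits structural properties of the exact solution (here, orthogonality), illustrating the symmetry-preservation property highlighted above.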
Molecular Dynamics Simulation
Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate ‘first-principles’ description needs only the Schroedinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly—dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise explicit statements only on the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...]
Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw
attention to the as yet unresolved issues of the detailed relationships
among power law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in Neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final Section
consists of general reflections on the implications of the reviewed data, and
identifies what appear to be issues of fundamental importance for future
research in the rapidly evolving topic of this review.
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers---applications, algorithms, architectures, microarchitectures, circuits and devices---have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon: architectures that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history, and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high-accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal and motivation for using an analog accelerator are efficiency and performance, but it comes with limitations in accuracy and problem size that we have to work around.
The first problem is how to express and solve problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing mostly focused on differential equations; algebraic equations played only a minor role. The secret to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable. The algebraic equations that underlie most workloads can be solved as differential equations,
and differential equations are naturally solvable in the analog accelerator chip. A hybrid analog-digital computer architecture can focus on solving linear and nonlinear algebra problems to support many workloads.
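A minimal sketch of that interchangeability (illustrative only; the digital forward-Euler loop below merely stands in for the analog accelerator's continuous-time evolution, and solve_as_ode is a placeholder name): a linear algebraic system Ax = b is recast as an ODE whose steady state is the solution.

```python
import numpy as np

def solve_as_ode(A, b, dt=1e-3, steps=20000):
    """Solve Ax = b by integrating dx/dt = -(A x - b) to steady state.
    On an analog accelerator this evolution would run in continuous time;
    here forward Euler stands in for the analog dynamics."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(steps):
        x -= dt * (A @ x - b)
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # symmetric positive definite, so the flow converges
b = np.array([1.0, 0.0])
print(solve_as_ode(A, b))         # approaches np.linalg.solve(A, b)
```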
The second problem is how to get accurate solutions using hybrid analog-digital computing. The reason that the analog computation model gives less accurate solutions is that it gives up representing numbers as digital binary values, instead using the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital for precise solutions. This thesis offers the novel insight that the trick to doing so is to solve nonlinear problems, for which low-precision guesses are useful starting points for conventional digital algorithms.
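A toy illustration of that guess-then-refine pattern (not the thesis implementation): a coarse quantization stands in for the limited precision of an analog solve, and Newton's method provides the digital refinement; the function names analog_guess and newton_refine are placeholders.

```python
import numpy as np

def analog_guess(x_rough, bits=8, full_scale=4.0):
    """Stand-in for an analog result: quantize a rough estimate to a few
    effective bits, mimicking limited analog precision."""
    q = full_scale / (2 ** bits)
    return np.round(x_rough / q) * q

def newton_refine(f, df, x0, tol=1e-12, max_iter=50):
    """Digital refinement: Newton iteration seeded by the low-precision guess."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: x**3 - 2.0           # nonlinear problem: cube root of 2
df = lambda x: 3.0 * x**2
x0 = analog_guess(1.3)              # cheap, imprecise starting point
print(newton_refine(f, df, x0))     # recovers full double precision
```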
The third problem is how to solve large problems using hybrid analog-digital computing. The reason the analog computation model cannot handle large problems is that it gives up step-by-step discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end. During computation, analog data flows through the hardware with no overhead from control logic or memory accesses. The downside is that the required hardware grows with the problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit within digital computer constraints, this thesis is a first attempt to consider these divide-and-conquer algorithms as an essential tool in using the analog model of computation.
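A minimal sketch of that divide-and-conquer idea (illustrative only): a linear system larger than a fixed-size solver is swept block by block, with each small diagonal-block solve standing in for one accelerator-sized subproblem; block_sweeps is a placeholder name.

```python
import numpy as np

def block_sweeps(A, b, block=2, sweeps=200):
    """Block-Jacobi iteration for Ax = b: each fixed-size diagonal block solve
    represents a subproblem small enough to fit on the accelerator."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        x_new = x.copy()
        for i in range(0, n, block):
            j = slice(i, min(i + block, n))
            rhs = b[j] - A[j, :] @ x + A[j, j] @ x[j]   # freeze off-block contributions
            x_new[j] = np.linalg.solve(A[j, j], rhs)
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])   # diagonally dominant, so the sweeps converge
b = np.array([1.0, 2.0, 3.0, 4.0])
print(block_sweeps(A, b))              # close to np.linalg.solve(A, b)
```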
As we enter the post-Moore’s law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show that analog accelerators are another specialized, unconventional architecture, one suited to solving problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery process, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
Tackling the development of hormone therapy resistance in breast cancer through mathematical modelling
Patients suffering from estrogen-driven breast cancer frequently develop resistance to hormone therapy that is hard to predict, which significantly complicates treatment. Current approaches for tackling this problem include cell models and clinical studies, both supported by sequencing technologies like RNA-seq, and offering different strengths and limitations. This dissertation addresses the challenge of predicting resistance to hormone therapy in breast cancer by merging advances in bioinformatics and Bayesian statistics, and applying them to two types of data – RNA-seq data and clinical data. First, we explore the statistical analysis of clinical data through Bayesian inference combined with enhanced Markov Chain Monte Carlo techniques, and introduce a novel algorithm for adaptive integration in prospective Modified Hamiltonian Monte Carlo (MHMC) methods. We demonstrate its positive effect on the performance of MHMC in biomedical applications using clinical data of breast cancer patients. Next, we propose and implement an RNA-seq pipeline within our interactive web app for the analysis of resistant breast cancer cell lines sequenced at CIC bioGUNE. Finally, we propose an original approach based on a Bayesian logistic regression model coupled with a simulated annealing-like algorithm for a combined analysis of RNA-seq and clinical data, and apply it to ad hoc data to obtain and validate, in silico and in vitro, a novel 6-gene signature for stratifying patient response to hormone therapy.
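A toy sketch of the general pattern described in the last step (not the dissertation's algorithm or data, and using a plain rather than Bayesian logistic regression as a stand-in): candidate gene subsets are scored by a logistic-regression fit and accepted or rejected with a simulated-annealing rule; all data and names below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))                # synthetic "expression" matrix: samples x genes
y = (X[:, :3].sum(axis=1) > 0).astype(int)    # synthetic response driven by the first 3 genes

def score(subset):
    """Mean log-likelihood of a logistic regression restricted to the gene subset."""
    model = LogisticRegression(max_iter=1000).fit(X[:, subset], y)
    p = model.predict_proba(X[:, subset])[:, 1]
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def anneal(k=6, iters=300, T0=1.0):
    """Simulated-annealing search for a k-gene signature maximizing the score."""
    subset = list(rng.choice(X.shape[1], size=k, replace=False))
    current = best = score(subset)
    best_subset = subset[:]
    for i in range(iters):
        T = T0 * (1.0 - i / iters) + 1e-3
        cand = subset[:]
        new_gene = int(rng.integers(X.shape[1]))
        if new_gene in cand:
            continue
        cand[int(rng.integers(k))] = new_gene   # propose swapping one gene
        s = score(cand)
        if s > current or rng.random() < np.exp((s - current) / T):
            subset, current = cand, s
            if s > best:
                best, best_subset = s, cand[:]
    return sorted(best_subset)

print(anneal())   # prints the selected 6-gene subset for this synthetic data
```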