Sieve estimation of constant and time-varying coefficients in nonlinear ordinary differential equation models by considering both numerical error and measurement error
This article considers estimation of constant and time-varying coefficients
in nonlinear ordinary differential equation (ODE) models where analytic
closed-form solutions are not available. The numerical solution-based nonlinear
least squares (NLS) estimator is investigated in this study. A numerical
algorithm such as the Runge--Kutta method is used to approximate the ODE
solution. The asymptotic properties are established for the proposed estimators
considering both numerical error and measurement error. The B-spline is used to
approximate the time-varying coefficients, and the corresponding asymptotic
theories in this case are investigated under the framework of the sieve
approach. Our results show that if the maximum step size of the p-th order
numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)}, the
numerical error is negligible compared to the measurement error. This result
provides theoretical guidance for selecting the step size in numerical
evaluations of ODEs. Moreover, we have shown that the numerical solution-based
NLS estimator and the sieve NLS estimator are strongly consistent. The sieve
estimator of constant parameters is asymptotically normal with the same
asymptotic covariance as in the case where the true ODE solution is
exactly known, while the estimator of the time-varying parameter has the
optimal convergence rate under some regularity conditions. The theoretical
results are also developed for the case when the step size of the ODE numerical
solver does not go to zero fast enough or the numerical error is comparable to
the measurement error. We illustrate our approach with both simulation studies
and clinical data on HIV viral dynamics.
Comment: Published at http://dx.doi.org/10.1214/09-AOS784 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
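The numerical solution-based NLS idea can be illustrated with a minimal sketch on an assumed toy model, not the paper's HIV dynamics: a fixed-step Runge-Kutta solver evaluates the logistic ODE dx/dt = θx(1 − x) inside the least-squares objective. The step size, parameter grid, and noise level below are illustrative assumptions.

```python
import numpy as np

def rk4_solve(theta, x0, t_grid, h):
    """Classical fixed-step Runge-Kutta (RK4) solution of the toy logistic
    ODE dx/dt = theta * x * (1 - x), evaluated at the observation times."""
    def f(x):
        return theta * x * (1.0 - x)
    xs, x, t = [x0], x0, t_grid[0]
    for t_next in t_grid[1:]:
        while t < t_next - 1e-12:
            s = min(h, t_next - t)          # do not overshoot the next time
            k1 = f(x)
            k2 = f(x + 0.5 * s * k1)
            k3 = f(x + 0.5 * s * k2)
            k4 = f(x + s * k3)
            x += (s / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
            t += s
        xs.append(x)
        t = t_next
    return np.array(xs)

# Simulated noisy observations from the true parameter.
rng = np.random.default_rng(0)
true_theta, x0 = 1.5, 0.1
t_obs = np.linspace(0.0, 5.0, 21)
y = rk4_solve(true_theta, x0, t_obs, h=0.01) + 0.01 * rng.standard_normal(t_obs.size)

# Numerical solution-based NLS: minimise the sum of squared residuals,
# here by a simple grid search over the scalar parameter theta.
grid = np.linspace(0.5, 2.5, 201)
sse = [np.sum((y - rk4_solve(th, x0, t_obs, h=0.01)) ** 2) for th in grid]
theta_hat = grid[int(np.argmin(sse))]
```

The paper's point about step size shows up here directly: shrinking h drives the RK4 error well below the measurement noise, so the grid-search minimiser behaves like NLS against the exact solution.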
Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis
Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
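A minimal sketch of the two-step gradient-matching idea, on an assumed toy ODE dx/dt = −θx rather than a biological pathway model; the polynomial smoother, noise level, and closed-form solve are illustrative choices, not any specific method from the review:

```python
import numpy as np

# Step 1 (smoothing): interpolate noisy observations of the toy ODE
# dx/dt = -theta * x with a degree-5 polynomial fit.
rng = np.random.default_rng(1)
theta_true = 0.8
t = np.linspace(0.0, 2.0, 40)
y = np.exp(-theta_true * t) + 0.005 * rng.standard_normal(t.size)

coeffs = np.polyfit(t, y, deg=5)
xhat = np.polyval(coeffs, t)               # smoothed state estimate
dxhat = np.polyval(np.polyder(coeffs), t)  # slopes of the interpolant

# Step 2 (gradient matching): choose theta minimising
#   sum_i (dxhat_i - f(xhat_i; theta))^2   with   f(x; theta) = -theta * x.
# The RHS is linear in theta, so the least-squares minimiser is closed-form;
# note that no ODE is ever solved by numerical integration.
theta_hat = -np.sum(dxhat * xhat) / np.sum(xhat ** 2)
```

For nonlinear-in-parameter right-hand sides the second step becomes a generic optimisation over θ, but the structure is the same: only the interpolant's slopes, never an ODE solver, appear in the objective.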
Blended General Linear Methods based on Boundary Value Methods in the GBDF family
Among the methods for solving ODE-IVPs, the class of General Linear Methods
(GLMs) is able to encompass most of them, ranging from Linear Multistep
Formulae (LMF) to RK formulae. Moreover, it is possible to obtain methods that
overcome typical drawbacks of those classes, such as the order barriers for
stable LMF and the order reduction affecting RK
methods. Nevertheless, these goals are usually achieved at the price of a
higher computational cost. Consequently, many efforts have been made in order
to derive GLMs with particular features, to be exploited for their efficient
implementation. In recent years, the derivation of GLMs from particular
Boundary Value Methods (BVMs), namely the family of Generalized BDF (GBDF), has
been proposed for the numerical solution of stiff ODE-IVPs. In particular, this
approach has been recently developed, resulting in a new family of L-stable
GLMs of arbitrarily high order, whose theory is here completed and fully
worked out. Moreover, for each of these methods, it is possible to define a
corresponding Blended GLM which is equivalent to it from the point of view of
the stability and order properties. These blended methods, in turn, allow the
definition of efficient nonlinear splittings for solving the generated discrete
problems. A few numerical tests, confirming the excellent potential of such
blended methods, are also reported.
Comment: 22 pages, 8 figures.
A Hybrid Algorithm Based on Optimal Quadratic Spline Collocation and Parareal Deferred Correction for Parabolic PDEs
Parareal is a class of time-parallel numerical methods for time-dependent systems. In this paper, we consider a general linear parabolic PDE, use the optimal quadratic spline collocation (QSC) method for the space discretization, and apply the parareal technique in the time domain. A deferred correction technique is also used to improve the accuracy during the iterations; in fact, the optimal QSC method is itself a correction of the general QSC method. Along the temporal direction we embed the deferred correction iterations into parareal to construct a hybrid method, the parareal deferred correction (PDC) method. An error estimate is presented and the stability is analyzed. To save computational cost, we identify a simple way to balance the two kinds of iterations as far as possible. We also argue that the hybrid algorithm has better system efficiency and a shorter running time. Numerical experiments on multicore computers demonstrate the effectiveness of the hybrid algorithm.
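The parareal iteration underlying such hybrids can be sketched on an assumed scalar model problem dy/dt = λy; explicit Euler as the coarse propagator G and sub-stepped RK4 as the fine propagator F are illustrative stand-ins, and the paper's QSC space discretization and deferred correction are not reproduced here:

```python
import numpy as np

# Model problem dy/dt = lam * y on [0, T], y(0) = 1; exact solution exp(lam*T).
lam, y0, T, N = -1.0, 1.0, 1.0, 10   # N time slices of width dt
dt = T / N

def coarse(y):
    # Cheap coarse propagator G over one slice: a single explicit Euler step.
    return y + dt * lam * y

def fine(y, m=20):
    # Accurate fine propagator F over one slice: m classical RK4 substeps.
    h = dt / m
    for _ in range(m):
        k1 = lam * y
        k2 = lam * (y + 0.5 * h * k1)
        k3 = lam * (y + 0.5 * h * k2)
        k4 = lam * (y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

# Initial guess: one sequential sweep of the coarse solver.
U = np.empty(N + 1)
U[0] = y0
for n in range(N):
    U[n + 1] = coarse(U[n])

# Parareal correction: U_{n+1} <- G(U_n^new) + F(U_n^old) - G(U_n^old).
# The F evaluations use only old values, so in practice they run in parallel.
for _ in range(5):
    F_old = [fine(U[n]) for n in range(N)]
    G_old = [coarse(U[n]) for n in range(N)]
    for n in range(N):
        U[n + 1] = coarse(U[n]) + F_old[n] - G_old[n]
```

Each sweep costs only the serial coarse propagation plus one parallel batch of fine solves, which is the source of the wall-clock savings the abstract describes.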
Probabilistic Numerics and Uncertainty in Computations
We deliver a call to arms for probabilistic numerical methods: algorithms for
numerical tasks, including linear algebra, integration, optimization and
solving differential equations, that return uncertainties in their
calculations. Such uncertainties, arising from the loss of precision induced by
numerical calculation with limited time or hardware, are important for much
contemporary science and industry. Within applications such as climate science
and astrophysics, the need to make decisions on the basis of computations with
large and complex data has led to a renewed focus on the management of
numerical uncertainty. We describe how several seminal classic numerical
methods can be interpreted naturally as probabilistic inference. We then show
that the probabilistic view suggests new algorithms that can flexibly be
adapted to suit application specifics, while delivering improved empirical
performance. We provide concrete illustrations of the benefits of probabilistic
numeric algorithms on real scientific problems from astrometry and astronomical
imaging, while highlighting open problems with these new algorithms. Finally,
we describe how probabilistic numerical methods provide a coherent framework
for identifying the uncertainty in calculations performed with a combination of
numerical algorithms (e.g. both numerical optimisers and differential equation
solvers), potentially allowing the diagnosis (and control) of error sources in
computations.
Comment: Author-generated postprint. 17 pages, 4 figures, 1 table.
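One concrete way a solver can "return uncertainties in its calculations" is a stochastically perturbed integrator: run an ensemble of solves where each step injects noise on the scale of the local truncation error, so the ensemble spread tracks the discretisation error. This is a generic sketch in the spirit of this literature, not any specific algorithm from the paper; the model problem, step size, and noise scale are assumptions.

```python
import numpy as np

# Ensemble of perturbed explicit-Euler solves of dy/dt = lam * y:
# each step adds Gaussian noise with standard deviation h**2, i.e. on the
# order of Euler's local truncation error, so the ensemble standard
# deviation serves as a numerical-error estimate alongside the mean.
rng = np.random.default_rng(2)
lam, y0, T, h = -1.0, 1.0, 1.0, 0.05
n_steps, n_samples = int(round(T / h)), 200

samples = np.empty(n_samples)
for s in range(n_samples):
    y = y0
    for _ in range(n_steps):
        y = y + h * lam * y + rng.normal(0.0, h ** 2)
    samples[s] = y

mean, std = samples.mean(), samples.std()  # point estimate plus uncertainty
```

The returned std is the quantity a downstream computation could propagate, which is the coherent-pipeline picture the abstract ends on.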