On Lipschitz continuity of nonlinear differential operators
In connection with approximations for nonlinear evolution equations, it is standard to assume that nonlinear terms are at least locally Lipschitz continuous. However, it is shown here that f = f(X, ∇u(X)) is Lipschitz continuous from the subspace W^{1,∞} ⊂ L₂ into W^{1,2}, and maps W^{2,∞} into W^{1,∞}, if and only if f is affine with W^{1,∞} coefficients. In fact, a local version of this claim is proved
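Spelled out, the affine characterization above asserts the following (a reconstruction for readability; the coefficient names a_i and b are illustrative, not the paper's notation):

```latex
f(X,\nabla u(X)) \;=\; \sum_{i=1}^{n} a_i(X)\,\partial_{x_i} u(X) \;+\; b(X),
\qquad a_i,\ b \in W^{1,\infty}
```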
Galerkin/Runge-Kutta discretizations for parabolic equations with time dependent coefficients
A new class of fully discrete Galerkin/Runge-Kutta methods is constructed and analyzed for linear parabolic initial boundary value problems with time dependent coefficients. Unlike any classical counterpart, this class offers arbitrarily high order convergence while significantly avoiding what has been called order reduction. In support of this claim, error estimates are proved, and computational results are presented. Additionally, since the time stepping equations involve coefficient matrices changing at each time step, a preconditioned iterative technique is used to solve the linear systems only approximately. Nevertheless, the resulting algorithm is shown to preserve the original convergence rate while using only the order of work required by the base scheme applied to a linear parabolic problem with time independent coefficients. Furthermore, it is noted that special Runge-Kutta methods allow computations to be performed in parallel so that the final execution time can be reduced to that of a low order method
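A much-simplified sketch of the preconditioning idea follows: backward Euler stands in for the paper's higher-order Runge-Kutta schemes, and all parameters are illustrative. Although the system matrix changes at every time step with the coefficient a(t), a single preconditioner built from the frozen coefficient a(0) is reused throughout, and each linear system is solved only approximately with a warm start:

```python
import numpy as np

def pcg(A, b, x0, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient for symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# Model problem: u_t = a(t) u_xx on (0,1), zero Dirichlet boundary values,
# second-order differences in space, backward Euler in time.
n, dt = 99, 1e-3
h = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)
a = lambda t: 1.0 + 0.5 * np.sin(t)      # time-dependent coefficient

# Preconditioner: inverse of the matrix frozen at t = 0, built once.
M_inv = np.linalg.inv(I - dt * a(0.0) * L)

x_grid = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x_grid)               # initial condition
t = 0.0
for _ in range(100):
    t += dt
    A = I - dt * a(t) * L                # matrix changes each time step
    u = pcg(A, u, x0=u, M_inv=M_inv)     # approximate solve, warm-started
```

Because a(t) drifts only slowly away from a(0), the frozen-coefficient preconditioner stays effective and each step costs just a few iterations, which is the work-saving effect the abstract describes.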
Galerkin/Runge-Kutta discretizations for semilinear parabolic equations
A new class of fully discrete Galerkin/Runge-Kutta methods is constructed and analyzed for semilinear parabolic initial boundary value problems. Unlike any classical counterpart, this class offers arbitrarily high, optimal order convergence. In support of this claim, error estimates are proved, and computational results are presented. Furthermore, it is noted that special Runge-Kutta methods allow computations to be performed in parallel so that the final execution time can be reduced to that of a low order method
On implicit Runge-Kutta methods for parallel computations
Implicit Runge-Kutta methods which are well suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles; these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A₀-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given
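Why distinct poles enable parallelism can be sketched in a toy setting (illustrative only, not one of the paper's actual MIRK schemes). Take the rational approximation R(z) = 1/((1 - b1 z)(1 - b2 z)) with distinct real poles; its partial-fraction form R(z) = c1/(1 - b1 z) + c2/(1 - b2 z) turns one step y_{n+1} = R(hA) y_n into two independent linear solves:

```python
import numpy as np

# Distinct real pole parameters (assumed values for illustration)
b1, b2 = 2.0 / 3.0, 1.0 / 3.0
# Partial-fraction weights: c1/(1-b1 z) + c2/(1-b2 z) = 1/((1-b1 z)(1-b2 z))
c1, c2 = b1 / (b1 - b2), -b2 / (b1 - b2)

rng = np.random.default_rng(0)
A = -np.eye(3) - 0.1 * rng.standard_normal((3, 3))  # a stable test matrix
h, y = 0.05, np.array([1.0, 2.0, 3.0])
I = np.eye(3)

# Sequential (product) form: two nested solves, one after the other.
y_seq = np.linalg.solve(I - h * b2 * A,
                        np.linalg.solve(I - h * b1 * A, y))

# Parallel (partial-fraction) form: the two solves are independent and
# could run on separate processors; only the final sum couples them.
v1 = np.linalg.solve(I - h * b1 * A, y)
v2 = np.linalg.solve(I - h * b2 * A, y)
y_par = c1 * v1 + c2 * v2

assert np.allclose(y_seq, y_par)   # both forms give the same step
```

With real poles each stage solve is a real linear system, which is the practical motivation, beyond order reduction, for the restriction to real MIRKs.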
Consistent Discretizations for Vanishing Regularization Solutions to Image Processing Problems
A model problem is used to represent a typical image processing problem of reconstructing an unknown in the face of incomplete data. A consistent discretization for a vanishing regularization solution is defined so that, in the absence of noise, limits first with respect to regularization and then with respect to grid refinement agree with a continuum counterpart defined in terms of a saddle point formulation. It is proved and demonstrated computationally for an artificial example and for a realistic example with magnetic resonance images that a mixed finite element discretization is consistent in the sense defined here. On the other hand, it is demonstrated computationally that a standard finite element discretization is not consistent, and the reason for the inconsistency is suggested in terms of theoretical and computational evidence
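A much-simplified analogue of the vanishing-regularization limit can be seen in a tiny linear model (illustrative numbers; the paper's setting involves a saddle-point formulation and grid refinement, neither of which appears here). For an underdetermined system A x = b, the Tikhonov solution x_a = (AᵀA + aI)⁻¹Aᵀb converges as a → 0 to the minimum-norm solution A⁺b; the consistency question above is whether a discretization preserves this limit:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])          # 2 equations, 3 unknowns
b = np.array([6.0, 2.0])
x_pinv = np.linalg.pinv(A) @ b           # minimum-norm solution A+ b

for a in [1e-2, 1e-4, 1e-6, 1e-8]:
    # Tikhonov-regularized solution for regularization weight a
    x_a = np.linalg.solve(A.T @ A + a * np.eye(3), A.T @ b)
    print(a, np.linalg.norm(x_a - x_pinv))   # gap shrinks as a -> 0
```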
Calibration Methodology for the Scripps 13C/12C and 18O/16O stable Isotope program 1992-2018
This report details the calibration method for measurements of 13C/12C and 18O/16O ratios of atmospheric CO2 by the Scripps CO2 program from 1992-2018. The method depends principally on repeat analysis of CO2 derived from a suite of high-pressure gas cylinders filled with compressed natural air pumped at La Jolla. The first set of three cylinders was given isotopic assignments in 1994 based on comparisons with the material artifacts NBS16, NBS17, and NBS19. Six cylinders subsequently brought into service were assigned values by comparing directly or indirectly with this first set. A tenth cylinder with natural CO2 in air was obtained from MPI Jena. Aliquots of CO2 from these cylinders, which serve as secondary standards, were extracted into heat-sealed glass ampoules (“flame-off tubes”) before introduction into the mass spectrometer. Some of these ampoules have been stored for many years before analysis, allowing long-term isotopic drift of the cylinders to be quantified. All secondary standards contain natural levels of N2O. The method corrects for any detected drift, while also applying corrections for N2O interference, for isobaric interferences (“Craig correction”), and for an inter-lab offset identified in early comparisons with the isotope lab at the University of Groningen. The Jena cylinder was found to be drifting upwards in δ18O at a rate of +0.10 ‰ per decade. Five of the other nine cylinders were found to be drifting downwards in δ18O, δ13C, or both, at rates of up to -0.11 ‰ per decade. The secondary standards were applied uniformly across a transition to a new mass spectrometer in 2000, thereby establishing continuity across this transition. Results are also presented for instrumental precision based on replicate analyses of standards. Drift-corrected analyses of the Jena cylinder establish offsets of +0.037 ‰ in δ13C and +0.041 ‰ in δ18O between the Scripps and JRAS isotopic scales (Scripps more positive)
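The drift-quantification step described above can be sketched as a simple regression (synthetic numbers, NOT Scripps data, and the real correction handles N2O, isobaric, and inter-lab terms as well): ampoules extracted early but analyzed years later reveal the cylinder's in-situ drift, and regressing measured delta values against time-in-cylinder gives a rate that is then removed:

```python
import numpy as np

# Synthetic example: five ampoules from one cylinder, analyzed after
# different storage-free residence times of the parent gas in the cylinder.
years_in_cylinder = np.array([0.5, 3.0, 7.5, 12.0, 18.0])
d13C_measured = np.array([-8.200, -8.204, -8.208, -8.213, -8.220])  # per mil

# Least-squares line: measured = intercept + rate * years
rate, intercept = np.polyfit(years_in_cylinder, d13C_measured, 1)
print(f"drift rate: {rate * 10:+.3f} per mil per decade")

def corrected(delta, t):
    """Remove the fitted linear drift from a value measured at elapsed time t."""
    return delta - rate * t
```

A downward rate on this scale (about -0.011 ‰ per decade in the synthetic data) is an order of magnitude smaller than the largest drifts reported in the abstract, but the correction mechanics are the same.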
Appropriate models for the management of infectious diseases
Background Mathematical models have become invaluable management tools for epidemiologists, both shedding light on the mechanisms underlying observed dynamics and making quantitative predictions about the effectiveness of different control measures. Here, we explain how substantial biases are introduced by two important, yet largely ignored, assumptions at the core of the vast majority of such models.
Methods and Findings First, we use analytical methods to show that (i) ignoring the latent period or (ii) making the common assumption of exponentially distributed latent and infectious periods (when including the latent period) always results in underestimating the basic reproductive ratio of an infection from outbreak data. We then proceed to illustrate these points by fitting epidemic models to data from an influenza outbreak. Finally, we document how such unrealistic a priori assumptions concerning model structure give rise to systematically overoptimistic predictions on the outcome of potential management options.
Conclusion This work aims to highlight that, when developing models for public health use, we need to pay careful attention to the intrinsic assumptions embedded within classical frameworks
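The underestimation bias can be illustrated with the standard growth-rate relations for models with exponentially distributed periods (textbook results, not the paper's actual fits; parameter values are influenza-like guesses). From the same observed exponential growth rate r, an SIR model that ignores the latent period infers a smaller basic reproductive ratio than an SEIR model that includes it:

```python
def R0_sir(r, gamma):
    """SIR: exponentially distributed infectious period, mean 1/gamma."""
    return 1.0 + r / gamma

def R0_seir(r, sigma, gamma):
    """SEIR: exponential latent (mean 1/sigma) and infectious periods."""
    return (1.0 + r / sigma) * (1.0 + r / gamma)

r, sigma, gamma = 0.2, 0.5, 0.5   # per day; illustrative influenza-like values
print(R0_sir(r, gamma))           # ~1.4  -- latent period ignored
print(R0_seir(r, sigma, gamma))   # ~1.96 -- latent period included
```

Since the factor (1 + r/sigma) always exceeds one for a growing outbreak, ignoring latency systematically underestimates R0, and control measures calibrated to the smaller value will look more effective than they are, which is the overoptimism the abstract warns about.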
Time-optimized high-resolution readout-segmented diffusion tensor imaging
Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique that enables the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically, dependent on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features makes processing of the diffusion data fully user-independent. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization over un-regularized data with respect to parameters relevant for fiber tracking, such as Mean Fiber Length, Track Count, Volume, and Voxel Count. Specifically, for in vivo data, findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1×1×2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context
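A much-simplified scalar analogue of total-variation regularization follows (the paper applies it to the full diffusion tensor with a spatially varying, SNR-driven parameter; here there is one 2D image, one global weight, and plain gradient descent on a smoothed TV energy, all parameters illustrative):

```python
import numpy as np

def tv_denoise(f, lam=0.15, eps=0.1, tau=0.1, iters=300):
    """Gradient descent on 0.5*||u-f||^2 + lam*sum sqrt(|grad u|^2 + eps^2)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)    # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # Discrete divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)         # explicit gradient step
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                          # piecewise-constant "image"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print(np.mean((noisy - clean)**2), np.mean((denoised - clean)**2))
```

TV penalizes total gradient rather than squared gradient, so it suppresses noise while largely preserving the sharp edges that matter for subsequent fiber tracking; the spatially varying weight in the paper serves exactly to tune lam per voxel instead of globally.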