Dilational interpolatory inequalities
Operationally, index functions of variable Hilbert scales can be viewed as generators for families of spaces and norms and, thereby, for associated scales of interpolatory inequalities. Using one-parameter families of index functions based on dilations of given index functions, new classes of interpolatory inequalities, dilational interpolatory inequalities (DIIs), are constructed. They have ordinary Hilbert scale (OHS) interpolatory inequalities as special cases, and they represent a precise and concise subset of variable Hilbert scale interpolatory inequalities appropriate for deriving error estimates for peak-sharpening deconvolution. Only for Gaussian and Lorentzian deconvolution do the DIIs take the standard form of OHS interpolatory inequalities. For other types of deconvolution, such as the Voigt profile, which is the convolution of a Gaussian with a Lorentzian, the DIIs yield a new class of interpolatory inequality. An analysis of deconvolution peak sharpening is used to illustrate the role of DIIs in deriving appropriate error estimates. The authors also wish to acknowledge the support of the Radon Institute for Computational and Applied Mathematics, where the initial draft of this paper was finalized.
Spatial and temporal rainfall approximation using additive models
We investigate the approximation of rainfall data using additive models. In our model, space and elevation are treated as the predictor variables. The multi-dimensional approximation problem is demonstrated using rainfall data collected by ACTEW Corporation.
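The additive-model idea above — fitting a sum of one-dimensional component functions, one per predictor — is commonly estimated by backfitting. A minimal sketch follows; it is not the authors' implementation, and the crude bin smoother, bin count, and synthetic data are assumptions for illustration only:

```python
import numpy as np

def bin_smooth(x, r, nbins=20):
    # Crude one-dimensional smoother: average the partial residuals r
    # within equal-width bins of the predictor x.
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    means = np.zeros(nbins)
    for b in range(nbins):
        in_bin = idx == b
        means[b] = r[in_bin].mean() if in_bin.any() else 0.0
    return means[idx]

def backfit(X, y, iters=20):
    # Backfitting for y ~ alpha + f_1(x_1) + ... + f_p(x_p):
    # cycle over predictors, smoothing the partial residuals against each.
    n, p = X.shape
    f = np.zeros((p, n))          # fitted component values at the data points
    alpha = y.mean()
    for _ in range(iters):
        for j in range(p):
            partial = y - alpha - f.sum(axis=0) + f[j]   # leave f_j out
            f[j] = bin_smooth(X[:, j], partial)
            f[j] -= f[j].mean()   # centre each component for identifiability
    return alpha, f
```

Any scatterplot smoother (splines, local regression) can replace `bin_smooth`; the centring step keeps the decomposition into intercept plus components unique.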
Smolyak's algorithm: A powerful black box for the acceleration of scientific computations
We provide a general discussion of Smolyak's algorithm for the acceleration
of scientific computations. The algorithm first appeared in Smolyak's work on
multidimensional integration and interpolation. Since then, it has been
generalized in multiple directions and has been associated with the keywords:
sparse grids, hyperbolic cross approximation, combination technique, and
multilevel methods. Variants of Smolyak's algorithm have been employed in the
computation of high-dimensional integrals in finance, chemistry, and physics,
in the numerical solution of partial and stochastic differential equations, and
in uncertainty quantification. Motivated by this broad and ever-increasing
range of applications, we describe a general framework that summarizes
fundamental results and assumptions in a concise application-independent
manner.
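The combination-technique variant of Smolyak's algorithm mentioned above can be sketched in two dimensions: the sparse-grid quadrature is a signed sum of small tensor-product rules along two diagonals of the level index set. The choice of nested trapezoidal rules on [0, 1] and the level parameter below are illustrative assumptions, not tied to any of the applications listed:

```python
import numpy as np

def trap_rule(level):
    # One-dimensional trapezoidal rule on [0, 1] with 2**level + 1 points.
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_2d(f, L):
    # Combination technique (L >= 2): sum the tensor rules with
    # level indices i + j = L, and subtract those with i + j = L - 1.
    total = 0.0
    for offset, sign in ((0, 1.0), (1, -1.0)):
        for i in range(1, L - offset):
            j = L - offset - i
            xi, wi = trap_rule(i)
            xj, wj = trap_rule(j)
            X, Y = np.meshgrid(xi, xj, indexing="ij")
            total += sign * np.sum(np.outer(wi, wj) * f(X, Y))
    return total
```

Each constituent tensor rule uses at most O(2^L) points, so the signed sum over the two diagonals touches far fewer points than the full 2^L-by-2^L tensor grid while retaining comparable accuracy up to a logarithmic factor.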
Piecewise polynomial approximation of probability density functions with application to uncertainty quantification for stochastic PDEs
The probability density function (PDF) associated with a given set of samples
is approximated by a piecewise-linear polynomial constructed with respect to a
binning of the sample space. The kernel functions are a compactly supported
basis for the space of such polynomials, i.e. finite element hat functions,
that are centered at the bin nodes rather than at the samples, as is the case
for the standard kernel density estimation approach. This feature naturally
provides an approximation that is scalable with respect to the sample size. On
the other hand, unlike other strategies that use a finite element approach, the
proposed approximation does not require the solution of a linear system. In
addition, a simple rule that relates the bin size to the sample size eliminates
the need for bandwidth selection procedures. The proposed density estimator has
unitary integral, does not require a constraint to enforce positivity, and is
consistent. The proposed approach is validated through numerical examples in
which samples are drawn from known PDFs. The approach is also used to determine
approximations of (unknown) PDFs associated with outputs of interest that
depend on the solution of a stochastic partial differential equation.
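The construction described above — hat functions centred at bin nodes, coefficients obtained from sample averages rather than a linear solve — can be sketched in one dimension. The fixed bin count and the uniform grid over the sample range are simplifying assumptions for illustration:

```python
import numpy as np

def hat_density(samples, nbins):
    # Piecewise-linear density estimate: a combination of finite element
    # hat functions centred at the nodes of a uniform binning.
    lo, hi = samples.min(), samples.max()
    nodes = np.linspace(lo, hi, nbins + 1)
    h = nodes[1] - nodes[0]
    # Integral of each hat function: h in the interior, h/2 at the ends.
    w = np.full(nbins + 1, h)
    w[0] = w[-1] = h / 2
    # Node coefficients: average hat-function value over the samples,
    # divided by the basis integral. No linear system is solved.
    vals = np.clip(1.0 - np.abs(samples[None, :] - nodes[:, None]) / h, 0.0, None)
    c = vals.mean(axis=1) / w

    def density(x):
        basis = np.clip(
            1.0 - np.abs(np.atleast_1d(x)[None, :] - nodes[:, None]) / h, 0.0, None
        )
        return (c[:, None] * basis).sum(axis=0)

    return density, nodes, c
```

Because the hat functions form a partition of unity on [lo, hi], the estimate integrates to one by construction, and the nonnegative coefficients make it nonnegative without any constraint — the two properties the abstract highlights.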