
    Discontinuous information in the worst case and randomized settings

    We believe that discontinuous linear information is never more powerful than continuous linear information for approximating continuous operators. We prove such a result in the worst case setting. In the randomized setting we consider compact linear operators defined between Hilbert spaces. In this case, the use of discontinuous linear information in the randomized setting cannot be much more powerful than continuous linear information in the worst case setting. These results can be applied when function evaluations are used, even if function values are defined only almost everywhere.

    Adaptive approximation of monotone functions

    We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure depends on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, between smooth and piecewise-smooth functions, and between monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
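    The core idea of adaptive, box-based approximation of a monotone function can be sketched as follows. This is a minimal illustration for the $L^1$ case, not the GreedyBox algorithm itself; the function name `adaptive_monotone_approx` and the greedy splitting rule shown are our own simplification. Between consecutive query points, monotonicity confines $f$ to a rectangular box, and the algorithm greedily splits the interval whose box has the largest area.

    ```python
    import bisect

    def adaptive_monotone_approx(f, a, b, eps):
        """Approximate a non-decreasing f on [a, b] in L^1 (Lebesgue) norm.
        Between queried points x_i < x_{i+1}, monotonicity confines f to the
        box [x_i, x_{i+1}] x [f(x_i), f(x_{i+1})], so the midpoint function
        has L^1 error at most half the summed box areas.  We repeatedly
        split the interval whose box area is largest."""
        xs = [a, b]
        ys = [f(a), f(b)]

        def area(i):
            return (xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])

        # Stop once the certified L^1 error bound (half the total area) <= eps.
        while sum(area(i) for i in range(len(xs) - 1)) / 2 > eps:
            i = max(range(len(xs) - 1), key=area)   # greedy: largest box
            m = (xs[i] + xs[i + 1]) / 2
            xs.insert(i + 1, m)
            ys.insert(i + 1, f(m))                  # one new evaluation

        # Piecewise-constant approximation: the vertical midpoint of each box.
        def fhat(x):
            i = min(bisect.bisect_right(xs, x) - 1, len(xs) - 2)
            return (ys[i] + ys[i + 1]) / 2

        return fhat, len(xs)  # approximation and number of evaluations used
    ```

    Note that the number of evaluations adapts to the specific $f$: a flat function needs few splits, while a steep region is refined automatically.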

    An Analysis of the Quasicontinuum Method

    The aim of this paper is to present a streamlined and fully three-dimensional version of the quasicontinuum (QC) theory of Tadmor et al. and to analyze its accuracy and convergence characteristics. Specifically, we assess the effect of the summation rules on accuracy; we determine the rate of convergence of the method in the presence of strong singularities, such as point loads; and we assess the effect of the refinement tolerance, which controls the rate at which new nodes are inserted in the model, on the development of dislocation microstructures. Comment: 30 pages, 16 figures. To appear in Journal of the Mechanics and Physics of Solids.

    On sequential and parallel solution of initial value problems

    We deal with the solution of systems $z'(x) = f(x, z(x))$, $x \in [0, 1]$, $z(0) = \eta$, where the function $f: [0, 1] \times \mathbb{R}^s \to \mathbb{R}^s$ has $r$ continuous bounded partial derivatives. We assume that available information about the problem consists of evaluations of $n$ linear functionals at $f$. If an adaptive choice of these functionals is allowed (which is suitable for sequential processing), then the minimal error of an algorithm is of order $n^{-(r+1)}$, for any dimension $s$. We show that if nonadaptive information (well-suited for parallel computation) is used, then the minimal error cannot be essentially less than $n^{-(r+1)/(s+1)}$. Thus, adaption is significantly better, and the advantage of using it grows with $s$. This yields that the $\varepsilon$-complexity in sequential computation is smaller for adaptive information. For parallel computation, nonadaptive information is more efficient only if the number of processors is very large, depending exponentially on the dimension $s$. We conclude that using parallelism by computing the information nonadaptively is not feasible.
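    A toy illustration of why adaptively chosen information helps for initial value problems is step-size control in a sequential solver: each new evaluation point of $f$ depends on the results of earlier evaluations. The sketch below is our own illustration, not one of the algorithms analyzed in the paper; the function name `adaptive_euler`, the tolerance, and the step-doubling error estimate are assumptions made for the example. It solves a scalar IVP on $[0, 1]$ and reports the number of $f$-evaluations, i.e. the information cost.

    ```python
    def adaptive_euler(f, z0, tol=1e-4):
        """Solve z'(x) = f(x, z(x)) on [0, 1] with z(0) = z0, choosing
        evaluation points adaptively: compare one Euler step of size h with
        two steps of size h/2, accept the step when the discrepancy is
        small, and otherwise halve h."""
        x, z, h = 0.0, z0, 0.1
        evals = 0
        while x < 1.0:
            h = min(h, 1.0 - x)
            k1 = f(x, z); evals += 1
            z_full = z + h * k1                  # one step of size h
            z_half = z + (h / 2) * k1
            k2 = f(x + h / 2, z_half); evals += 1
            z_two = z_half + (h / 2) * k2        # two steps of size h/2
            err = abs(z_two - z_full)            # local error estimate
            if err <= tol * h or h < 1e-10:
                x += h
                z = 2 * z_two - z_full           # Richardson extrapolation
                h *= 1.5                         # try a larger step next
            else:
                h /= 2                           # refine adaptively
        return z, evals
    ```

    For example, `adaptive_euler(lambda x, z: z, 1.0)` approximates $e = z(1)$ for $z' = z$. A nonadaptive solver must fix all its evaluation points in advance, which is exactly the restriction the paper shows to be costly.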

    Some Results on the Complexity of Numerical Integration

    This is a survey (21 pages, 124 references) written for the MCQMC 2014 conference in Leuven, April 2014. We start with the seminal paper of Bakhvalov (1959) and end with new results on the curse of dimension and on the complexity of oscillatory integrals. Some small errors of earlier versions are corrected.