    Brownian Confidence Bands on Monte Carlo Output

    When considering a Monte Carlo estimation procedure, the path produced by successive partial estimates is often used as a guide for informal convergence diagnostics. However, the confidence region associated with that path cannot be derived simplistically from the confidence interval for the estimate itself. An asymptotically correct approach can be based on the Brownian motion approximation of the path, but no exact formula for the corresponding area-minimizing confidence region is yet known. We construct proxy regions based on local time arguments and consider numerical approximations. These are then available for a more incisive assessment of the Monte Carlo procedure and thence of the estimate itself.
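    A minimal Python sketch of the kind of diagnostic the abstract describes is given below: it draws a simultaneous band around the running-mean path using the Brownian-motion approximation of the partial-sum process. The constant-width band centred at the final estimate and the constant 2.241 (the approximate 95% quantile of sup_{0<=t<=1} |W(t)|) are simplifications for illustration, not the area-minimizing region constructed in the paper.

        # Minimal sketch, not the paper's area-minimizing region: a simultaneous
        # band for the running-mean path of a Monte Carlo estimate, based on the
        # Brownian-motion approximation S_k - k*mu ~ sigma * W(k).  Centering the
        # band at the final estimate instead of the unknown mean is a further
        # approximation made only for this diagnostic.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        x = rng.exponential(scale=2.0, size=n)       # toy simulation; target mean is 2.0

        k = np.arange(1, n + 1)
        running_mean = np.cumsum(x) / k              # path of successive partial estimates
        sigma_hat = x.std(ddof=1)

        c = 2.241                                    # approx. 95% quantile of sup |W(t)|, t in [0, 1]
        half_width = c * sigma_hat * np.sqrt(n) / k  # simultaneous (not pointwise) half-width

        lower = running_mean[-1] - half_width
        upper = running_mean[-1] + half_width
        inside = np.all((running_mean >= lower) & (running_mean <= upper))
        print(f"final estimate: {running_mean[-1]:.4f}, whole path inside band: {inside}")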

    Sieve-based confidence intervals and bands for Lévy densities

    The estimation of the Lévy density, the infinite-dimensional parameter controlling the jump dynamics of a Lévy process, is considered here under a discrete-sampling scheme. In this setting, the jumps are latent variables, the statistical properties of which can be assessed when the frequency and time horizon of observations increase to infinity at suitable rates. Nonparametric estimators for the Lévy density based on Grenander's method of sieves were proposed in Figueroa-López [IMS Lecture Notes 57 (2009) 117–146]. In this paper, central limit theorems for these sieve estimators, both pointwise and uniform on an interval away from the origin, are obtained, leading to pointwise confidence intervals and bands for the Lévy density. In the pointwise case, our estimators converge to the Lévy density at a rate that is arbitrarily close to the rate of the minimax risk of estimation on smooth Lévy densities. In the case of uniform bands and discrete regular sampling, our results are consistent with the case of density estimation, achieving a rate of order arbitrarily close to $\log^{-1/2}(n)\cdot n^{-1/3}$, where $n$ is the number of observations. The convergence rates are valid provided that the Lévy density $s$ is smooth enough and that the time horizon $T_n$ and the dimension of the sieve are appropriately chosen in terms of $n$. Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) at http://dx.doi.org/10.3150/10-BEJ286 by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
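    For a concrete picture of a sieve-type estimator of a Lévy density, the Python sketch below uses a histogram sieve on a window away from the origin for a simulated compound Poisson process; the jump distribution, the window [a, b], the bin count and the fixed sieve dimension are illustrative assumptions, not the paper's choices.

        # Minimal sketch of a histogram-sieve (projection-style) estimator of a
        # Levy density on a window away from the origin.  The compound Poisson
        # example with Exp(1) jumps, the window [0.5, 3.0] and the fixed number
        # of bins are assumptions made for illustration only.
        import numpy as np

        rng = np.random.default_rng(1)

        # True Levy density of a compound Poisson process with rate lam and
        # Exp(1) jumps: s(x) = lam * exp(-x) for x > 0.
        lam, n, delta = 5.0, 200_000, 0.01
        T_n = n * delta

        # Discretely sampled increments X_{i*delta} - X_{(i-1)*delta}; the jumps
        # themselves are latent.
        jumps_per_increment = rng.poisson(lam * delta, size=n)
        increments = np.array([rng.exponential(1.0, size=m).sum()
                               for m in jumps_per_increment])

        # Histogram sieve on [a, b]; in the paper the sieve dimension grows with n,
        # here it is fixed for simplicity.
        a, b, n_bins = 0.5, 3.0, 25
        edges = np.linspace(a, b, n_bins + 1)
        counts, _ = np.histogram(increments, bins=edges)
        s_hat = counts / (T_n * np.diff(edges))      # estimated Levy density per bin

        centers = 0.5 * (edges[:-1] + edges[1:])
        print("max abs error on the window:", np.max(np.abs(s_hat - lam * np.exp(-centers))))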

    Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal

    In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using a patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches. Comment: This is the final version accepted by CVPR 2015.
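    A minimal PyTorch sketch of the patch-level prediction step is shown below; the layer sizes, the 30x30 patch size and the number of candidate motion kernels are assumptions for illustration and not the architecture reported in the paper.

        # Minimal sketch (PyTorch), not the paper's exact architecture: a small CNN
        # mapping an image patch to a probability distribution over a discretized
        # set of candidate motion-blur kernels.  N_KERNELS and PATCH are assumed
        # values chosen only for this example.
        import torch
        import torch.nn as nn

        N_KERNELS = 73          # e.g. several blur lengths x orientations, plus "no blur"
        PATCH = 30              # assumed patch side length

        class PatchBlurCNN(nn.Module):
            def __init__(self, n_kernels: int = N_KERNELS):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 5 * 5, 256), nn.ReLU(),
                    nn.Linear(256, n_kernels),   # logits; softmax gives the kernel distribution
                )

            def forward(self, patches: torch.Tensor) -> torch.Tensor:
                return self.classifier(self.features(patches))

        model = PatchBlurCNN()
        patches = torch.randn(8, 3, PATCH, PATCH)      # batch of 8 random patches
        probs = torch.softmax(model(patches), dim=1)   # per-patch kernel distribution
        print(probs.shape)                             # (8, N_KERNELS)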

    New Goodness-of-Fit Tests and their Application to Nonparametric Confidence Sets

    Suppose one observes a process V on the unit interval, where dV(t) = f(t) dt + dW(t) with an unknown function f and standard Brownian motion W. We propose a particular test of one-point hypotheses about f which is based on suitably standardized increments of V. This test is shown to have desirable consistency properties if, for instance, f is restricted to various Hölder smoothness classes of functions. The test is mimicked in the context of nonparametric density estimation, nonparametric regression and interval censored data. Under shape restrictions on the parameter f, such as monotonicity or convexity, we obtain confidence sets for f adapting to its unknown smoothness.
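    To make the increment-based statistic concrete, the Python sketch below simulates the white-noise model and scans standardized increments of V over dyadic intervals, calibrating a critical value by Monte Carlo under the null hypothesis f = 0; this plain maximum over dyadic intervals and the bump signal are simplifications for illustration, not the multiscale statistic or the hypotheses studied in the paper.

        # Minimal sketch of a test based on standardized increments of V in the
        # white-noise model dV(t) = f(t) dt + dW(t).  The dyadic-interval scan and
        # the Monte Carlo calibration of the critical value are simplifications;
        # the grid size, number of null simulations and test signal are assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        m = 1024                      # grid size for discretizing [0, 1]
        dt = 1.0 / m

        def scan_statistic(dV: np.ndarray) -> float:
            """Max over dyadic intervals of |V(t) - V(s)| / sqrt(t - s), for f0 = 0."""
            V = np.concatenate([[0.0], np.cumsum(dV)])
            stat, length = 0.0, m
            while length >= 1:
                for start in range(0, m, length):
                    inc = V[start + length] - V[start]
                    stat = max(stat, abs(inc) / np.sqrt(length * dt))
                length //= 2
            return stat

        # Calibrate the 95% critical value under H0: f = 0 by simulation.
        null_stats = [scan_statistic(rng.normal(0.0, np.sqrt(dt), m)) for _ in range(500)]
        crit = np.quantile(null_stats, 0.95)

        # Observations from a signal with a local bump that violates H0.
        t = (np.arange(m) + 0.5) * dt
        f = 20.0 * ((t > 0.5) & (t < 0.625))
        dV_obs = f * dt + rng.normal(0.0, np.sqrt(dt), m)
        print(f"observed statistic: {scan_statistic(dV_obs):.2f}, 95% critical value: {crit:.2f}")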