
    On Questions of Decay and Existence for the Viscous Camassa-Holm Equations

    We consider the viscous n-dimensional Camassa-Holm equations, with n=2,3,4, in the whole space. We establish existence and regularity of the solutions and study the large-time behavior of the solutions in several Sobolev spaces. We first show that if the data is only in L^2 then the solution decays without a rate, and that this is the best that can be expected for data in L^2. For solutions with data in H^m \cap L^1 we obtain decay at an algebraic rate which is optimal in the sense that it coincides with the rate of the underlying linear part.

    A deconvolution approach to estimation of a common shape in a shifted curves model

    This paper considers the problem of adaptive estimation of a mean pattern in a randomly shifted curve model. We show that this problem can be transformed into a linear inverse problem, where the density of the random shifts plays the role of a convolution operator. An adaptive estimator of the mean pattern, based on wavelet thresholding, is proposed. We study its consistency for the quadratic risk as the number of observed curves tends to infinity, and this estimator is shown to achieve a near-minimax rate of convergence over a large class of Besov balls. This rate depends both on the smoothness of the common shape of the curves and on the decay of the Fourier coefficients of the density of the random shifts. Hence, this paper makes a connection between mean pattern estimation and the statistical analysis of linear inverse problems, which is a new point of view on curve registration and image warping problems. We also provide a new method to estimate the unknown random shifts between curves. Some numerical experiments are given to illustrate the performance of our approach and to compare it with an existing algorithm from the literature.
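In this shifted-curves model the Fourier coefficients of each observed curve are those of the common shape multiplied by a random phase, so averaging coefficients across curves and dividing by the Fourier coefficients of the shift density deconvolves the mean pattern. A minimal sketch of that deconvolution step, assuming the shift density's Fourier coefficients `gamma` are known (the paper estimates the shifts) and using a crude spectral cutoff in place of the paper's wavelet thresholding:

```python
import numpy as np

def estimate_mean_pattern(curves, gamma, cutoff):
    """Deconvolution estimator of the common shape f in the model
    Y_m(t_j) = f(t_j - tau_m) + noise: average the discrete Fourier
    coefficients over the curves, divide by the Fourier coefficients
    gamma_k of the (here assumed known) shift density, and keep only
    frequencies |k| <= cutoff -- a spectral cutoff standing in for
    the paper's wavelet thresholding."""
    n_grid = curves.shape[1]
    coeffs = np.fft.fft(curves, axis=1).mean(axis=0)   # average over curves
    k = np.fft.fftfreq(n_grid, d=1.0 / n_grid)          # integer frequencies
    keep = np.abs(k) <= cutoff
    coeffs = np.where(keep, coeffs / gamma, 0.0)        # deconvolve + truncate
    return np.fft.ifft(coeffs).real
```

With no shifts (`gamma` identically one) and no noise, the estimator reproduces the common shape exactly, which is a useful sanity check before adding randomness.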

    Exact oracle inequality for a sharp adaptive kernel density estimator

    In one-dimensional density estimation on i.i.d. observations we suggest an adaptive cross-validation technique for the selection of a kernel estimator. This estimator is both asymptotically MISE-efficient with respect to the monotone oracle and sharp minimax-adaptive over the whole scale of Sobolev spaces with smoothness index greater than 1/2. The proof of the central concentration inequality avoids "chaining" and relies on an additive decomposition of the empirical processes involved.
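The abstract does not spell out its cross-validation criterion. As a generic illustration of selecting a kernel density estimator by cross-validation, classical least-squares cross-validation for the bandwidth of a Gaussian kernel looks like the following (a standard textbook method, not necessarily the paper's):

```python
import numpy as np

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def lscv_score(x, h):
    """Least-squares cross-validation criterion: an unbiased estimate,
    up to an additive constant, of the MISE of the kernel estimator."""
    n = len(x)
    d = x[:, None] - x[None, :]
    term1 = gauss(d, h * np.sqrt(2)).sum() / n**2        # integral of fhat^2
    offdiag = gauss(d, h).sum() - n * gauss(0.0, h)       # leave-one-out sums
    term2 = 2.0 * offdiag / (n * (n - 1))
    return term1 - term2

def select_bandwidth(x, grid):
    """Pick the bandwidth minimizing the LSCV criterion over a grid."""
    return min(grid, key=lambda h: lscv_score(x, h))
```

The paper's contribution is a sharper analysis of such a data-driven selector; the concentration inequality it proves controls exactly the empirical-process fluctuations of criteria like `lscv_score`.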

    Structure-Blind Signal Recovery

    We consider the problem of recovering a signal observed in Gaussian noise. If the set of signals is convex and compact, and can be specified beforehand, one can use classical linear estimators that achieve a risk within a constant factor of the minimax risk. However, when the set is unspecified, designing an estimator that is blind to the hidden structure of the signal remains a challenging problem. We propose a new family of estimators to recover signals observed in Gaussian noise. Instead of specifying the set where the signal lives, we assume the existence of a well-performing linear estimator. The proposed estimators enjoy exact oracle inequalities and can be computed efficiently through convex optimization. We present several numerical illustrations that show the potential of the approach.

    Nonparametric estimation of the volatility function in a high-frequency model corrupted by noise

    We consider the models Y_{i,n}=\int_0^{i/n} \sigma(s)dW_s+\tau(i/n)\epsilon_{i,n}, and \tilde Y_{i,n}=\sigma(i/n)W_{i/n}+\tau(i/n)\epsilon_{i,n}, i=1,...,n, where W_t denotes a standard Brownian motion and \epsilon_{i,n} are centered i.i.d. random variables with E(\epsilon_{i,n}^2)=1 and finite fourth moment. Furthermore, \sigma and \tau are unknown deterministic functions and W_t and (\epsilon_{1,n},...,\epsilon_{n,n}) are assumed to be independent processes. Based on a spectral decomposition of the covariance structures we derive series estimators for \sigma^2 and \tau^2 and investigate the rate of convergence of their MISE as a function of the smoothness of \sigma and \tau. To this end specific basis functions and their corresponding Sobolev ellipsoids are introduced, and we show that our estimators are optimal in the minimax sense. Our work is motivated by microstructure noise models. Our major finding is that the microstructure noise \epsilon_{i,n} introduces an additional degree of ill-posedness of 1/2, irrespective of the tail behavior of \epsilon_{i,n}. The method is illustrated by a small numerical study.
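A quick way to get a feel for the first model is to simulate it. The sketch below draws Y_{i,n} on the grid i/n, assuming Gaussian microstructure noise and approximating the Ito integral of the deterministic integrand by Gaussian increments with a midpoint evaluation of \sigma; the function name and these discretization choices are illustrative, not the paper's:

```python
import numpy as np

def simulate_noisy_hf(n, sigma, tau, rng=None):
    """Simulate Y_{i,n} = int_0^{i/n} sigma(s) dW_s + tau(i/n) eps_{i,n}
    for i = 1..n, with deterministic functions sigma and tau and
    standard Gaussian microstructure noise eps_{i,n}."""
    rng = np.random.default_rng(rng)
    t = np.arange(1, n + 1) / n
    # Ito integral of a deterministic integrand: independent Gaussian
    # increments with variance sigma(s)^2 / n on each subinterval,
    # with sigma evaluated at the subinterval midpoint.
    dW = rng.standard_normal(n) * np.sqrt(1.0 / n)
    X = np.cumsum(sigma(t - 0.5 / n) * dW)
    eps = rng.standard_normal(n)        # centered, unit variance, finite 4th moment
    return t, X + tau(t) * eps
```

Plotting a path makes the ill-posedness plausible: the noise \tau(i/n)\epsilon_{i,n} has variance of order one per observation, while the signal increments have variance of order 1/n, so the signal is buried at high frequencies.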

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
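The algorithm itself is simple to state: each iteration is a Landweber step followed by componentwise shrinkage. For p = 1 the shrinkage is soft thresholding, giving the iteration sketched below (a minimal sketch, with the operator assumed rescaled so its norm is below 1, as the convergence analysis requires):

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft thresholding -- the nonlinear shrinkage for p = 1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Landweber iteration with soft thresholding for the l^1-penalized
    problem  min_x ||A x - y||^2 + 2*alpha*||x||_1  (the p = 1 case).
    Assumes ||A|| < 1 so that the plain Landweber step is nonexpansive."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x), alpha)  # Landweber + shrink
    return x
```

For general 1 < p ≤ 2 the soft-thresholding map is replaced by the shrinkage function associated with the l^p penalty; the structure of the iteration is unchanged.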

    Deep Limits of Residual Neural Networks

    Neural networks have been very successful in many applications; we often, however, lack a theoretical understanding of what the neural networks are actually learning. This problem emerges when trying to generalise to new data sets. The contribution of this paper is to show that, for the residual neural network model, the deep layer limit coincides with a parameter estimation problem for a nonlinear ordinary differential equation. In particular, whilst it is known that the residual neural network model is a discretisation of an ordinary differential equation, we show convergence in a variational sense. This implies that optimal parameters converge in the deep layer limit. This is a stronger statement than saying for a fixed parameter the residual neural network model converges (the latter does not in general imply the former). Our variational analysis provides a discrete-to-continuum Γ-convergence result for the objective function of the residual neural network training step to a variational problem constrained by a system of ordinary differential equations; this rigorously connects the discrete setting to a continuum problem.
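The discretisation the abstract refers to is explicit: a residual block x ↦ x + h·f(x; θ) is one forward-Euler step of the ODE dx/dt = f(x; θ(t)), and the deep-layer limit sends the step size h = T/L to zero as the number of layers L grows. A toy sketch (assuming tanh residual blocks; this particular parametrisation is illustrative, not the paper's):

```python
import numpy as np

def resnet_forward(x, weights, biases, h):
    """Residual network as an explicit Euler discretisation of
    dx/dt = tanh(W(t) x + b(t)): each residual block adds h * f(x)."""
    for W, b in zip(weights, biases):
        x = x + h * np.tanh(W @ x + b)  # one residual block = one Euler step
    return x
```

As the layer count doubles (with h halved), the output of the forward pass stabilises, which is the pointwise convergence already known; the paper's Γ-convergence result is the stronger statement that the minimisers of the training objective converge too.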

    Diffusion equations and inverse problems regularization.

    The present thesis can be split into two different parts: The first part mainly deals with the porous and fast diffusion equations. Chapter 2 presents these equations in the Euclidean setting, highlighting the technical issues that arise when trying to extend results to a Riemannian setting. Chapter 3 is devoted to the construction of exhaustion and cut-off functions with controlled gradient and Laplacian, on manifolds with Ricci curvature bounded from below by a (possibly unbounded) nonpositive function of the distance from a fixed reference point, and without any assumptions on the topology or the injectivity radius. The cut-offs are then applied to the study of the fast and porous media diffusion, of L^q-properties of the gradient, and of the self-adjointness of Schrödinger-type operators. The second part is concerned with inverse problems regularization applied to image deblurring. In Chapter 5 new variants of the Tikhonov filter method, called fractional and weighted Tikhonov, are presented alongside their saturation properties and converse results on their convergence rates. New iterated fractional Tikhonov regularization methods are then introduced. In Chapter 6 the modified linearized Bregman algorithm is investigated. It is shown that the standard approach based on the block circulant with circulant blocks (BCCB) preconditioner may provide low-quality restored images, and different preconditioning strategies are then proposed which improve the quality of the restoration.
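For context, the filter methods that Chapter 5 modifies all act on the singular values of the blurring operator; the baseline is standard Tikhonov regularization, whose filter factors the fractional and weighted variants replace. A minimal SVD-based sketch of that baseline only (not the thesis's variants):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Standard Tikhonov regularization via the SVD: the regularized
    solution damps each singular component by the filter factor
    f_i = s_i^2 / (s_i^2 + alpha). Fractional and weighted Tikhonov
    change these filter factors; this is only the classical baseline."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha)               # Tikhonov filter factors
    return Vt.T @ (f * (U.T @ b) / s)       # filtered back-substitution
```

For a well-conditioned operator and alpha → 0 this reduces to the least-squares solution; for ill-conditioned deblurring operators, alpha trades fidelity against noise amplification in the small singular values.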