
    Improved Sobolev embeddings, profile decomposition, and concentration-compactness for fractional Sobolev spaces

    We obtain an improved Sobolev inequality in H^s spaces involving Morrey norms. This refinement yields a direct proof of the existence of optimizers and of the compactness up to symmetry of optimizing sequences for the usual Sobolev embedding. More generally, it allows us to derive an alternative, more transparent proof of the profile decomposition in H^s obtained in [P. Gérard, ESAIM 1998], using the abstract approach of dislocation spaces developed in [K. Tintarev & K. H. Fieseler, Imperial College Press 2007]. We also analyze directly the local defect of compactness of the Sobolev embedding in terms of measures, in the spirit of [P. L. Lions, Rev. Mat. Iberoamericana 1985]. As a model application, we study the asymptotic limit of a family of subcritical problems, obtaining concentration results for the corresponding optimizers that are well known when s is an integer ([O. Rey, Manuscripta Math. 1989; Z.-C. Han, Ann. Inst. H. Poincaré Anal. Non Linéaire 1991; K. S. Chou & D. Geng, Differential Integral Equations 2000]).
    Comment: 33 pages.
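    For orientation, refinements of this kind typically interpolate between the homogeneous Sobolev norm and a weaker Morrey-type norm. The display below is a schematic sketch of the shape of such an inequality, not the paper's exact statement; the interpolation exponent θ and the choice of Morrey norm are assumptions made here.

```latex
% Schematic refined Sobolev inequality (exponents indicative only):
% 0 < s < N/2, critical exponent 2^* = 2N/(N-2s), some \theta \in (0,1):
\[
  \|u\|_{L^{2^*}(\mathbb{R}^N)}
  \;\le\; C\,
  \|u\|_{\dot H^{s}(\mathbb{R}^N)}^{\theta}\,
  \|u\|_{\mathcal{M}}^{\,1-\theta},
  \qquad u \in \dot H^{s}(\mathbb{R}^N),
\]
% where \|\cdot\|_{\mathcal{M}} denotes a suitable Morrey norm. Since the
% Morrey norm is weaker, an optimizing sequence whose Morrey norm stays
% bounded away from zero cannot "vanish", which is the mechanism behind
% compactness up to symmetry.
```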

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion in an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
    Comment: 30 pages, 3 figures; this is version 2. Changes with respect to v1: small correction in the proof (but not the statement) of Lemma 3.15; the description of Besov spaces in the introduction and Appendix A clarified (and corrected); smaller point size (making 30 instead of 38 pages).
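    As a concrete illustration, here is a minimal Python sketch of the iteration for the p = 1 case, where the shrinkage step reduces to componentwise soft thresholding. The finite-dimensional setting, variable names, and toy data are illustrative assumptions; the operator is rescaled so that its norm is below 1, which is the condition under which the iteration converges.

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft thresholding: the p = 1 shrinkage operation."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_thresholding(K, y, alpha, n_iter=500):
    """Landweber iteration with soft thresholding (sketch).

    Targets a quadratic misfit ||K x - y||^2 plus an l^1 penalty with
    weight proportional to alpha. Assumes ||K|| < 1 (rescale otherwise).
    """
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        # Landweber step, then nonlinear shrinkage of the coefficients.
        x = soft_threshold(x + K.T @ (y - K @ x), alpha)
    return x

# Toy usage: recover a sparse vector from an underdetermined system.
rng = np.random.default_rng(0)
K = rng.standard_normal((50, 200))
K /= 1.01 * np.linalg.norm(K, 2)      # enforce ||K|| < 1
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
x_hat = iterative_thresholding(K, K @ x_true, alpha=1e-3)
```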

    Convergence rates for variational regularization of inverse problems in exponential families

    We consider inverse problems with statistical noise. By using regularization methods one can approximate the true solution of the inverse problem by a regularized solution. The previous investigation of convergence rates for variational regularization with Poisson and empirical-process data is shown to be suboptimal. In this thesis we obtain improved convergence rates for variational regularization methods for nonlinear ill-posed inverse problems with certain stochastic noise models described by exponential families, and we derive better reconstruction error bounds by applying deviation inequalities for stochastic processes in suitable function spaces. Furthermore, we consider the iteratively regularized Newton method as an alternative when the operator is nonlinear. Due to the difficulty of deriving suitable deviation inequalities for stochastic processes in some function spaces, we are currently not able to obtain optimal convergence rates for variational regularization, so we state the desired result as a conjecture. If our conjecture holds true, the desired rates follow immediately.
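    To fix ideas, variational (Tikhonov-type) regularization estimators of the kind studied here have the following generic form. The notation below is a schematic sketch, not the thesis's own: S denotes a data-fidelity term, naturally the negative log-likelihood of the exponential-family noise model, and R a convex penalty.

```latex
% Generic variational regularization estimator (schematic notation):
\[
  \widehat{u}_{\alpha}
  \;\in\;
  \operatorname*{arg\,min}_{u \in \mathcal{U}}
  \Big\{ \, \mathcal{S}\big(F(u);\, Y_{\mathrm{obs}}\big)
        \;+\; \alpha \, \mathcal{R}(u) \Big\},
  \qquad \alpha > 0,
\]
% where F is the (possibly nonlinear) forward operator and Y_obs the
% noisy data. A convergence-rate result bounds the reconstruction error
% of \widehat{u}_\alpha in terms of the noise level, for a suitable
% a priori or a posteriori choice of \alpha.
```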

    Besov regularity for operator equations on patchwise smooth manifolds

    We study regularity properties of solutions to operator equations on patchwise smooth manifolds $\partial\Omega$ such as, e.g., boundaries of polyhedral domains $\Omega \subset \mathbb{R}^3$. Using suitable biorthogonal wavelet bases $\Psi$, we introduce a new class of Besov-type spaces $B_{\Psi,q}^{\alpha}(L_p(\partial\Omega))$ of functions $u \colon \partial\Omega \rightarrow \mathbb{C}$. Special attention is paid to the rate of convergence of best $n$-term wavelet approximation to functions in these scales, since this determines the performance of adaptive numerical schemes. We show embeddings of (weighted) Sobolev spaces on $\partial\Omega$ into $B_{\Psi,\tau}^{\alpha}(L_\tau(\partial\Omega))$, $1/\tau = \alpha/2 + 1/2$, which lead us to regularity assertions for the equations under consideration. Finally, we apply our results to a boundary integral equation of the second kind which arises from the double-layer ansatz for Dirichlet problems for Laplace's equation in $\Omega$.
    Comment: 42 pages, 3 figures, updated after peer review. Preprint: Bericht Mathematik Nr. 2013-03 des Fachbereichs Mathematik und Informatik, Universität Marburg. To appear in J. Found. Comput. Math.
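    The relevance of these spaces for adaptive schemes comes from the standard link with best $n$-term approximation, sketched below in the present two-dimensional boundary setting; measuring the error in $L_2(\partial\Omega)$ is an assumption made here for illustration.

```latex
% Error of best n-term wavelet approximation (schematic):
\[
  \sigma_n(u) \;:=\;
  \inf_{\substack{\Lambda \subset \nabla,\; \#\Lambda \le n \\ (c_\lambda)}}
  \Big\| \, u - \sum_{\lambda \in \Lambda} c_\lambda \psi_\lambda \,
  \Big\|_{L_2(\partial\Omega)} .
\]
% Since \partial\Omega is two-dimensional, membership of u in the scale
% B^{\alpha}_{\Psi,\tau}(L_\tau(\partial\Omega)), 1/\tau = \alpha/2 + 1/2,
% corresponds to the decay \sigma_n(u) = O(n^{-\alpha/2}), i.e. the rate
% an ideal adaptive wavelet scheme can realize.
```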

    Adaptive wavelet methods for a class of stochastic partial differential equations

    An abstract interpretation of Rothe's method for the discretization of evolution equations is derived. The error propagation is analyzed, and conditions on the tolerances are proven which ensure convergence in the case of inexact operator evaluations. Substantiating the abstract analysis, the linearly implicit Euler scheme on a uniform time discretization is applied to a class of semi-linear parabolic stochastic partial differential equations. Using the existence of asymptotically optimal adaptive solvers for the elliptic subproblems, sufficient conditions for convergence, with corresponding convergence orders, are shown also in the case of inexact operator evaluations. Upper complexity bounds are proven in the deterministic case. The stochastic Poisson equation with random right-hand sides is used as the model equation for the elliptic subproblems. The random right-hand sides are introduced based on wavelet decompositions and a stochastic model that, as is shown, provides explicit regularity control of their realizations and induces sparsity of the wavelet coefficients. For this class of equations, upper error bounds for best N-term wavelet approximation on different bounded domains are proven. They show that the use of nonlinear (adaptive) methods over uniform linear methods is justified whenever sparsity is present, which in particular holds true on Lipschitz domains in two or three dimensions. By providing sparse variants of general Gaussian random functions, the class of random functions derived from the stochastic model is of interest in its own right. The regularity of the random functions is analyzed in certain smoothness spaces, and linear and nonlinear approximation results are proven which clarify their applicability for numerical experiments.
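    For orientation, one step of the linearly implicit Euler scheme within Rothe's method reduces the evolution problem to an elliptic subproblem per time step. The display below is a schematic deterministic sketch; the notation and the omission of the stochastic forcing are simplifying assumptions made here.

```latex
% One linearly implicit Euler step for u' + A u = f(u) (schematic):
% given the approximation u_k at time t_k and uniform step size \tau,
% solve the elliptic problem
\[
  (I + \tau A)\, u_{k+1} \;=\; u_k + \tau\, f(u_k)
\]
% inexactly, by an asymptotically optimal adaptive wavelet solver, up to
% a tolerance \varepsilon_k; the tolerance conditions of the abstract
% analysis ensure that these inexact solves preserve the convergence
% order of the time discretization.
```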

    Tensor-Sparsity of Solutions to High-Dimensional Elliptic Partial Differential Equations

    A recurring theme in attempts to break the curse of dimensionality in the numerical approximation of solutions to high-dimensional partial differential equations (PDEs) is to employ some form of sparse tensor approximation. Unfortunately, there are only a few results that quantify the possible advantages of such an approach. This paper introduces a class $\Sigma_n$ of functions, which can be written as a sum of rank-one tensors using a total of at most $n$ parameters, and then uses this notion of sparsity to prove a regularity theorem for certain high-dimensional elliptic PDEs. It is shown, among other results, that whenever the right-hand side $f$ of the elliptic PDE can be approximated with a certain rate $\mathcal{O}(n^{-r})$ in the norm of ${\mathrm H}^{-1}$ by elements of $\Sigma_n$, then the solution $u$ can be approximated in ${\mathrm H}^{1}$ from $\Sigma_n$ to accuracy $\mathcal{O}(n^{-r'})$ for any $r' \in (0,r)$. Since these results require knowledge of the eigenbasis of the elliptic operator considered, we propose a second, "basis-free" model of tensor sparsity and prove a regularity theorem for this second sparsity model as well. We then proceed to address the important question of the extent to which such regularity theorems translate into results on computational complexity. It is shown how this second model can be used to derive computational algorithms with performance that breaks the curse of dimensionality on certain model high-dimensional elliptic PDEs with tensor-sparse data.
    Comment: 41 pages, 1 figure.
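    For concreteness, the sparsity class can be sketched as follows; the display is schematic, and the exact way the parameters of the rank-one factors are counted is the paper's, not reproduced here.

```latex
% Rank-one tensor sparsity class (schematic; parameter count indicative):
\[
  \Sigma_n \;=\;
  \Big\{ \, v \;=\; \sum_{k=1}^{K} g_k^{(1)} \otimes \cdots \otimes g_k^{(d)}
  \;:\; \text{the factors } g_k^{(j)} \text{ use at most } n
  \text{ parameters in total} \Big\}.
\]
% The regularity theorem then transfers approximability of the data f
% from \Sigma_n in H^{-1} at rate n^{-r} to approximability of the
% solution u from \Sigma_n in H^{1} at any rate n^{-r'} with r' < r.
```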