
Anisotropic smoothness classes: from finite element approximation to image models

We propose and study quantitative measures of smoothness which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles; the simplest examples of such quantities are based on the determinant of the Hessian of the function to be approximated. Since they are not semi-norms, these quantities cannot be used to define linear function spaces. We show that they can be well defined by mollification when the function to be approximated has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the geometric smoothness of the edges.
Comment: 24 pages, 2 figures
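As an illustration (our own sketch, not code from the paper), the function below computes a Hessian-determinant smoothness measure for a discretely sampled image: the pointwise density is $\sqrt{|\det d^2 f|}$, integrated in $L^\tau$ with $1/\tau = 1/p + 1$, the exponent suggested by the adaptive approximation theory; the function name and finite-difference discretization are assumptions.

```python
import numpy as np

def hessian_determinant_measure(f, h=1.0, p=2.0):
    """Hypothetical discrete analogue of a Hessian-based smoothness
    measure: the L^tau norm of sqrt(|det d^2 f|) on a uniform grid
    with spacing h, where 1/tau = 1/p + 1."""
    # Second derivatives by repeated centred finite differences.
    fy, fx = np.gradient(f, h)
    fyy, fyx = np.gradient(fy, h)
    fxy, fxx = np.gradient(fx, h)
    det = fxx * fyy - fxy * fyx          # determinant of the Hessian
    density = np.sqrt(np.abs(det))       # pointwise density
    tau = 1.0 / (1.0 / p + 1.0)
    return (np.sum(density ** tau) * h * h) ** (1.0 / tau)

# Usage: a smooth Gaussian bump sampled on [0,1]^2.
x = np.linspace(0.0, 1.0, 128)
X, Y = np.meshgrid(x, x)
f = np.exp(-10.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
print(hessian_determinant_measure(f, h=x[1] - x[0]))
```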

    Approximation and learning by greedy algorithms

We consider the problem of approximating a given element $f$ from a Hilbert space $\mathcal{H}$ by means of greedy algorithms, and the application of such procedures to the regression problem in statistical learning theory. We improve on the existing theory of convergence rates for both the orthogonal greedy algorithm and the relaxed greedy algorithm, as well as for the forward stepwise projection algorithm. For all these algorithms, we prove convergence results for a variety of function classes, not simply those related to the convex hull of the dictionary. We then show how these bounds on convergence rates lead to a new theory for the performance of greedy algorithms in learning. In particular, we build upon the results in [IEEE Trans. Inform. Theory 42 (1996) 2118--2132] to construct learning algorithms based on greedy approximations which are universally consistent and provide provable convergence rates for large classes of functions. The use of greedy algorithms in the context of learning is very appealing since it greatly reduces the computational burden when compared with standard model selection using general dictionaries.
Comment: Published at http://dx.doi.org/10.1214/009053607000000631 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
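For concreteness, here is a minimal sketch (our construction, in the spirit of the paper rather than its notation) of the orthogonal greedy algorithm over a finite dictionary in $\mathbb{R}^m$, standing in for the Hilbert space $\mathcal{H}$:

```python
import numpy as np

def orthogonal_greedy(f, dictionary, n_steps):
    """Orthogonal greedy algorithm (orthogonal matching pursuit).
    dictionary: (m, K) array whose columns are unit-norm elements.
    Returns the n_steps-term approximant of f and the selected indices."""
    residual = f.copy()
    selected = []
    for _ in range(n_steps):
        # Greedy step: pick the element most correlated with the residual.
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        if k not in selected:
            selected.append(k)
        # Orthogonal step: re-project f onto the span of the selection.
        G = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(G, f, rcond=None)
        residual = f - G @ coeffs
    return f - residual, selected

# Usage: recover a 3-term combination from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)
target = D[:, [3, 17, 90]] @ np.array([1.0, -2.0, 0.5])
approx, picked = orthogonal_greedy(target, D, n_steps=3)
print(picked, np.linalg.norm(target - approx))
```

The orthogonal reprojection at each step is what distinguishes this variant from the pure and relaxed greedy algorithms mentioned in the abstract.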

    Future Prospects: Deep Imaging of Galaxy Outskirts using Telescopes Large and Small

The Universe is almost totally unexplored at low surface brightness levels. In spite of great progress in the construction of large telescopes and improvements in the sensitivity of detectors, the limiting surface brightness of imaging observations has remained static for about forty years. Recent technical advances have at last begun to erode the barriers preventing progress. In this Chapter we describe the technical challenges to low surface brightness imaging, outline some solutions, and highlight some relevant observations that have been undertaken recently with both large and small telescopes. Our main focus is on discoveries made with the Dragonfly Telephoto Array (Dragonfly), a new telescope concept designed to probe the Universe down to hitherto unprecedented low surface brightness levels. We conclude by arguing that these discoveries are probably only scratching the surface of interesting phenomena that are observable when the Universe is explored at low surface brightness levels.
Comment: 27 pages, 10 figures, invited review, book chapter in "Outskirts of Galaxies", Eds. J. H. Knapen, J. C. Lee and A. Gil de Paz, Astrophysics and Space Science Library, Springer, in press

    Precision Tests of the Standard Model

30 pages, 11 figures, 11 tables. Contribution presented at the 25th Winter Meeting on Fundamental Physics, held 3-8 March 1997 in Formigal (Spain). Precision measurements of electroweak observables provide stringent tests of the Standard Model structure and an accurate determination of its parameters. An overview of the present experimental status is presented. This work has been supported in part by CICYT (Spain) under grant No. AEN-96-1718. Peer reviewed

    Sparse Deterministic Approximation of Bayesian Inverse Problems

We present a parametric deterministic formulation of Bayesian inverse problems with input parameter from infinite-dimensional, separable Banach spaces. In this formulation, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown, parametric deterministic coefficients from noisy observations comprising linear functionals of the solution. We prove a generalized polynomial chaos representation of the posterior density with respect to the prior measure, given noisy observational data. We analyze the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence. To this end, we estimate the fluctuations in the prior. We exhibit sufficient conditions on the prior model in order for approximations of the posterior density to converge at a given algebraic rate, in terms of the number $N$ of unknowns appearing in the parametric representation of the prior measure. Similar sparsity and approximation results are also exhibited for the solution and covariance of the elliptic partial differential equation under the posterior. These results then form the basis for efficient uncertainty quantification in the presence of data with noise.
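As a toy illustration of this setup (our own construction, not the paper's algorithm), the sketch below solves a one-dimensional parametric elliptic problem with a hypothetical affine coefficient expansion and evaluates the unnormalized posterior density $\Theta(y) \propto \exp(-|\delta - \mathcal{O}(u(y))|^2/(2\sigma^2))$ with respect to the prior:

```python
import numpy as np

def solve_elliptic(y, n=100):
    """Solve -(a(x,y) u')' = 1 on (0,1) with u(0)=u(1)=0 by finite
    differences, for an assumed affine-parametric coefficient
    a(x, y) = 1 + sum_j y_j sin((j+1) pi x) / (j+1)^2."""
    x = np.linspace(0.0, 1.0, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])                  # cell midpoints
    a = 1.0 + sum(yj * np.sin((j + 1) * np.pi * xm) / (j + 1) ** 2
                  for j, yj in enumerate(y))
    h = 1.0 / n
    # Tridiagonal stiffness matrix on the interior nodes.
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n - 1))

def posterior_density_wrt_prior(y, obs_idx, delta, sigma=0.01):
    """Unnormalized density of the posterior with respect to the
    prior, for Gaussian observation noise of variance sigma^2."""
    misfit = delta - solve_elliptic(y)[obs_idx]
    return np.exp(-0.5 * np.dot(misfit, misfit) / sigma**2)

# Usage: synthetic point observations from a "true" parameter.
y_true = [0.4, -0.2, 0.1]
obs_idx = [24, 49, 74]
delta = solve_elliptic(y_true)[obs_idx]
print(posterior_density_wrt_prior(y_true, obs_idx, delta))          # near 1
print(posterior_density_wrt_prior([0.0, 0.0, 0.0], obs_idx, delta)) # small
```

A sparse polynomial chaos approximation, as studied in the paper, would then approximate $y \mapsto \Theta(y)$ (and posterior expectations) by a polynomial expansion with $N$ adaptively chosen terms.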

    A general wavelet-based profile decomposition in the critical embedding of function spaces

We characterize the lack of compactness in the critical embedding of function spaces $X\subset Y$ having similar scaling properties in the following terms: a sequence $(u_n)_{n\geq 0}$ bounded in $X$ has a subsequence that can be expressed as a finite sum of translations and dilations of functions $(\phi_l)_{l>0}$ such that the remainder converges to zero in $Y$ as the number of functions in the sum and $n$ tend to $+\infty$. Such a decomposition was established by Gérard for the embedding of the homogeneous Sobolev space $X=\dot H^s$ into $Y=L^p$ in $d$ dimensions with $0<s=d/2-d/p$, and then generalized by Jaffard to the case where $X$ is a Riesz potential space, using wavelet expansions. In this paper, we revisit the wavelet-based profile decomposition in order to treat a larger range of examples of critical embeddings in a hopefully simplified way. In particular, we identify two generic properties of the spaces $X$ and $Y$ that are of key use in building the profile decomposition. These properties may then easily be checked for typical choices of $X$ and $Y$ satisfying critical embedding properties, including Sobolev, Besov, Triebel-Lizorkin, Lorentz, Hölder and BMO spaces.
Comment: 24 pages
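In the model case treated by Gérard, with $X=\dot H^s$, $Y=L^p$ and $s=d/2-d/p$, the decomposition takes the following form (written here up to normalization conventions, as a hedged transcription rather than a quotation of the paper):

$$u_n(x) = \sum_{l=1}^{L} h_{l,n}^{\,s-d/2}\, \phi_l\!\left(\frac{x - x_{l,n}}{h_{l,n}}\right) + r_n^{L}(x), \qquad \lim_{L\to+\infty}\ \limsup_{n\to+\infty}\ \|r_n^{L}\|_{L^p} = 0.$$

The prefactor $h^{\,s-d/2}$ makes each profile term invariant in the $\dot H^s$ norm, and the relation $s=d/2-d/p$ makes it invariant in $L^p$ as well, which is why compactness can fail only through such translations and dilations.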

Error Bounds for Gaussian Quadrature and Weighted-L1 Polynomial Approximation

Error bounds for Gaussian quadrature are given in terms of the number of quadrature points and smoothness properties of the function whose integral is being approximated. An intermediate step involves a weighted-L1 polynomial approximation problem, which is treated in a more general context than that specifically required to bound the Gaussian quadrature error.
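To see the dependence on the number of quadrature points empirically, here is a small sketch (names ours) using Gauss-Legendre quadrature: an $n$-point rule integrates polynomials of degree $\le 2n-1$ exactly, so for smooth integrands the error decays rapidly with $n$.

```python
import numpy as np

def gauss_legendre_error(f, exact, n):
    """Absolute error of the n-point Gauss-Legendre rule for the
    integral of f over [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return abs(weights @ f(nodes) - exact)

# Smooth integrand: the integral of cos over [-1, 1] is 2 sin(1).
for n in (2, 4, 8, 16):
    print(n, gauss_legendre_error(np.cos, 2.0 * np.sin(1.0), n))
```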