
    Detection of Edges in Spectral Data II. Nonlinear Enhancement

    We discuss a general framework for recovering edges in piecewise smooth functions with finitely many jump discontinuities, where $[f](x) := f(x^+) - f(x^-) \neq 0$. Our approach is based on two main aspects: localization using appropriate concentration kernels and separation of scales by nonlinear enhancement. To detect such edges, one employs concentration kernels, $K_\epsilon(\cdot)$, depending on the small scale $\epsilon$. It is shown that odd kernels, properly scaled, and admissible (in the sense of having small $W^{-1,\infty}$-moments of order ${\cal O}(\epsilon)$) satisfy $K_\epsilon * f(x) = [f](x) + {\cal O}(\epsilon)$, thus recovering both the location and amplitudes of all edges. As an example we consider general concentration kernels of the form $K^\sigma_N(t) = \sum \sigma(k/N)\sin kt$ to detect edges from the first $1/\epsilon = N$ spectral modes of piecewise smooth $f$'s. Here we improve in generality and simplicity over our previous study in [A. Gelb and E. Tadmor, Appl. Comput. Harmon. Anal., 7 (1999), pp. 101-135]. Both periodic and nonperiodic spectral projections are considered. We identify, in particular, a new family of exponential factors, $\sigma^{\exp}(\cdot)$, with superior localization properties. The other aspect of our edge detection involves a nonlinear enhancement procedure which is based on separation of scales between the edges, where $K_\epsilon * f(x) \sim [f](x) \neq 0$, and the smooth regions, where $K_\epsilon * f = {\cal O}(\epsilon) \sim 0$. Numerical examples demonstrate that by coupling concentration kernels with nonlinear enhancement one arrives at effective edge detectors.
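    As a rough illustration of the concentration-kernel mechanism described above, the sketch below recovers the jumps of a unit step from its first $N$ Fourier modes by forming the concentration sum $i\sum_{0<|k|\le N}\mathrm{sgn}(k)\,\sigma(|k|/N)\,\hat f_k e^{ikx}$. The function name, the test function, and the simple first-order factor $\sigma(\xi)=\pi\xi$ are illustrative choices (sign and normalization conventions are ours), not the paper's exponential factors.

```python
import numpy as np

def jump_approximation(fhat, x, sigma):
    """Concentration-sum approximation of the jump function [f](x) from the
    Fourier modes k = -N..N:
        [f]_N(x) = i * sum_{0 < |k| <= N} sgn(k) * sigma(|k|/N) * fhat_k * exp(i k x).
    Sign/normalization conventions are chosen here so that a unit upward jump
    is recovered as +1.  fhat[j] holds the coefficient for k = j - N."""
    N = (len(fhat) - 1) // 2
    ks = np.arange(-N, N + 1)
    weights = 1j * np.sign(ks) * sigma(np.abs(ks) / N)
    return np.real(np.exp(1j * np.outer(x, ks)) @ (weights * fhat))

# Unit step on (-pi, pi): f = 0 on (-pi, 0) and f = 1 on (0, pi),
# so [f] = +1 at x = 0 and [f] = -1 at x = +-pi.
N = 64
ks = np.arange(-N, N + 1)
fhat = np.zeros(2 * N + 1, dtype=complex)
odd = ks % 2 != 0
fhat[odd] = 1.0 / (np.pi * 1j * ks[odd])   # Fourier coefficients of the step
fhat[N] = 0.5                              # k = 0: mean value

# A simple first-order concentration factor; other admissible factors
# (e.g. exponential ones) would slot in here with their own normalization.
sigma = lambda s: np.pi * s

x = np.linspace(-np.pi, np.pi, 9)
print(np.round(jump_approximation(fhat, x, sigma), 3))
# expected: about -1 at x = -pi, +1 at x = 0, -1 at x = pi, and ~0 in between
```

    Replacing $\sigma$ by one of the paper's exponential factors $\sigma^{\exp}$ keeps the same skeleton; per the abstract, those factors localize the response around each edge more sharply.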

    Needlet algorithms for estimation in inverse problems

    We provide a new algorithm for the treatment of inverse problems which combines the traditional SVD inversion with an appropriate thresholding technique in a well-chosen new basis. Our goal is to devise an inversion procedure which has the advantages of localization and multiscale analysis of wavelet representations without losing the stability and computability of the SVD decompositions. To this end we utilize the construction of localized frames (termed "needlets") built upon the SVD bases. We consider two different situations: the "wavelet" scenario, where the needlets are assumed to behave similarly to true wavelets, and the "Jacobi-type" scenario, where we assume that the properties of the frame truly depend on the SVD basis at hand (hence on the operator). To illustrate each situation, we apply the estimation algorithm respectively to the deconvolution problem and to the Wicksell problem. In the latter case, where the SVD basis is a Jacobi polynomial basis, we show that our scheme achieves rates of convergence which are optimal in the $L_2$ case; we obtain rates of convergence for other $L_p$ norms which are, to the best of our knowledge, new in the literature; and we give a simulation study showing that the NEED-D estimator outperforms other standard algorithms in almost all situations. Comment: Published at http://dx.doi.org/10.1214/07-EJS014 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
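    A minimal sketch of the "SVD inversion plus thresholding" skeleton follows, with the simplification (ours, not the paper's) that hard thresholding is applied directly to the SVD coefficients rather than to coefficients of a needlet frame built on the SVD basis; the operator, noise level, and threshold are illustrative.

```python
import numpy as np

def svd_threshold_estimate(K, y, tau):
    """Sketch of 'SVD inversion + thresholding': expand the data in the SVD
    basis of K, keep only coefficients observed above the threshold tau, and
    invert the surviving singular values.  (The NEED-D estimator instead
    re-expands the inverted coefficients in a needlet frame built on the SVD
    basis and thresholds those frame coefficients.)"""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coef = U.T @ y                       # data coefficients in the SVD basis
    keep = np.abs(coef) > tau            # hard threshold against the noise level
    beta = np.zeros_like(coef)
    beta[keep] = coef[keep] / s[keep]    # invert only what was reliably observed
    return Vt.T @ beta

# Toy deconvolution: K blurs a piecewise-constant signal with a Gaussian kernel.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0, 1, n)
f = (x > 0.3).astype(float) - (x > 0.7)           # unknown signal
kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.03) ** 2)
K = kernel / kernel.sum(axis=1, keepdims=True)    # smoothing operator (ill-posed to invert)
y = K @ f + 0.01 * rng.standard_normal(n)         # noisy, blurred observation

f_hat = svd_threshold_estimate(K, y, tau=0.05)
print("relative L2 error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```

    The needlet re-expansion is precisely what the plain SVD basis lacks here: per the abstract, it restores localization and multiscale structure without giving up the stability and computability of the SVD.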

    Some remarks on filtered polynomial interpolation at Chebyshev nodes

    The present paper concerns filtered de la Vallée Poussin (VP) interpolation at the Chebyshev nodes of the four kinds. This approximation model is interesting for applications because it combines the advantages of classical Lagrange polynomial approximation (interpolation and polynomial preservation) with those of filtered approximation (uniform boundedness of the Lebesgue constants and reduction of the Gibbs phenomenon). Here we focus on some additional features that are useful in applications of filtered VP interpolation. In particular, we analyze the simultaneous approximation provided by the derivatives of the VP interpolation polynomials. Moreover, we state the uniform boundedness of VP approximation operators in some Sobolev and Hölder-Zygmund spaces where several integro-differential models are uniquely and stably solvable.
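    To make the filtering idea concrete, here is a hedged sketch (not the paper's VP interpolation operator, and in particular not interpolatory): the coefficients of the degree-$n$ Chebyshev interpolant at first-kind nodes are damped by a trapezoidal, de la Vallée Poussin style filter, which typically tames the Gibbs overshoot of the plain interpolant. The parameters $n$, $m$ and the test function are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def plain_and_filtered(f, n, m):
    """Interpolate f at the n+1 Chebyshev points of the first kind, then damp
    the Chebyshev coefficients with a trapezoidal (de la Vallee Poussin style)
    filter: 1 up to degree n-m, decaying linearly to 0 at degree n."""
    coeffs = C.chebinterpolate(f, n)               # degree-n interpolant coefficients
    k = np.arange(n + 1)
    filt = np.clip((n - k) / m, 0.0, 1.0)          # VP-type trapezoidal filter
    return C.Chebyshev(coeffs), C.Chebyshev(filt * coeffs)

# A jump at x = 0.2 triggers Gibbs oscillations in the plain interpolant.
f = lambda x: np.sign(x - 0.2)
plain, filtered = plain_and_filtered(f, n=100, m=40)

xs = np.linspace(-1, 1, 4001)
print("max of plain interpolant    :", plain(xs).max())      # Gibbs overshoot well above 1
print("max of filtered approximant :", filtered(xs).max())    # typically much closer to 1
```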

    The Kernel Polynomial Method

    Efficient and stable algorithms for the calculation of spectral quantities and correlation functions are some of the key tools in computational condensed matter physics. In this article we review basic properties and recent developments of Chebyshev-expansion-based algorithms and the Kernel Polynomial Method. Characterized by a resource consumption that scales linearly with the problem dimension, these methods have enjoyed growing popularity over the last decade and have found broad application, not only in physics. We discuss in detail representative examples from the fields of disordered systems, strongly correlated electrons, electron-phonon interaction, and quantum spin systems. In addition, we illustrate how the Kernel Polynomial Method is successfully embedded into other numerical techniques, such as Cluster Perturbation Theory or Monte Carlo simulation. Comment: 32 pages, 17 figures; revised version.
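    The following is a self-contained, dense-matrix sketch of the KPM workflow reviewed in the article: stochastic Chebyshev moments from matrix-vector products, Jackson-kernel damping, and reconstruction of the spectral density. The test Hamiltonian (a 1D tight-binding chain) and all parameter values are illustrative; for large sparse problems the dense eigenvalue bounds below would be replaced by cheap spectral estimates.

```python
import numpy as np

def kpm_dos(H, num_moments=200, num_vectors=10, num_points=400, eps=0.01, rng=None):
    """Kernel Polynomial Method sketch: estimate the density of states of H from
    stochastic Chebyshev moments damped with the Jackson kernel.  Only
    matrix-vector products with H enter the moment recursion, which is what
    gives the linear scaling with the number of nonzero matrix elements."""
    rng = np.random.default_rng() if rng is None else rng
    dim = H.shape[0]

    # 1. Rescale the spectrum of H into (-1, 1).
    evals = np.linalg.eigvalsh(H)                   # for large sparse H: Lanczos bounds instead
    a = (evals[-1] - evals[0]) / (2.0 - eps)
    b = (evals[-1] + evals[0]) / 2.0
    Ht = (H - b * np.eye(dim)) / a

    # 2. Stochastic Chebyshev moments mu_n ~ Tr[T_n(Ht)] / dim.
    mu = np.zeros(num_moments)
    for _ in range(num_vectors):
        r = rng.choice([-1.0, 1.0], size=dim)       # random +/-1 vector for the stochastic trace
        v_prev, v_curr = r, Ht @ r
        mu[0] += r @ v_prev
        mu[1] += r @ v_curr
        for m in range(2, num_moments):
            v_prev, v_curr = v_curr, 2.0 * (Ht @ v_curr) - v_prev   # T_m = 2x T_{m-1} - T_{m-2}
            mu[m] += r @ v_curr
    mu /= num_vectors * dim

    # 3. Jackson kernel damping (suppresses Gibbs oscillations of the truncated series).
    N = num_moments
    n = np.arange(N)
    g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
         + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

    # 4. Reconstruct the density on Chebyshev points and map back to energies.
    xk = np.cos(np.pi * (np.arange(num_points) + 0.5) / num_points)
    Tn = np.cos(np.outer(n, np.arccos(xk)))         # T_n(x) on the grid
    rho = (g[0] * mu[0] + 2.0 * (g[1:] * mu[1:]) @ Tn[1:]) / (np.pi * np.sqrt(1.0 - xk**2))
    return a * xk + b, rho / a                      # energies, density of states

# Test: 1D tight-binding chain (nearest-neighbour hopping); its density of
# states has the characteristic 1/sqrt singularities at the band edges E = +-2.
dim = 400
H = np.diag(np.ones(dim - 1), 1) + np.diag(np.ones(dim - 1), -1)
E, dos = kpm_dos(H)
order = np.argsort(E)
print("DOS integrates to ~1:", np.trapz(dos[order], E[order]))
```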

    NEED-VD: a second-generation wavelet algorithm for estimation in inverse problems

    We provide a new algorithm for the treatment of inverse problems which combines the traditional SVD inversion with an appropriate thresholding technique in a well-chosen new basis. Our goal is to devise an inversion procedure which has the advantages of localization and multiscale analysis of wavelet representations without losing the stability and computability of the SVD decompositions. To this end we utilize the construction of localized frames (termed "needlets") built upon the SVD bases. We consider two different situations: the "wavelet" scenario, where the needlets are assumed to behave similarly to true wavelets, and the "Jacobi-type" scenario, where we assume that the properties of the frame truly depend on the SVD basis at hand (hence on the operator). To illustrate each situation, we apply the estimation algorithm respectively to the deconvolution problem and to the Wicksell problem. In the latter case, where the SVD basis is a Jacobi polynomial basis, we show that our scheme achieves rates of convergence which are optimal in the $L_2$ case; we obtain rates of convergence for other $L_p$ norms which are, to the best of our knowledge, new in the literature; and we give a simulation study showing that the NEED-VD estimator outperforms other standard algorithms in almost all situations.

    Applications of classical approximation theory to periodic basis function networks and computational harmonic analysis

    In this paper, we describe a novel approach to classical approximation theory of periodic univariate and multivariate functions by trigonometric polynomials. While classical wisdom holds that such approximation is too sensitive to the lack of smoothness of the target functions at isolated points, our constructions show how to overcome this problem. We describe applications to approximation by periodic basis function networks, and indicate further research in the direction of Jacobi expansions and approximation on the Euclidean sphere. While the paper is mainly intended to be a survey of our recent research in these directions, several results are proved here for the first time.

    Wavelet and Multiscale Methods

    Various scientific models demand finer and finer resolutions of relevant features. Paradoxically, increasing computational power serves to even heighten this demand. Namely, the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information lead to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods as well as the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples lie in multigrid methods and adaptive grid solvers for partial differential equations. These have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are: regression techniques in nonparametric statistical estimation; the design of universal estimators in the context of mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory, compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of fully populated matrices arising from boundary integral equations with the aid of multipole expansions and hierarchical matrices; and attacking problems in high spatial dimensions by sparse grid or hyperbolic wavelet concepts. This workshop proposed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.