Asymptotic Analysis of the SVD for the Truncated Hilbert Transform with Overlap
The truncated Hilbert transform with overlap, H_T, is an operator that arises in tomographic reconstruction from limited data, more precisely in the method of differentiated back-projection. Recent work [R. Al-Aifari and A. Katsevich, SIAM J. Math. Anal., 46 (2014), pp. 192-213] has shown that the singular values of this operator accumulate at both zero and one. To better understand the properties of the operator and, in particular, the ill-posedness of the inverse problem associated with it, it is of interest to know the rates at which the singular values approach zero and one. In this paper, we exploit the property that H_T commutes with a second-order differential operator L_S, together with the global asymptotic behavior of the eigenfunctions of L_S, to find the asymptotics of the singular values and singular functions of H_T.
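The accumulation of singular values at both zero and one can be observed on a crude discretization. The sketch below is illustrative only (the intervals, grid size, and quadrature are not taken from the paper): the support is sampled on [0, 2] and the measurement interval on [1, 3] with half-sample-offset midpoint grids, so the principal-value singularity is never hit exactly.

```python
import numpy as np

# Support [0, 2] and measurement interval [1, 3]; the overlap is [1, 2].
N = 200                      # samples per interval (illustrative choice)
h = 2.0 / N                  # grid spacing
j = np.arange(N)             # support sample index:   y_j = j * h
i = np.arange(N)             # measurement index:      x_i = 1 + (i + 0.5) * h

# Midpoint quadrature of H_T f(x) = (1/pi) PV int f(y) / (y - x) dy.
# Since y_j - x_i = (j - i - N/2 - 0.5) * h is never zero on this
# staggered grid, no explicit principal-value treatment is needed.
M = 1.0 / (np.pi * (j[None, :] - i[:, None] - N // 2 - 0.5))

s = np.linalg.svd(M, compute_uv=False)
print(s.max(), s.min())      # largest close to 1, smallest close to 0
```

On this grid the matrix is a finite section of the half-sample discrete Hilbert transform, whose symbol has magnitude one, so all singular values lie in [0, 1]; they cluster near 1 (roughly corresponding to the overlap region) and near 0 (the smoothing, non-overlapping part), mirroring the accumulation result quoted above.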
Stability estimates for the regularized inversion of the truncated Hilbert transform
In limited-data computerized tomography, the 2D or 3D problem can be reduced
to a family of 1D problems using the differentiated backprojection (DBP)
method. Each 1D problem consists of recovering a function compactly supported
on a finite interval from its partial Hilbert transform data. When the Hilbert
transform is measured on a finite interval that only overlaps, but does not
cover, the support, this inversion problem is known to be severely ill-posed [1].
In this paper, we study the reconstruction of the function restricted to the
overlap region. We show that with this restriction, and by assuming prior
knowledge on the norm or on the variation of the function, better stability
with Hölder continuity (typical for mildly ill-posed problems) can be obtained.
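The benefit of regularized inversion for a severely ill-posed problem can be illustrated on a generic ill-conditioned system. The toy sketch below uses a Hilbert matrix as a stand-in operator (it is not the operator studied in the paper; the size, noise level, and regularization parameter are illustrative choices):

```python
import numpy as np

n = 10
# Hilbert matrix: a classic severely ill-conditioned test operator.
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

rng = np.random.default_rng(0)
x_true = np.ones(n)
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

# Naive inversion: the noise is amplified by cond(A) ~ 1e13.
x_naive = np.linalg.solve(A, y)

# Tikhonov regularization: x_alpha = argmin ||A x - y||^2 + alpha ||x||^2.
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tik - x_true))
```

The regularized solution trades a small bias for a dramatic reduction in noise amplification, which is the mechanism behind the conditional stability estimates discussed above.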
Correlation density matrices for one-dimensional quantum chains based on the density matrix renormalization group
A useful concept for finding numerically the dominant correlations of a given
ground state in an interacting quantum lattice system in an unbiased way is the
correlation density matrix. For two disjoint, separated clusters, it is defined
to be the density matrix of their union minus the direct product of their
individual density matrices and contains all correlations between the two
clusters. We show how to extract from the correlation density matrix a general
overview of the correlations as well as detailed information on the operators
carrying long-range correlations and the spatial dependence of their
correlation functions. To determine the correlation density matrix, we
calculate the ground state for a class of spinless extended Hubbard models
using the density matrix renormalization group. This numerical method is based
on matrix product states for which the correlation density matrix can be
obtained straightforwardly. In an appendix, we give a detailed tutorial
introduction to our variational matrix product state approach for ground state
calculations for one-dimensional quantum chain models. We show in detail how
matrix product states overcome the problem of large Hilbert space dimensions in
these models and describe all techniques which are needed for handling them in
practice. (50 pages, 34 figures; to be published in New Journal of Physics.)
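For a system small enough for exact diagonalization, the correlation density matrix defined above can be computed directly. The sketch below uses a 4-site spin-1/2 Heisenberg chain chosen purely for illustration (the paper treats extended Hubbard models via DMRG) and builds the correlation density matrix for the two end sites:

```python
import numpy as np

# Spin-1/2 operators.
Sx = np.array([[0, 0.5], [0.5, 0]])
Sy = np.array([[0, -0.5j], [0.5j, 0]])
Sz = np.array([[0.5, 0], [0, -0.5]])
I2 = np.eye(2)

def site_op(op, site, n=4):
    """Embed a single-site operator at position `site` in an n-site chain."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Open Heisenberg chain H = sum_i S_i . S_{i+1}.
n = 4
H = sum(site_op(S, i) @ site_op(S, i + 1)
        for i in range(n - 1) for S in (Sx, Sy, Sz))

# Exact ground state (a unique singlet for this chain).
_, vecs = np.linalg.eigh(H)
psi = vecs[:, 0].reshape(2, 2, 2, 2)    # indices s0, s1, s2, s3

# Reduced density matrices of the two end sites and of their union.
rho_A = np.einsum('abcd,ebcd->ae', psi, psi.conj())
rho_B = np.einsum('abcd,abce->de', psi, psi.conj())
rho_AB = np.einsum('abcd,ebcf->adef', psi, psi.conj()).reshape(4, 4)

# Correlation density matrix: all correlations between the two clusters.
C = rho_AB - np.kron(rho_A, rho_B)
print(np.trace(C).real, np.linalg.norm(C))   # trace is zero, norm is not
```

Its trace vanishes by construction, while its nonzero norm reflects the end-to-end spin correlations in the ground state; the matrix elements of C are exactly the connected correlation functions between operators on the two clusters.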
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
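The two-stage scheme described above (random sampling to capture the range of the matrix, then deterministic factorization of the compressed matrix) can be sketched in a few lines. The oversampling parameter p and the test dimensions below are illustrative choices:

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Rank-k SVD approximation via a randomized range finder.

    Stage A: sample the range of A with a Gaussian test matrix.
    Stage B: factor the small compressed matrix deterministically.
    """
    rng = np.random.default_rng(rng)
    G = rng.standard_normal((A.shape[1], k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ G)                     # orthonormal range basis
    B = Q.T @ A                                    # compressed (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Usage: a 500 x 300 matrix of exact rank 5 is recovered to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(err)
```

Because the test matrix almost surely spans the (here, exactly 5-dimensional) range, the compressed factorization reproduces the matrix up to floating-point error; for noisy or slowly decaying spectra, the oversampling p and optional power iterations control the accuracy.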
Asymptotic Analysis of Inpainting via Universal Shearlet Systems
Recently introduced inpainting algorithms using a combination of applied
harmonic analysis and compressed sensing have turned out to be very successful.
One key ingredient is a carefully chosen representation system which provides
(optimally) sparse approximations of the original image. Due to the common
assumption that images are typically governed by anisotropic features,
directional representation systems have often been utilized. One prominent
example of this class is shearlets, which have the additional benefit of allowing
faithful implementations. Numerical results show that shearlets significantly
outperform wavelets in inpainting tasks. One such software package,
www.shearlab.org, even offers the flexibility of using a different parameter
for each scale, which is not yet covered by shearlet theory.
In this paper, we first introduce universal shearlet systems which are
associated with an arbitrary scaling sequence, thereby modeling the previously
mentioned flexibility. In addition, this novel construction allows for a smooth
transition between wavelets and shearlets and therefore enables us to analyze
them in a uniform fashion. For a large class of such scaling sequences, we
first prove that the associated universal shearlet systems form band-limited
Parseval frames for L^2(R^2) consisting of Schwartz functions.
Secondly, we analyze the performance for inpainting of this class of universal
shearlet systems within a distributional model situation using an
ℓ^1-analysis minimization algorithm for reconstruction. Our main result in
this part states that, provided the scaling sequence is comparable to the size
of the (scale-dependent) gap, nearly-perfect inpainting is achieved at
sufficiently fine scales.
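The inpainting strategy analyzed here, reconstruction by sparsity-promoting minimization over a sparsifying system, can be illustrated in one dimension with a DCT dictionary standing in for shearlets. This is a toy sketch using iterative soft thresholding with a decaying threshold rather than the paper's exact algorithm; all sizes, sparsity levels, and the schedule are illustrative:

```python
import numpy as np

n = 128
# Orthonormal DCT-II matrix: rows are cosine atoms.
C = np.sqrt(2.0 / n) * np.cos(
    np.pi * np.outer(np.arange(n), np.arange(n) + 0.5) / n)
C[0] *= np.sqrt(0.5)

# A signal that is 2-sparse in the DCT dictionary.
c_true = np.zeros(n)
c_true[3], c_true[10] = 1.0, -0.5
x = C.T @ c_true

# Randomly mask about 20% of the samples (the "gap" to inpaint).
rng = np.random.default_rng(2)
known = rng.random(n) < 0.8

# Iterative soft thresholding with a linearly decaying threshold:
# alternate sparsification in the dictionary with data consistency.
y = np.where(known, x, 0.0)
iters = 200
lam_max = np.abs(C @ y).max()
for t in range(iters):
    c = C @ y
    lam = lam_max * (1 - t / iters)
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    y = C.T @ c
    y[known] = x[known]                                # keep known samples

err = np.linalg.norm(y - x) / np.linalg.norm(x)
print(err)   # small relative reconstruction error
```

The same mechanism underlies the shearlet result: as long as the sparsifying system resolves features finer than the gap, the missing region is filled in nearly perfectly, which is the scale-versus-gap-size trade-off quantified in the main theorem.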