Asymptotic Analysis of Inpainting via Universal Shearlet Systems
Recently introduced inpainting algorithms using a combination of applied
harmonic analysis and compressed sensing have turned out to be very successful.
One key ingredient is a carefully chosen representation system which provides
(optimally) sparse approximations of the original image. Due to the common
assumption that images are typically governed by anisotropic features,
directional representation systems have often been utilized. One prominent
example of this class are shearlets, which have the additional benefit of
allowing faithful implementations. Numerical results show that shearlets
significantly outperform wavelets in inpainting tasks. One of these software
packages, www.shearlab.org, even offers the flexibility of using a different
parameter for each scale, which is not yet covered by shearlet theory.
In this paper, we first introduce universal shearlet systems which are
associated with an arbitrary scaling sequence, thereby modeling the previously
mentioned flexibility. In addition, this novel construction allows for a smooth
transition between wavelets and shearlets and therefore enables us to analyze
them in a uniform fashion. For a large class of such scaling sequences, we
first prove that the associated universal shearlet systems form band-limited
Parseval frames for L^2(R^2) consisting of Schwartz functions.
Secondly, we analyze the performance for inpainting of this class of universal
shearlet systems within a distributional model situation using an
l_1-analysis minimization algorithm for reconstruction. Our main result in
this part states that, provided the scaling sequence is comparable to the size
of the (scale-dependent) gap, nearly-perfect inpainting is achieved at
sufficiently fine scales.
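The l_1-analysis minimization at the heart of this approach can be illustrated in a much simpler one-dimensional setting. The sketch below is an illustrative stand-in, not the paper's shearlet-based method: it inpaints a masked piecewise-constant signal by minimizing the l_1 norm of a first-difference analysis operator (playing the role of the universal shearlet system), cast as a linear program.

```python
# Toy l_1-analysis inpainting: min ||W x||_1  s.t.  x agrees with the
# observed samples.  W is a first-difference analysis operator, so the
# program fills the gap while keeping the total variation minimal.
# (Illustrative stand-in for the shearlet-based analysis operator.)
import numpy as np
from scipy.optimize import linprog

n = 20
y = np.concatenate([np.zeros(10), np.ones(10)])   # piecewise-constant signal
observed = np.ones(n, dtype=bool)
observed[8:12] = False                            # the masked "gap"

W = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # first differences, (n-1) x n
m = n - 1

# Variables z = [x, t]; minimize sum(t)  s.t.  -t <= W x <= t,  x_obs = y_obs.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[W, -np.eye(m)], [-W, -np.eye(m)]])
b_ub = np.zeros(2 * m)
A_eq = np.hstack([np.eye(n)[observed], np.zeros((observed.sum(), m))])
b_eq = y[observed]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_hat = res.x[:n]
print("total variation of recovery:", np.abs(np.diff(x_hat)).sum())
```

Any monotone fill across the gap attains the minimal total variation of 1, so the program returns one of the l_1-analysis minimizers; the asymptotic results above quantify when, at fine scales, such minimizers are close to the original image.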
Deriving RIP sensing matrices for sparsifying dictionaries
Compressive sensing involves the inversion of a mapping y = Phi D x, where Phi
is a sensing matrix, and D is a sparsifying
dictionary. The restricted isometry property is a powerful sufficient condition
for the inversion that guarantees the recovery of high-dimensional sparse
vectors from their low-dimensional embedding into a Euclidean space via convex
optimization. However, determining whether Phi D has the restricted isometry
property for a given sparsifying dictionary is an NP-hard problem, hampering
the application of compressive sensing. This paper provides a novel approach to
resolving this problem. We demonstrate that it is possible to derive a sensing
matrix for any sparsifying dictionary with a high probability of retaining the
restricted isometry property. In numerical experiments with sensing matrices
for K-SVD, Parseval K-SVD, and wavelets, our recovery performance was
comparable to that of benchmarks obtained using Gaussian and Bernoulli random
sensing matrices for sparse vectors.
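Although verifying RIP itself is NP-hard, the closely related mutual coherence of the effective dictionary Phi D is cheap to compute and already yields (weaker) recovery guarantees. A minimal sketch, assuming a Gaussian Phi and a random unit-norm dictionary D in place of the K-SVD and wavelet dictionaries used in the experiments:

```python
# Mutual coherence of the effective dictionary A = Phi D: the largest
# absolute inner product between distinct normalized columns.  Low
# coherence gives a (weaker than RIP) sufficient condition for sparse
# recovery: k-sparse vectors are recoverable if k < (1 + 1/mu) / 2.
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 32, 64, 128                            # measurements, dim, atoms

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
D = rng.standard_normal((n, N))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms

A = Phi @ D
A = A / np.linalg.norm(A, axis=0)                # normalize effective atoms
G = np.abs(A.T @ A)                              # absolute Gram matrix
np.fill_diagonal(G, 0.0)
mu = G.max()

print(f"coherence mu = {mu:.3f}, guaranteed sparsity k < {(1 + 1/mu) / 2:.2f}")
```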
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less well
known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is the most relevant, we
investigate some conventional constraints from the model adaptation field and
use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground truth analysis operator when provided with a clean training
set of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.Comment: 29 pages, 13 figures, accepted to be published in TS
Sparse Recovery Analysis of Preconditioned Frames via Convex Optimization
Orthogonal Matching Pursuit and Basis Pursuit are popular reconstruction
algorithms for recovery of sparse signals. The exact recovery property of both
the methods has a relation with the coherence of the underlying redundant
dictionary, i.e. a frame. A frame with low coherence provides better guarantees
for exact recovery. An equivalent formulation of the associated linear system
is obtained via premultiplication by a non-singular matrix. In view of bounds
that guarantee sparse recovery, it is very useful to generate the
preconditioner in such a way that the preconditioned frame has lower coherence
than the original. In this paper, we discuss the impact of
preconditioning on sparse recovery. Further, we formulate a convex optimization
problem for designing the preconditioner that yields a frame with improved
coherence. In addition to reducing coherence, we focus on designing
well-conditioned frames and numerically study the relationship between the condition
number of the preconditioner and the coherence of the new frame. Alongside
theoretical justifications, we demonstrate through simulations the efficacy of
the preconditioner in reducing coherence as well as recovering sparse signals.
Comment: 9 pages, 5 figures
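One natural baseline to compare the optimized preconditioner against is the canonical choice P = (F F^T)^{-1/2}, which turns any frame F into a Parseval frame; the coherence of the preconditioned frame P F can then be measured directly. A small sketch under these assumptions (random F, canonical P rather than the paper's convex design):

```python
# Canonical preconditioning of a frame: P = (F F^T)^{-1/2} makes P F a
# Parseval frame, i.e. (P F)(P F)^T = I.  We compare the mutual coherence
# before and after preconditioning.  (A simple baseline; the paper designs
# the preconditioner via convex optimization instead.)
import numpy as np

def coherence(F):
    A = F / np.linalg.norm(F, axis=0)       # normalize columns
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(2)
n, N = 16, 40
F = rng.standard_normal((n, N))             # a redundant frame for R^n

S = F @ F.T                                 # frame operator
w, V = np.linalg.eigh(S)
P = V @ np.diag(w ** -0.5) @ V.T            # inverse square root of S

PF = P @ F
print("coherence before:", coherence(F))
print("coherence after: ", coherence(PF))
print("Parseval check:  ", np.linalg.norm(PF @ PF.T - np.eye(n)))
```

Parseval-ization improves the conditioning of the frame (condition number 1 by construction), though it need not minimize coherence, which is exactly the gap the optimization-based design targets.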
Analysis of Inpainting via Clustered Sparsity and Microlocal Analysis
Recently, compressed sensing techniques in combination with both wavelet and
directional representation systems have been very effectively applied to the
problem of image inpainting. However, a mathematical analysis of these
techniques which reveals the underlying geometrical content is completely
missing. In this paper, we provide the first comprehensive analysis in the
continuum domain utilizing the novel concept of clustered sparsity, which
besides leading to asymptotic error bounds also makes the superior behavior of
directional representation systems over wavelets precise. First, we propose an
abstract model for problems of data recovery and derive error bounds for two
different recovery schemes, namely l_1 minimization and thresholding. Second,
we set up a particular microlocal model for an image governed by edges inspired
by seismic data as well as a particular mask to model the missing data, namely
a linear singularity masked by a horizontal strip. Applying the abstract
estimate in the case of wavelets and of shearlets we prove that -- provided the
size of the missing part is asymptotically comparable to the size of the
analyzing functions -- asymptotically precise inpainting can be obtained for
this model.
Finally, we show that shearlets can fill strictly larger gaps than wavelets in
this model.
Comment: 49 pages, 9 figures
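The thresholding recovery scheme analyzed alongside l_1 minimization has a compact discrete analogue. The sketch below is a one-dimensional illustration with a DCT basis standing in for the paper's wavelet/shearlet systems, and with the sparsity level assumed known; it fills a masked strip of samples by iteratively thresholding in the transform domain:

```python
# Iterative thresholding inpainting: alternate (i) keep the K largest
# transform coefficients of the current estimate and (ii) reimpose the
# known samples.  The DCT stands in for the paper's wavelet/shearlet
# frames, and K is assumed known -- both are illustrative simplifications.
import numpy as np
from scipy.fft import dct, idct

n, K = 64, 2
c_true = np.zeros(n)
c_true[3], c_true[7] = 1.0, -0.6            # signal sparse in the DCT domain
y = idct(c_true, norm="ortho")

mask = np.ones(n, dtype=bool)
mask[30:34] = False                         # the masked strip

x = np.where(mask, y, 0.0)                  # start from the zero-filled signal
for _ in range(200):
    c = dct(x, norm="ortho")
    keep = np.argsort(np.abs(c))[-K:]       # hard-threshold: K largest coeffs
    c_thr = np.zeros(n)
    c_thr[keep] = c[keep]
    x_hat = idct(c_thr, norm="ortho")
    x = np.where(mask, y, x_hat)            # reimpose the observed samples

gap_err = np.linalg.norm((x - y)[~mask])
print("relative error on the gap:", gap_err / np.linalg.norm(y[~mask]))
```

The iteration converges here because the analyzing functions (DCT atoms) carry little of their energy inside the strip, which is the discrete shadow of the gap-size condition in the continuum analysis above.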