Shearlets and Optimally Sparse Approximations
Multivariate functions are typically governed by anisotropic features such as
edges in images or shock fronts in solutions of transport-dominated equations.
One major goal, both for compression and for efficient analysis, is the
provision of optimally sparse approximations of such functions.
Recently, cartoon-like images were introduced in 2D and 3D as a suitable model
class, and approximation properties were measured by considering the decay rate
of the error of the best N-term approximation. Shearlet systems are to date
the only representation systems that provide optimally sparse approximations
of this model class in both 2D and 3D. Moreover, in contrast to all other
directional representation systems, a theory of compactly supported shearlet
frames has been derived whose elements also satisfy this optimality benchmark.
This chapter serves as an introduction to and a survey of sparse
approximations of cartoon-like images by band-limited and compactly supported
shearlet frames, as well as a reference for the state of the art of this
research field.
Comment: in "Shearlets: Multiscale Analysis for Multivariate Data", Birkhäuser-Springer
Image registration with sparse approximations in parametric dictionaries
We examine in this paper the problem of image registration from a new
perspective, where images are given by sparse approximations in parametric
dictionaries of geometric functions. We propose a registration algorithm that
looks for an estimate of the global transformation between sparse images by
examining the set of relative geometrical transformations between the
respective features. We provide a theoretical analysis of our registration
algorithm and derive performance guarantees based on two novel properties of
redundant dictionaries, namely robust linear independence and transformation
inconsistency. We present several illustrations of and insights into the
importance of these dictionary properties, and show that common properties
such as coherence or the restricted isometry property fail to provide
sufficient information in registration problems. We finally show, with
illustrative experiments on simple visual objects and handwritten digit
images, that our algorithm outperforms competing baseline methods in terms of
transformation-invariant distance computation and classification.
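The core idea, estimating the global transformation from the set of relative transformations between pairs of atoms, can be caricatured in a few lines. The sketch below is restricted to pure 2D translations and uses a simple voting histogram; the paper treats a much richer family of geometric transformations and derives its guarantees from the dictionary properties named above. All names are illustrative:

```python
import numpy as np

def estimate_translation(atoms_a, atoms_b, bins=64):
    """Estimate a global 2D shift between two sparse images, each given by
    the (x, y) positions of its dictionary atoms."""
    # Every (atom in A, atom in B) pair votes for one relative displacement;
    # correctly matched pairs pile up on the true shift.
    votes = (atoms_b[None, :, :] - atoms_a[:, None, :]).reshape(-1, 2)
    hist, xedges, yedges = np.histogram2d(votes[:, 0], votes[:, 1], bins=bins)
    i, j = np.unravel_index(np.argmax(hist), hist.shape)
    return 0.5 * (xedges[i] + xedges[i + 1]), 0.5 * (yedges[j] + yedges[j + 1])

rng = np.random.default_rng(0)
atoms = rng.uniform(0, 100, size=(20, 2))
print(estimate_translation(atoms, atoms + np.array([7.0, -3.0])))
```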
Understanding and Comparing Scalable Gaussian Process Regression for Big Data
As a non-parametric Bayesian model that produces informative predictive
distributions, the Gaussian process (GP) has been widely used in various
fields such as regression, classification and optimization. The cubic
complexity of a standard GP, however, leads to poor scalability, which poses
challenges in the era of big data. Hence, various scalable GPs have been
developed in the literature to improve scalability while retaining desirable
prediction accuracy. This paper investigates the methodological
characteristics and performance of representative global and local scalable
GPs, including sparse approximations and local aggregations, from four main
perspectives: scalability, capability, controllability and robustness. The
numerical experiments on two toy examples and five real-world datasets with up
to 250K points offer the following findings. In terms of scalability, most of
the scalable GPs have a time complexity that is linear in the training size.
In terms of capability, the sparse approximations capture long-term spatial
correlations, while the local aggregations capture local patterns but suffer
from over-fitting in some scenarios. In terms of controllability, we could improve
the performance of sparse approximations by simply increasing the inducing
size. But this is not the case for local aggregations. In terms of robustness,
local aggregations are robust to various initializations of hyperparameters due
to the local attention mechanism. Finally, we highlight that a proper hybrid
of global and local scalable GPs may be a promising way to improve both
model capability and scalability for big data.
Comment: 25 pages, 15 figures, preprint submitted to KB
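As a concrete instance of the sparse-approximation family compared above, the following sketch implements the predictive mean of arguably the simplest inducing-point scheme, subset-of-regressors (SoR), which costs O(nm^2) for m inducing points instead of the O(n^3) of exact GP regression. The surveyed methods (e.g. FITC and variational inducing points on the global side, expert aggregations on the local side) are more refined; all names and parameter values here are illustrative:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-wise point sets."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def sor_mean(X, y, Z, Xstar, noise_var=1e-2):
    """Subset-of-regressors predictive mean with inducing inputs Z."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kzx = rbf(Z, X)
    A = noise_var * Kzz + Kzx @ Kzx.T         # only an m x m system to solve
    return rbf(Xstar, Z) @ np.linalg.solve(A, Kzx @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = np.linspace(-3, 3, 20)[:, None]           # 20 inducing points
Xs = np.linspace(-3, 3, 5)[:, None]
print(sor_mean(X, y, Z, Xs))
```

Increasing the inducing size m trades computation for accuracy, which is exactly the controllability knob discussed in the findings above.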
Flexible Multi-layer Sparse Approximations of Matrices and Applications
The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into a few sparse factors.
The approach relies on recent advances in non-convex optimization. It is first
explained and analyzed in detail, and then demonstrated experimentally on
various problems including dictionary learning for image denoising and the
approximation of large matrices arising in inverse problems.
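The underlying idea can be caricatured as follows: approximate A ≈ S1 S2 with sparse factors, so that applying A to a vector costs only the total number of non-zeros. The sketch below runs plain alternating projected gradient descent with a keep-k hard-thresholding projection on two factors; the paper's algorithm handles several factors hierarchically and relies on proximal non-convex optimization with convergence guarantees. Sizes, sparsity levels and the step size are illustrative:

```python
import numpy as np

def hard_threshold(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest."""
    out = np.zeros_like(M)
    idx = np.argsort(np.abs(M), axis=None)[-k:]
    out.flat[idx] = M.flat[idx]
    return out

def sparse_two_factor(A, rank, k, iters=500, step=0.05):
    """Alternating projected gradient descent on ||S1 @ S2 - A||_F^2."""
    rng = np.random.default_rng(0)
    S1 = 0.1 * rng.standard_normal((A.shape[0], rank))
    S2 = 0.1 * rng.standard_normal((rank, A.shape[1]))
    for _ in range(iters):
        R = S1 @ S2 - A
        S1 = hard_threshold(S1 - step * R @ S2.T, k)
        R = S1 @ S2 - A                  # refresh residual after updating S1
        S2 = hard_threshold(S2 - step * S1.T @ R, k)
    return S1, S2

# Toy target that truly is a product of two k-sparse factors.
rng = np.random.default_rng(1)
G1 = hard_threshold(rng.standard_normal((32, 8)), 40)
G2 = hard_threshold(rng.standard_normal((8, 32)), 40)
A = G1 @ G2
S1, S2 = sparse_two_factor(A, rank=8, k=40)
print(np.linalg.norm(S1 @ S2 - A) / np.linalg.norm(A))
```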
Sparse approximations of protein structure from noisy random projections
Single-particle electron microscopy is a modern technique that biophysicists
employ to learn the structure of proteins. It yields data that consist of noisy
random projections of the protein structure in random directions, with the
added complication that the projection angles cannot be observed. In order to
reconstruct a three-dimensional model, the projection directions need to be
estimated by use of an ad-hoc starting estimate of the unknown particle. In
this paper we propose a methodology that does not rely on knowledge of the
projection angles, to construct an objective data-dependent low-resolution
approximation of the unknown structure that can serve as such a starting
estimate. The approach assumes that the protein admits a suitable sparse
representation, and employs discrete ℓ1-regularization (LASSO) as well as
notions from shape theory to tackle the peculiar challenges involved in the
associated inverse problem. We illustrate the approach by application to the
reconstruction of an E. coli protein component called the Klenow fragment.Comment: Published in at http://dx.doi.org/10.1214/11-AOAS479 the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org
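Stripped of the unknown projection geometry, the ℓ1-regularized core of such a reconstruction is a standard LASSO problem: recover a sparse coefficient vector from noisy linear measurements. A minimal scikit-learn sketch on synthetic random projections (the measurement matrix, sparsity level and regularization weight are illustrative, not the paper's):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_meas, n_coef, n_active = 80, 200, 5
A = rng.standard_normal((n_meas, n_coef)) / np.sqrt(n_meas)  # random projections
x_true = np.zeros(n_coef)
x_true[rng.choice(n_coef, n_active, replace=False)] = 1.0    # sparse signal
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)          # noisy measurements

x_hat = Lasso(alpha=0.01, max_iter=10_000).fit(A, y).coef_
print("true support:     ", np.sort(np.flatnonzero(x_true)))
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```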
Sparse image representation with encryption
In this thesis we present an overview of sparse approximations of grey-level images. The sparse representations are realized by classic Matching Pursuit (MP)-based greedy selection strategies. One such technique, termed Orthogonal Matching Pursuit (OMP), is shown to be suitable for producing sparse approximations of images, provided they are processed in small blocks. When the blocks are enlarged, the proposed Self Projected Matching Pursuit (SPMP) algorithm successfully renders results equivalent to those of OMP. A simple coding algorithm is then proposed to store these sparse approximations; under certain conditions, this is shown to be competitive with the JPEG2000 image compression standard. An application termed image folding, which partially secures the approximated images, is then proposed. This is extended to produce a self-contained folded image containing all the information required to perform image recovery. Finally, a modified OMP selection technique is applied to produce sparse approximations of Red Green Blue (RGB) images. These RGB approximations are then folded with the self-contained approach.
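A minimal sketch of the OMP selection step described above, applied to one flattened image block with a random unit-norm dictionary standing in for a learned or analytic one (all sizes and names are illustrative):

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the residual, then refit coefficients by least squares."""
    residual, support, coefs = y.copy(), [], None
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coefs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coefs
    return support, coefs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))      # overcomplete dictionary for 8x8 blocks
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
block = rng.standard_normal(64)         # one flattened 8x8 block
support, coefs = omp(D, block, n_atoms=6)
print(len(support), np.linalg.norm(block - D[:, support] @ coefs))
```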
Regression with Sparse Approximations of Data
Publication in the conference proceedings of EUSIPCO, Bucharest, Romania, 201