Sparse Bayesian mass-mapping with uncertainties: hypothesis testing of structure
A crucial aspect of mass-mapping, via weak lensing, is quantification of the
uncertainty introduced during the reconstruction process. Properly accounting
for these errors has been largely ignored to date. We present results from a
new method that reconstructs maximum a posteriori (MAP) convergence maps by
formulating an unconstrained Bayesian inference problem with Laplace-type
ℓ1-norm sparsity-promoting priors, which we solve via convex
optimization. Approaching mass-mapping in this manner allows us to exploit
recent developments in probability concentration theory to infer theoretically
conservative uncertainties for our MAP reconstructions, without relying on
assumptions of Gaussianity. For the first time these methods allow us to
perform hypothesis testing of structure, from which it is possible to
distinguish between physical objects and artifacts of the reconstruction. Here
we present this new formalism, demonstrate the method on illustrative examples,
and then apply it to two observational datasets of the Abell 520 cluster. In
our Bayesian framework we find that neither Abell 520 dataset can conclusively
determine the physicality of individual local massive substructures at
significant confidence. However, in both cases the recovered MAP estimators
are consistent with both sets of data.
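A minimal sketch of the core computational step described above: MAP estimation under an ℓ1 (Laplace-type) prior reduces to ℓ1-regularized least squares, solvable by proximal-gradient (ISTA) iterations. The measurement operator `Phi`, data `y`, and weight `lam` below are generic placeholders, not the authors' mass-mapping pipeline:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (Laplace-type prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, step, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5||Phi x - y||^2 + lam*||x||_1.

    The minimizer is the MAP estimate under a Gaussian likelihood and an
    i.i.d. Laplace (l1) prior on x. `step` should be at most 1/L, where L
    is the largest eigenvalue of Phi^T Phi, for guaranteed convergence.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)       # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

With `Phi` the identity, the fixed point is simply the soft-thresholded data, which gives a quick sanity check of the iteration.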
Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors
Compressed Sensing suggests that the required number of samples for
reconstructing a signal can be greatly reduced if it is sparse in a known
discrete basis, yet many real-world signals are sparse in a continuous
dictionary. One example is the spectrally-sparse signal, which is composed of a
small number of spectral atoms with arbitrary frequencies on the unit interval.
In this paper we study the problem of line spectrum denoising and estimation
with an ensemble of spectrally-sparse signals composed of the same set of
continuous-valued frequencies from their partial and noisy observations. Two
approaches are developed based on atomic norm minimization and structured
covariance estimation, both of which can be solved efficiently via semidefinite
programming. The first approach aims to estimate and denoise the set of signals
from their partial and noisy observations via atomic norm minimization, and
recover the frequencies via examining the dual polynomial of the convex
program. We characterize the optimality condition of the proposed algorithm and
derive the expected convergence rate for denoising, demonstrating the benefit
of including multiple measurement vectors. The second approach aims to recover
the population covariance matrix from the partially observed sample covariance
matrix by exploiting its low-rank Toeplitz structure, without recovering the
signal ensemble. A performance guarantee is derived with a finite number of
measurement vectors. The frequencies can be recovered via conventional spectrum
estimation methods such as MUSIC from the estimated covariance matrix. Finally,
numerical examples are provided to validate the favorable performance of the
proposed algorithms, with comparisons against several existing approaches.Comment: 14 pages, 10 figure
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Deep Learning for Single Image Super-Resolution: A Brief Review
Single image super-resolution (SISR) is a notoriously challenging ill-posed
problem, which aims to obtain a high-resolution (HR) output from one of its
low-resolution (LR) versions. To solve the SISR problem, powerful deep
learning algorithms have recently been employed and achieved state-of-the-art
performance. In this survey, we review representative deep learning-based SISR
methods, and group them into two categories according to their major
contributions to two essential aspects of SISR: the exploration of efficient
neural network architectures for SISR, and the development of effective
optimization objectives for deep SISR learning. For each category, a baseline
is first established and several critical limitations of the baseline are
summarized. Then representative works on overcoming these limitations are
presented based on their original contents as well as our critical
understandings and analyses, and relevant comparisons are conducted from a
variety of perspectives. Finally, we conclude this review with some vital
current challenges and future trends in SISR leveraging deep learning
algorithms.
Comment: Accepted by IEEE Transactions on Multimedia (TMM)
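The ill-posedness of SISR can be illustrated with a toy degradation model (blur, downsample, add noise); the box-blur kernel and scale factor here are assumptions for illustration only, not any method from the survey:

```python
import numpy as np

def degrade(hr, scale=2, noise_std=0.0, seed=0):
    """Toy SISR forward model: box blur + decimation (= average pooling) + noise.

    The map is many-to-one: distinct HR images can yield the same LR
    observation, which is why recovering HR from LR is ill-posed.
    """
    h, w = hr.shape
    hr = hr[:h - h % scale, :w - w % scale]          # crop to a multiple of scale
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_std, lr.shape)
```

Adding any pattern whose per-block mean is zero to the HR image leaves the LR output unchanged, making the one-to-many nature of the inverse problem explicit.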